Doctors and patients need to be able to trust reports of medical research because these are used to help them make decisions about treatments. It is therefore important to prevent false or misleading research. Problems with research include various types of misconduct such as altering results (falsification), making up results (fabrication) or copying other people's work (plagiarism). Good systems that produce reliable research are said to show 'research integrity'. We looked at activities, such as training, that are designed to reduce research misconduct and encourage integrity, and we brought together the evidence from studies that measured the effects of these activities on researchers' attitudes, knowledge and behaviour.
Some studies showed that training had positive effects on researchers' attitudes towards plagiarism. Practical training, such as writing exercises or using computer programs that can detect plagiarism, sometimes decreased plagiarism by students, but not all studies showed positive effects. We did not find any studies on fabrication or falsification. Two studies showed that the way in which journals ask authors for details about who did each part of a study can affect their responses.
Many of the studies included in this review had problems, such as small sample sizes, or used methods that might produce biased results. The training methods tested in the studies (which included online courses, lectures and discussion groups) were often not clearly described. Most studies tested effects over short time periods, and many involved university students rather than active researchers.
In summary, the available evidence is of very low quality, so the effect of any intervention for preventing misconduct and promoting integrity in research and publication is uncertain. However, practical training about how to avoid plagiarism may be effective in reducing plagiarism by students, although we do not know whether it has long-term effects.
The evidence base relating to interventions to improve research integrity is incomplete, and the studies that have been done are heterogeneous and unsuitable for meta-analysis; their applicability to other settings and populations is uncertain. Many studies had a high risk of bias because of the choice of study design, and interventions were often inadequately reported. Even when randomized designs were used, findings were difficult to generalize. Because of the very low quality of the evidence, the effects of training in responsible conduct of research on reducing research misconduct are uncertain. Low quality evidence indicates that training about plagiarism, especially if it involves practical exercises and use of text-matching software, may reduce the occurrence of plagiarism.
Improper practices and unprofessional conduct in clinical research have been shown to waste a significant portion of healthcare funds and harm public health.
Our objective was to evaluate the effectiveness of educational or policy interventions in research integrity or responsible conduct of research on the behaviour and attitudes of researchers in health and other research areas.
We searched the CENTRAL, MEDLINE, LILACS and CINAHL health research bibliographic databases, as well as the Academic Search Complete, AGRICOLA, GeoRef, PsycINFO, ERIC, SCOPUS and Web of Science databases. We performed the last search on 15 April 2015, limiting it to articles published between 1990 and 2014, inclusive. We also searched conference proceedings and abstracts from research integrity conferences and specialized websites, and we handsearched 14 journals that regularly publish research integrity research.
We included studies that measured the effects of one or more interventions, i.e. any direct or indirect procedure that may have an impact on research integrity and responsible conduct of research in its broadest sense, where participants were any stakeholders in research and publication processes, from students to policy makers. We included randomized and non-randomized controlled trials, such as controlled before-and-after studies, with comparisons of outcomes in the intervention versus non-intervention group or before versus after the intervention. Studies without a control group were not included in the review.
We used the standard methodological procedures expected by Cochrane. To assess the risk of bias in non-randomized studies, we used a modified Cochrane tool that retained four of the six original domains (blinding, incomplete outcome data, selective outcome reporting, other sources of bias) and added two domains (comparability of groups and confounding factors). We categorized our primary outcome into the following levels: 1) organizational change attributable to intervention, 2) behavioural change, 3) acquisition of knowledge/skills and 4) modification of attitudes/perceptions. The secondary outcome was participants' reaction to the intervention.
Thirty-one studies involving 9571 participants, described in 33 articles, met the inclusion criteria. All were published in English. Fifteen studies were randomized controlled trials, nine were controlled before-and-after studies, four were non-equivalent controlled studies with a historical control, one was a non-equivalent controlled study with a post-test only and two were non-equivalent controlled studies with pre- and post-test findings for the intervention group and post-test for the control group. Twenty-one studies assessed the effects of interventions related to plagiarism and 10 studies assessed interventions in research integrity/ethics. Participants included undergraduates, postgraduates and academics from a range of research disciplines and countries, and the studies assessed different types of outcomes.
We judged most of the included randomized controlled trials to have a high risk of bias in at least one of the assessed domains, and in the case of non-randomized trials there were no attempts to alleviate the potential biases inherent in the non-randomized designs.
We identified a range of interventions aimed at reducing research misconduct. Most interventions involved some kind of training, but methods and content varied greatly and included face-to-face and online lectures, interactive online modules, discussion groups, homework and practical exercises. Most studies did not use standardized or validated outcome measures, and it was impossible to synthesize findings from studies with such diverse interventions, outcomes and participants.

Overall, there is very low quality evidence that various methods of training in research integrity had some effects on participants' attitudes to ethical issues but minimal (or short-lived) effects on their knowledge. Training about plagiarism and paraphrasing had varying effects on participants' attitudes towards plagiarism and their confidence in avoiding it, but training that included practical exercises appeared to be more effective. Training on plagiarism had inconsistent effects on participants' knowledge about and ability to recognize plagiarism. Active training, particularly if it involved practical exercises or the use of text-matching software, generally decreased the occurrence of plagiarism, although results were not consistent.

The design of a journal's author contribution form affected the truthfulness of the information supplied about individuals' contributions and the proportion of listed contributors who met authorship criteria. We identified no studies testing interventions for outcomes at the organizational level. The numbers of events and the magnitude of intervention effects were generally small, so the evidence is likely to be imprecise. No adverse effects were reported.