- Accessibility of clinics
- Incentives to continue
For a comprehensive collection, see catalogofbias.org.
Here are some noteworthy examples of study bias from the literature: An example of information bias occurred in 1998, when an alleged association between the measles, mumps, and rubella (MMR) vaccine and autism was reported. Recall bias (a subtype of information bias) emerged because parents of autistic children recalled the onset of autism after an MMR vaccination more often than parents of similar children who were diagnosed before the media coverage of that controversial and since-retracted study [51]. A study from 2001 showed better survival for Academy Award-winning actors, but this was due to immortal time bias, which favors the treatment or exposure group [52,53]. Another study systematically investigated self-reports of musculoskeletal symptoms and found information bias: participants who spent little time at the computer overestimated their computer usage, while participants who spent a lot of time at the computer underestimated it [54].
Information bias can be mitigated by using objective rather than subjective measurements. Standard operating procedures (SOPs) and electronic lab notebooks additionally help researchers follow well-designed protocols for data collection and handling [55]. Even when bias cannot be fully mitigated, complete descriptions of data and methods at least allow readers to assess the risk of bias.
Rule 6: avoid questionable research practices.
Questionable research practices (QRPs) can lead to exaggerated findings and false conclusions and thus to irreproducible research. Often, QRPs are used with no bad intent. This becomes evident when methods sections explicitly describe such procedures, for example, increasing the number of samples until statistical significance in favor of the hypothesis is reached. It is therefore important that researchers know about QRPs so they can recognize and avoid them.
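The practice of adding samples until significance is reached ("optional stopping") can be made concrete with a small simulation. The stdlib-only Python sketch below uses hypothetical sample sizes and batch sizes: it tests a true null hypothesis but checks the p-value after every batch of observations and stops as soon as the result looks significant, which inflates the false positive rate well above the nominal 5%.

```python
import math
import random

def z_test_p(sample):
    """Two-sided p-value for H0: mean = 0, with known standard deviation 1."""
    n = len(sample)
    z = (sum(sample) / n) * math.sqrt(n)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def run_study(n_start=10, n_max=100, step=10):
    """Collect data under a true null, but peek after every batch and
    stop as soon as the result looks 'significant'."""
    data = [random.gauss(0, 1) for _ in range(n_start)]
    while True:
        if z_test_p(data) < 0.05:
            return True          # false positive: there is no real effect
        if len(data) >= n_max:
            return False
        data.extend(random.gauss(0, 1) for _ in range(step))

random.seed(1)
trials = 2000
false_positives = sum(run_study() for _ in range(trials)) / trials
print(f"False positive rate with optional stopping: {false_positives:.1%}")
```

With 10 looks at the data, the rate of spurious "discoveries" is roughly 3 to 4 times the nominal 5%; deciding the sample size in advance restores the advertised error rate.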
Several QRPs have been described [56,57]. Among them are low statistical power, pseudoreplication, repeated inspection of data, p-hacking [58], selective reporting, and hypothesizing after the results are known (HARKing).
The first 2 QRPs, low statistical power and pseudoreplication, can be prevented by proper planning and design of studies, including sample size calculation and appropriate statistical methodology that avoids treating data as independent when in fact they are not. Statistical power is not the same as reproducibility, but it is a precondition for it: a lack of power can result in false negative as well as false positive findings (see Rule 3).
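For illustration, a minimal sample size calculation for comparing two group means can be written in a few lines using the standard normal approximation (a stdlib-only sketch, not a substitute for consulting a statistician; the design and effect sizes are illustrative):

```python
import math

def z_quantile(p):
    """Standard normal quantile via bisection (stdlib only)."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if 0.5 * (1 + math.erf(mid / math.sqrt(2))) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for comparing two means;
    effect_size is the mean difference in units of the standard deviation."""
    z_alpha = z_quantile(1 - alpha / 2)
    z_power = z_quantile(power)
    return math.ceil(2 * (z_alpha + z_power) ** 2 / effect_size ** 2)

print(n_per_group(0.5))  # medium effect: 63 per group
print(n_per_group(0.2))  # small effect: 393 per group
```

Note how quickly the required sample size grows as the expected effect shrinks; this is why small studies of small effects are so often underpowered.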
In fact, many QRPs can be avoided with a study protocol and statistical analysis plan. Preregistration, as described in Rule 2, is considered best practice for this purpose. However, many of these issues are also rooted in institutional incentives and rewards. Both funding and promotion are often tied to the quantity rather than the quality of research output. Universities still offer few or no rewards for writing and registering protocols, sharing data, publishing negative findings, and conducting replication studies. Thus, a wider “culture change” is needed.
It would help if more researchers were familiar with correct interpretations and possible misinterpretations of statistical tests, p-values, confidence intervals, and statistical power [59,60]. A statistically significant p-value does not necessarily mean that there is a clinically or biologically relevant effect. In particular, the traditional dichotomization into statistically significant (p < 0.05) versus statistically nonsignificant (p ≥ 0.05) results is seldom appropriate, can lead to cherry-picking of results, and may eventually corrupt science [61]. We instead recommend reporting exact p-values and interpreting them in a graded way in terms of the compatibility of the null hypothesis with the data [62,63]. Moreover, a p-value around 0.05 (e.g., 0.047 or 0.055) provides little information, as is best illustrated by the associated replication power: the probability that a hypothetical replication study of the same design will lead to a statistically significant result is only 50% [64], and it is even lower in the presence of publication bias and regression to the mean (the phenomenon that effect estimates in replication studies are often smaller than those in the original study) [65]. Claims of novel discoveries should therefore be based on a smaller p-value threshold (e.g., p < 0.005) [66], although the appropriate threshold depends on the discipline (genome-wide screenings or studies in particle physics often apply much lower thresholds).
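The 50% replication-power figure can be checked directly under a simple normal model, assuming (optimistically) that the true effect equals the one originally observed; the sketch below also shows how a stricter threshold such as p < 0.005 translates into higher replication power:

```python
import math

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def norm_quantile(p):
    """Standard normal quantile via bisection (stdlib only)."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def replication_power(p_original, alpha=0.05):
    """Power of an identical replication study, assuming the true effect
    equals the originally observed one (ignores the negligible opposite tail)."""
    z_obs = norm_quantile(1 - p_original / 2)   # z-value behind the original p
    z_crit = norm_quantile(1 - alpha / 2)       # significance threshold
    return 1 - norm_cdf(z_crit - z_obs)

print(f"p = 0.05  -> replication power {replication_power(0.05):.2f}")   # 0.50
print(f"p = 0.005 -> replication power {replication_power(0.005):.2f}")
```

An original result at exactly p = 0.05 gives a coin-flip chance of a significant replication, whereas p = 0.005 corresponds to roughly 80% replication power under the same assumptions.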
Generally, there is often too much emphasis on p-values. A statistical index such as the p-value is just the final product of an analysis, the tip of the iceberg [67]. Statistical analyses often involve many complex stages, from data processing, cleaning, and transformation to handling missing data, modeling, and statistical inference. Errors and pitfalls can creep in at any stage, and even a tiny error can have a big impact on the result [68]. Also, when many hypothesis tests are conducted (multiple testing), false positive rates may need to be controlled to protect against wrong conclusions, although adjustments for multiple testing are debated [69–71].
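A small simulation illustrates why multiple testing needs attention (a stdlib-only sketch; the number of tests and the sample size are arbitrary): when 1,000 true null hypotheses are tested at the 5% level, dozens come out "significant" by chance alone, while a simple Bonferroni correction removes almost all of them.

```python
import math
import random

def z_test_p(sample):
    """Two-sided p-value for H0: mean = 0, with known standard deviation 1."""
    n = len(sample)
    z = (sum(sample) / n) * math.sqrt(n)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(7)
m = 1000                                   # number of hypothesis tests
pvals = [z_test_p([random.gauss(0, 1) for _ in range(30)])  # no real effects
         for _ in range(m)]

naive = sum(p < 0.05 for p in pvals)       # uncorrected threshold
bonferroni = sum(p < 0.05 / m for p in pvals)
print(f"'Significant' results out of {m} true nulls: "
      f"{naive} uncorrected, {bonferroni} after Bonferroni")
```

Bonferroni is the simplest and most conservative adjustment; less strict procedures such as false-discovery-rate control are common in large-scale screening settings.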
Thus, a p-value alone is not a measure of how credible a scientific finding is [72]. Instead, the quality of the research must be considered, including the study design, the quality of the measurement, and the validity of the assumptions underlying the data analysis [60,73]. Frameworks exist that help to systematically and transparently assess the certainty of evidence; the most established and widely used is the Grading of Recommendations, Assessment, Development and Evaluations (GRADE; www.gradeworkinggroup.org) [74].
Training in basic statistics, statistical programming, and reproducible analyses, as well as better involvement of data professionals in academia, is necessary. University departments sometimes have statisticians who can support researchers. Importantly, statisticians need to be involved early in the process and on an equal footing, not just at the end of a project to perform the final data analysis.
In reality, science often lacks transparency. Open science makes the process of producing evidence and claims transparent and accessible to others [75]. Several universities and research funders have already implemented open science roadmaps to advocate free and public science as well as open access to scientific knowledge, with the aim of strengthening the credibility of research. Open research allows more eyes to see and critique it, a principle similar to Linus’s law in software development: given enough people testing a piece of software, most bugs will be discovered.
As science often progresses incrementally, writing and sharing a study protocol and making data and methods readily available is crucial to facilitate knowledge building. The Open Science Framework (osf.io) is a free and open-source project management tool that supports researchers throughout the entire project life cycle. OSF enables preregistration of study protocols and sharing of documents, data, analysis code, supplementary materials, and preprints.
To facilitate reproducibility, a research paper can link to data and analysis code deposited on OSF. Computational notebooks that unite data processing, data transformations, statistical analyses, figures, and tables in a single document are now readily available (e.g., R Markdown, Jupyter); see also the 10 simple rules for reproducible computational research [76]. Making both data and code open thus minimizes waste of funding resources and accelerates science.
Open science can also advance researchers’ careers, especially for early-career researchers. The increased visibility, retrievability, and citations of datasets can all help with career building [ 77 ]. Therefore, institutions should provide necessary training, and hiring committees and journals should align their core values with open science, to attract researchers who aim for transparent and credible research [ 78 ].
Rule 9: report all findings.
Publication bias occurs when the outcome of a study influences the decision whether to publish it. Researchers, reviewers, and publishers often consider nonsignificant study results uninteresting or not worth publishing. As a consequence, outcomes and analyses are only selectively reported in the literature [79], a phenomenon also known as the file drawer effect [80].
The extent of publication bias in the literature is illustrated by the overwhelming frequency of statistically significant findings [81]. A study that extracted p-values from MEDLINE and PubMed Central showed that 96% of the records reported at least 1 statistically significant p-value [82], which seems implausible in the real world. Another study plotted the distribution of more than 1 million z-values from MEDLINE, revealing a huge gap between −2 and 2 [83]. Positive studies (i.e., statistically significant, perceived as striking, or showing a beneficial effect) were 4 times more likely to be published than negative studies [84].
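The distorting effect of publishing only “positive” studies can be demonstrated with a simple simulation (the effect size, sample size, and publication rule below are hypothetical; publication is modeled as requiring p < 0.05): when only significant studies enter the literature, the average published effect estimate is far larger than the true effect.

```python
import math
import random

random.seed(0)
true_effect, n, studies = 0.2, 30, 5000    # hypothetical scenario
all_estimates, published = [], []

for _ in range(studies):
    sample = [random.gauss(true_effect, 1) for _ in range(n)]
    est = sum(sample) / n                  # estimated effect in this study
    all_estimates.append(est)
    if abs(est) * math.sqrt(n) > 1.96:     # only 'significant' studies published
        published.append(est)

mean_all = sum(all_estimates) / len(all_estimates)
mean_pub = sum(published) / len(published)
print(f"true effect {true_effect}, mean over all studies {mean_all:.2f}, "
      f"mean over published studies {mean_pub:.2f}")
```

The published literature here roughly doubles the true effect, which also explains the regression-to-the-mean pattern in replication studies mentioned above: replications of such inflated findings tend to produce smaller estimates.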
Often a statistically nonsignificant result is interpreted as a “null” finding. But a nonsignificant finding does not necessarily mean a null effect; absence of evidence is not evidence of absence [85]. An individual study may be underpowered, resulting in a nonsignificant finding, yet the cumulative evidence from multiple studies may well provide sufficient evidence in a meta-analysis. Moreover, a confidence interval that contains the null value often also contains non-null values of high practical importance. Only if all the values inside the interval are deemed unimportant from a practical perspective may it be fair to describe a result as a null finding [61]. We should thus never report “no difference” or “no association” just because a p-value is larger than 0.05 or, equivalently, because a confidence interval includes the “null” [61].
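A small numerical example makes this concrete (the blood-pressure figures are hypothetical): an underpowered trial observes a 4.0 mmHg mean reduction with a standard error of 2.5 mmHg. The result is nonsignificant, yet the confidence interval is compatible with clinically relevant effects.

```python
import math

# Hypothetical trial result: mean difference and its standard error (mmHg)
mean_diff, se = 4.0, 2.5

z = mean_diff / se
p = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))   # two-sided p-value
lo, hi = mean_diff - 1.96 * se, mean_diff + 1.96 * se  # 95% confidence interval

print(f"p = {p:.2f}")                        # 0.11: 'nonsignificant'
print(f"95% CI: ({lo:.1f}, {hi:.1f}) mmHg")  # contains 0, but also reductions
                                             # of up to ~9 mmHg
```

Describing this as “no effect” would be misleading: the data are compatible with anything from a negligible change to a sizeable benefit, which is exactly why such results should feed into meta-analyses rather than disappear into the file drawer.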
On the other hand, studies sometimes report statistically nonsignificant results with “spin” to claim that the experimental treatment is beneficial, often by focusing their conclusions on statistically significant differences in secondary outcomes despite a statistically nonsignificant difference for the primary outcome [86,87].
Findings that are not published have a tremendous impact on the research ecosystem: they distort our knowledge of the scientific landscape, perpetuate misconceptions, and jeopardize both researchers’ judgment and public trust in science. In clinical research, publication bias can mislead care decisions and harm patients, for example, when treatments appear useful even though unpublished studies, unknown to physicians, found only minimal or no benefits [88]. Moreover, publication bias directly affects the formulation and proliferation of scientific theories, which are taught to students and early-career researchers, thereby perpetuating biased research from the core. Modeling studies have shown that unless a sufficient proportion of negative studies is published, a false claim can become an accepted fact [89], and that false positive rates influence the trustworthiness of a given field [90].
In sum, negative findings are undervalued. They need to be more consistently reported at the study level or systematically investigated at the systematic review level. Researchers have their share of the responsibility, but there is clearly a lack of incentives from promotion and tenure committees, journals, and funders.
Study reports need to faithfully describe the aim of the study and what was done, including potential deviations from the original protocol, as well as what was found. Yet, there is ample evidence of discrepancies between protocols and research reports, and of insufficient quality of reporting [79,91–95]. Reporting deficiencies threaten our ability to clearly communicate findings, replicate studies, make informed decisions, and build on existing evidence, wasting the time and resources invested in the research [96].
Reporting guidelines aim to provide the minimum information needed on key design features and analysis decisions, ensuring that findings can be adequately used and studies replicated. In 2008, the Enhancing the QUAlity and Transparency Of Health Research (EQUATOR) network was initiated to provide reporting guidelines for a variety of study designs, along with guidelines for education and training on how to enhance quality and transparency of health research. Currently, there are 468 reporting guidelines listed in the network; the most prominent are shown in Table 2. Furthermore, following the ICMJE recommendations, medical journals are increasingly endorsing reporting guidelines [97], in some cases making it mandatory to submit the appropriate reporting checklist along with the manuscript.
| Guideline name | Study type |
|---|---|
| ARRIVE | Animal experiments |
| CONSORT | Randomized trials |
| STROBE | Observational studies |
| PRISMA | Systematic reviews |
| SPIRIT | Study protocols |
| STARD/TRIPOD | Diagnostic/prognostic studies |
The EQUATOR Network is a library of more than 400 reporting guidelines in health research (www.equator-network.org).
The use of reporting guidelines and their endorsement by journals has had a positive impact on the quality and transparency of research reporting, but improvement is still needed to maximize the value of research [98,99].
Originally, this paper targeted early-career researchers; however, during the development of the rules, it became clear that the recommendations can serve all researchers irrespective of seniority. We focused on practical guidelines for planning, conducting, and reporting research. Others have aligned GRP with similar topics [100,101]. Even though we provide 10 simple rules, the word “simple” should not be taken lightly. Putting the rules into practice usually requires effort and time, especially at the beginning of a research project. However, time can also be recouped, for example, when certain choices can be justified to reviewers by providing a study protocol, or when data can be quickly reanalyzed using computational notebooks and dynamic reports.
Researchers have field-specific research skills but are sometimes unaware of useful best practices from other fields. Universities should offer cross-disciplinary GRP courses across faculties to train the next generation of scientists. Such courses are an important building block for improving the reproducibility of science.
This article was written alongside the Good Research Practice (GRP) courses at the University of Zurich provided by the Center for Reproducible Science (www.crs.uzh.ch). All materials from the course are available at https://osf.io/t9rqm/ . We appreciated the discussion, development, and refinement of this article within the working group “training” of the SwissRN (www.swissrn.org). We are grateful to Philip Bourne for many valuable comments on earlier versions of the manuscript.
S.S. received funding from SfwF (Stiftung für wissenschaftliche Forschung an der Universität Zürich; grant no. STWF-19-007). The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.