7 CFR § 3406.20 - Evaluation criteria for research proposals.

The maximum score a research proposal can receive is 150 points. Unless otherwise stated in the annual solicitation published in the Federal Register, the peer review panel will consider the following criteria and weights to evaluate proposals submitted:

(a) Significance of the problem:
This criterion is used to assess the likelihood that the project will advance or have a substantial impact upon the body of knowledge constituting the natural and social sciences undergirding the agricultural, natural resources, and food systems.
(1) Impact—Is the problem or opportunity to be addressed by the proposed project clearly identified, outlined, and delineated? Are research questions or hypotheses precisely stated? Is the project likely to further advance food and agricultural research and knowledge? Does the project have potential for augmenting the food and agricultural scientific knowledge base? Does the project address a State, regional, national, or international problem(s)? Will the benefits to be derived from the project transcend the applicant institution or the grant period? 15 points.
(2) Continuation plans—Are there plans for continuation or expansion of the project beyond USDA support? Are there plans for continuing this line of research or research support activity with the use of institutional funds after the end of the grant? Are there indications of external, non-Federal support? Are there realistic plans for making the project self-supporting? What is the potential for royalty or patent income, technology transfer or university-business enterprises? What are the probabilities of the proposed activity or line of inquiry being pursued by researchers at other institutions? 10 points.
(3) Innovation—Are significant aspects of the project based on an innovative or a non-traditional approach? Does the project reflect creative thinking? To what degree does the venture reflect a unique approach that is new to the applicant institution or new to the entire field of study? 10 points.
(4) Products and results—Are the expected products and results of the project clearly outlined and likely to be of high quality? Will project results be of an unusual or unique nature? Will the project contribute to a better understanding of or an improvement in the quality, distribution, or effectiveness of the Nation's food and agricultural scientific and professional expertise base, such as increasing the participation of women and minorities? 15 points.
(b) Overall approach and cooperative linkages:
This criterion relates to the soundness of the proposed approach and the quality of the partnerships likely to evolve as a result of the project.
(1) Proposed approach—Do the objectives and plan of operation appear to be sound and appropriate relative to the proposed initiative(s) and the impact anticipated? Is the proposed sequence of work appropriate? Does the proposed approach reflect sound knowledge of current theory and practice and awareness of previous or ongoing related research? If the proposed project is a continuation of a current line of study or currently funded project, does the proposal include sufficient preliminary data from the previous research or research support activity? Does the proposed project flow logically from the findings of the previous stage of study? Are the procedures scientifically and managerially sound? Are potential pitfalls and limitations clearly identified? Are contingency plans delineated? Does the timetable appear to be readily achievable? 5 points.
(2) Evaluation—Are the evaluation plans adequate and reasonable? Do they allow for continuous or frequent feedback during the life of the project? Are the individuals involved in project evaluation skilled in evaluation strategies and procedures? Can they provide an objective evaluation? Do evaluation plans facilitate the measurement of project progress and outcomes? 5 points.
(3) Dissemination—Does the proposed project include clearly outlined and realistic mechanisms that will lead to widespread dissemination of project results, including national electronic communication systems, publications and presentations at professional society meetings? 5 points.
(4) Partnerships and collaborative efforts—Does the project have significant potential for advancing cooperative ventures between the applicant institution and a USDA agency? Does the project workplan include an effective role for the cooperating USDA agency(s)? Will the project encourage and facilitate better working relationships in the university science community, as well as between universities and the public or private sector? Does the project encourage appropriate multi-disciplinary collaboration? Will the project lead to long-term relationships or cooperative partnerships that are likely to enhance research quality or supplement available resources? 15 points.
(c) Institutional capacity building:
This criterion relates to the degree to which the project will strengthen the research capacity of the applicant institution. In the case of a joint project proposal, it relates to the degree to which the project will strengthen the research capacity of the applicant institution and that of any other institution assuming a major role in the conduct of the project.
(1) Institutional enhancement—Will the project help the institution to advance the expertise of current faculty in the natural or social sciences; provide a better research environment, state-of-the-art equipment, or supplies; enhance library collections related to the area of research; or enable the institution to provide efficacious organizational structures and reward systems to attract, hire and retain first-rate research faculty and students—particularly those from underrepresented groups? 15 points.
(2) Institutional commitment—Is there evidence to substantiate that the institution attributes a high-priority to the project, that the project is linked to the achievement of the institution's long-term goals, that it will help satisfy the institution's high-priority objectives, or that the project is supported by the institution's strategic plans? Will the project have reasonable access to needed resources such as scientific instrumentation, facilities, computer services, library and other research support resources? 15 points.
(d) Personnel resources: 10 points
This criterion relates to the number and qualifications of the key persons who will carry out the project. Are designated project personnel qualified to carry out a successful project? Are there sufficient numbers of personnel associated with the project to achieve the stated objectives and the anticipated outcomes? Will the project help develop the expertise of young scientists at the doctoral or post-doctorate level?
(e) Budget and cost-effectiveness:
This criterion relates to the extent to which the total budget adequately supports the project and is cost-effective.
(1) Budget—Is the budget request justifiable? Are costs reasonable and necessary? Will the total budget be adequate to carry out project activities? Are the source(s) and amount(s) of non-Federal matching support clearly identified and appropriately documented? For a joint project proposal, is the shared budget explained clearly and in sufficient detail? 10 points.
(2) Cost-effectiveness—Is the proposed project cost-effective? Does it demonstrate a creative use of limited resources, maximize research value per dollar of USDA support, achieve economies of scale, leverage additional funds or have the potential to do so, focus expertise and activity on a high-priority research initiative(s), or promote coalition building for current or future ventures? 5 points.
(f) Overall quality of proposal: 5 points
This criterion relates to the degree to which the proposal complies with the application guidelines and is of high quality. Is the proposal enhanced by its adherence to instructions (table of contents, organization, pagination, margin and font size, the 20-page limitation, appendices, etc.); accuracy of forms; clarity of budget narrative; well prepared vitae for all key personnel associated with the project; and presentation (are ideas effectively presented, clearly articulated, thoroughly explained, etc.)?
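
As a purely illustrative aid (not part of the regulation), the sketch below shows how a reviewer's score sheet could be tallied against the sub-criterion weights listed above. The criterion keys and the example scores are invented for demonstration.

```python
# Hypothetical helper for tallying one reviewer's score sheet against the
# sub-criterion weights listed above; not part of the regulation itself.

# Maximum points per sub-criterion, as listed in the section above.
WEIGHTS = {
    "impact": 15, "continuation_plans": 10, "innovation": 10, "products_results": 15,
    "proposed_approach": 5, "evaluation": 5, "dissemination": 5, "partnerships": 15,
    "institutional_enhancement": 15, "institutional_commitment": 15,
    "personnel_resources": 10,
    "budget": 10, "cost_effectiveness": 5,
    "overall_quality": 5,
}

def tally(scores: dict[str, int]) -> int:
    """Sum a reviewer's sub-scores, rejecting any score above its weight."""
    total = 0
    for criterion, weight in WEIGHTS.items():
        score = scores.get(criterion, 0)
        if not 0 <= score <= weight:
            raise ValueError(f"{criterion}: {score} exceeds the {weight}-point weight")
        total += score
    return total

if __name__ == "__main__":
    # Invented example scores for a few sub-criteria; unscored items count as 0.
    example = {"impact": 12, "innovation": 8, "partnerships": 13, "budget": 9}
    print(tally(example))  # 42
```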

Evaluating Research – Process, Examples and Methods


Definition:

Evaluating Research refers to the process of assessing the quality, credibility, and relevance of a research study or project. This involves examining the methods, data, and results of the research in order to determine its validity, reliability, and usefulness. Evaluating research can be done by both experts and non-experts in the field, and involves critical thinking, analysis, and interpretation of the research findings.

Research Evaluation Process

The process of evaluating research typically involves the following steps:

Identify the Research Question

The first step in evaluating research is to identify the research question or problem that the study is addressing. This will help you to determine whether the study is relevant to your needs.

Assess the Study Design

The study design refers to the methodology used to conduct the research. You should assess whether the study design is appropriate for the research question and whether it is likely to produce reliable and valid results.

Evaluate the Sample

The sample refers to the group of participants or subjects who are included in the study. You should evaluate whether the sample size is adequate and whether the participants are representative of the population under study.

Review the Data Collection Methods

You should review the data collection methods used in the study to ensure that they are valid and reliable. This includes assessing both the measures and the procedures used to collect the data.

Examine the Statistical Analysis

Statistical analysis refers to the methods used to analyze the data. You should examine whether the statistical analysis is appropriate for the research question and whether it is likely to produce valid and reliable results.

Assess the Conclusions

You should evaluate whether the data support the conclusions drawn from the study and whether they are relevant to the research question.

Consider the Limitations

Finally, you should consider the limitations of the study, including any potential biases or confounding factors that may have influenced the results.
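
As a concrete, purely illustrative way of working through these steps, a reviewer could record a short verdict for each one. The structure below is a minimal sketch with invented field names and notes, not a standard appraisal instrument.

```python
# A minimal, illustrative checklist for recording the evaluation of one study.
# The steps mirror the process described above; names and verdicts are invented.
from dataclasses import dataclass, field

@dataclass
class StudyEvaluation:
    citation: str
    notes: dict[str, str] = field(default_factory=dict)  # step -> assessment

STEPS = [
    "research_question", "study_design", "sample", "data_collection",
    "statistical_analysis", "conclusions", "limitations",
]

def evaluate(citation: str, **assessments: str) -> StudyEvaluation:
    """Record an assessment for each step; unassessed steps are flagged."""
    ev = StudyEvaluation(citation)
    for step in STEPS:
        ev.notes[step] = assessments.get(step, "NOT YET ASSESSED")
    return ev

if __name__ == "__main__":
    ev = evaluate(
        "Doe et al. (2024), hypothetical citation",
        research_question="clear and relevant",
        sample="convenience sample; limited generalisability",
    )
    for step, note in ev.notes.items():
        print(f"{step:20s} {note}")
```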

Evaluating Research Methods

Common methods for evaluating research are as follows:

  • Peer review: Peer review is a process where experts in the field review a study before it is published. This helps ensure that the study is accurate, valid, and relevant to the field.
  • Critical appraisal: Critical appraisal involves systematically evaluating a study based on specific criteria. This helps assess the quality of the study and the reliability of the findings.
  • Replication: Replication involves repeating a study to test the validity and reliability of the findings. This can help identify any errors or biases in the original study.
  • Meta-analysis: Meta-analysis is a statistical method that combines the results of multiple studies to provide a more comprehensive understanding of a particular topic. This can help identify patterns or inconsistencies across studies (see the sketch after this list).
  • Consultation with experts: Consulting with experts in the field can provide valuable insights into the quality and relevance of a study. Experts can also help identify potential limitations or biases in the study.
  • Review of funding sources: Examining the funding sources of a study can help identify any potential conflicts of interest or biases that may have influenced the study design or interpretation of results.
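
To illustrate the meta-analysis method mentioned above, the sketch below computes a fixed-effect, inverse-variance pooled estimate. The study effect sizes and standard errors are invented for demonstration.

```python
# Minimal fixed-effect meta-analysis: inverse-variance weighting of study effects.
# The effect sizes and standard errors below are invented for illustration.

studies = [
    # (effect size, standard error) per study
    (0.30, 0.10),
    (0.15, 0.08),
    (0.45, 0.20),
]

weights = [1 / se**2 for _, se in studies]          # weight = 1 / variance
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled effect = {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI half-width)")
```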

Example of Evaluating Research

Below is a sample research evaluation for students:

Title of the Study: The Effects of Social Media Use on Mental Health among College Students

Sample Size: 500 college students

Sampling Technique: Convenience sampling

  • Sample Size: The sample size of 500 college students is a moderate sample size, which could be considered representative of the college student population. However, it would be more representative if the sample size were larger or if a random sampling technique were used.
  • Sampling Technique: Convenience sampling is a non-probability sampling technique, which means that the sample may not be representative of the population. This technique may introduce bias into the study since the participants are self-selected and may not be representative of the entire college student population. Therefore, the results of this study may not be generalizable to other populations.
  • Participant Characteristics: The study does not provide any information about the demographic characteristics of the participants, such as age, gender, race, or socioeconomic status. This information is important because social media use and mental health may vary among different demographic groups.
  • Data Collection Method: The study used a self-administered survey to collect data. Self-administered surveys may be subject to response bias and may not accurately reflect participants’ actual behaviors and experiences.
  • Data Analysis: The study used descriptive statistics and regression analysis to analyze the data. Descriptive statistics provide a summary of the data, while regression analysis is used to examine the relationship between two or more variables. However, the study did not provide information about the statistical significance of the results or the effect sizes.

Overall, while the study provides some insights into the relationship between social media use and mental health among college students, the use of a convenience sampling technique and the lack of information about participant characteristics limit the generalizability of the findings. In addition, the use of self-administered surveys may introduce bias into the study, and the lack of information about the statistical significance of the results limits the interpretation of the findings.
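
For illustration only, the sketch below shows the kind of reporting the evaluation says is missing: an effect size together with a significance test. The data are simulated, and the assumed relationship between social media use and mental health is invented, not taken from the study.

```python
# Sketch of the reporting described as missing above: an effect size
# (here Pearson's r) plus a significance test, computed on simulated data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 500                                          # matches the hypothetical sample size
social_media_hours = rng.normal(3.0, 1.5, n).clip(0)
# Simulated mental-health score that worsens slightly with use (invented relationship).
mental_health_score = 70 - 2.0 * social_media_hours + rng.normal(0, 10, n)

r, p = pearsonr(social_media_hours, mental_health_score)
print(f"r = {r:.2f} (effect size), p = {p:.3g}, n = {n}")
```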

Note: The example above is only a sample for students. Do not copy it directly into your assignment; conduct your own research for academic purposes.

Applications of Evaluating Research

Here are some of the applications of evaluating research:

  • Identifying reliable sources: By evaluating research, researchers, students, and other professionals can identify the most reliable sources of information to use in their work. They can determine the quality of research studies, including the methodology, sample size, data analysis, and conclusions.
  • Validating findings: Evaluating research can help to validate findings from previous studies. By examining the methodology and results of a study, researchers can determine if the findings are reliable and if they can be used to inform future research.
  • Identifying knowledge gaps: Evaluating research can also help to identify gaps in current knowledge. By examining the existing literature on a topic, researchers can determine areas where more research is needed, and they can design studies to address these gaps.
  • Improving research quality: Evaluating research can help to improve the quality of future research. By examining the strengths and weaknesses of previous studies, researchers can design better studies and avoid common pitfalls.
  • Informing policy and decision-making: Evaluating research is crucial in informing policy and decision-making in many fields. By examining the evidence base for a particular issue, policymakers can make informed decisions that are supported by the best available evidence.
  • Enhancing education: Evaluating research is essential in enhancing education. Educators can use research findings to improve teaching methods, curriculum development, and student outcomes.

Purpose of Evaluating Research

Here are some of the key purposes of evaluating research:

  • Determine the reliability and validity of research findings: By evaluating research, researchers can determine the quality of the study design, data collection, and analysis. They can determine whether the findings are reliable, valid, and generalizable to other populations.
  • Identify the strengths and weaknesses of research studies: Evaluating research helps to identify the strengths and weaknesses of research studies, including potential biases, confounding factors, and limitations. This information can help researchers to design better studies in the future.
  • Inform evidence-based decision-making: Evaluating research is crucial in informing evidence-based decision-making in many fields, including healthcare, education, and public policy. Policymakers, educators, and clinicians rely on research evidence to make informed decisions.
  • Identify research gaps: By evaluating research, researchers can identify gaps in the existing literature and design studies to address these gaps. This process can help to advance knowledge and improve the quality of research in a particular field.
  • Ensure research ethics and integrity: Evaluating research helps to ensure that research studies are conducted ethically and with integrity. Researchers must adhere to ethical guidelines to protect the welfare and rights of study participants and to maintain the trust of the public.

Characteristics of Evaluating Research

The characteristics to consider when evaluating research are as follows:

  • Research question/hypothesis: A good research question or hypothesis should be clear, concise, and well-defined. It should address a significant problem or issue in the field and be grounded in relevant theory or prior research.
  • Study design: The research design should be appropriate for answering the research question and be clearly described in the study. The study design should also minimize bias and confounding variables.
  • Sampling: The sample should be representative of the population of interest and the sampling method should be appropriate for the research question and study design.
  • Data collection: The data collection methods should be reliable and valid, and the data should be accurately recorded and analyzed.
  • Results: The results should be presented clearly and accurately, and the statistical analysis should be appropriate for the research question and study design.
  • Interpretation of results: The interpretation of the results should be based on the data and not influenced by personal biases or preconceptions.
  • Generalizability: The study findings should be generalizable to the population of interest and relevant to other settings or contexts.
  • Contribution to the field: The study should make a significant contribution to the field and advance our understanding of the research question or issue.

Advantages of Evaluating Research

Evaluating research has several advantages, including:

  • Ensuring accuracy and validity: By evaluating research, we can ensure that the research is accurate, valid, and reliable. This ensures that the findings are trustworthy and can be used to inform decision-making.
  • Identifying gaps in knowledge: Evaluating research can help identify gaps in knowledge and areas where further research is needed. This can guide future research and help build a stronger evidence base.
  • Promoting critical thinking: Evaluating research requires critical thinking skills, which can be applied in other areas of life. By evaluating research, individuals can develop their critical thinking skills and become more discerning consumers of information.
  • Improving the quality of research: Evaluating research can help improve the quality of research by identifying areas where improvements can be made. This can lead to more rigorous research methods and better-quality research.
  • Informing decision-making: By evaluating research, we can make informed decisions based on the evidence. This is particularly important in fields such as medicine and public health, where decisions can have significant consequences.
  • Advancing the field: Evaluating research can help advance the field by identifying new research questions and areas of inquiry. This can lead to the development of new theories and the refinement of existing ones.

Limitations of Evaluating Research

Limitations of Evaluating Research are as follows:

  • Time-consuming: Evaluating research can be time-consuming, particularly if the study is complex or requires specialized knowledge. This can be a barrier for individuals who are not experts in the field or who have limited time.
  • Subjectivity: Evaluating research can be subjective, as different individuals may have different interpretations of the same study. This can lead to inconsistencies in the evaluation process and make it difficult to compare studies.
  • Limited generalizability: The findings of a study may not be generalizable to other populations or contexts. This limits the usefulness of the study and may make it difficult to apply the findings to other settings.
  • Publication bias: Research that does not find significant results may be less likely to be published, which can create a bias in the published literature. This can limit the amount of information available for evaluation.
  • Lack of transparency: Some studies may not provide enough detail about their methods or results, making it difficult to evaluate their quality or validity.
  • Funding bias: Research funded by particular organizations or industries may be biased towards the interests of the funder. This can influence the study design, methods, and interpretation of results.



How to assess research proposals?

By Lilian Nassi-Calò


The peer review of research proposals (grants) aims to judge the merit of projects and researchers so that the best can be funded. The high number of candidates and proposals, however, has saturated reviewers, who find themselves immersed in ever-increasing numbers of projects without knowing the best way to assess them.

A post previously published on this blog discussed the possibility of making reviews of grant proposals openly available, as a way to help researchers devise better proposals while allowing public recognition of referees and helping to prevent fraud in the appraisal process. This alternative draws on the successful experience of journals that publish peer reviewers’ comments openly alongside the published article.

Recently, Ewan Birney, director of the European Molecular Biology Laboratory’s European Bioinformatics Institute at Hinxton, UK, asked his Twitter followers for practical suggestions on how to identify the best candidates from the hundreds of research grant submissions received by his institution. To his surprise, the scientific community responded enthusiastically with many suggestions, which in turn led to further comments on Twitter. The experience was reported in Nature 1 , which is also receiving comments on its page.

Birney 2 started the debate on Twitter by asking for a proxy for quality, other than the journal title, to assess the candidates’ competence; the candidates’ articles added up to about 2,500 in total. Yoav Gilad 3 , a geneticist at the University of Chicago, IL, US, advised him to read the 2,500 abstracts or the papers themselves, even if it meant including more referees in the assessment process. Birney said that he considered this correct, although not feasible. Birney thinks, like many, that a journal’s title or its Impact Factor (IF) does not necessarily reflect the individual quality of the papers. Moreover, his task is even more difficult because it includes assessing proposals that do not fall exactly within his area of expertise. “Of course, even if I was using journal as proxy here it wouldn’t help me – everyone here has published ‘well’”.

The discussion continued on Twitter with a suggestion from Stephen Curry 4 , a structural biologist at Imperial College London: ask candidates to identify their four most relevant publications and justify their choices in a one-page report. Richard Sever 5 , co-founder of bioRxiv, the Cold Spring Harbor Laboratory (CSHL) repository of biomedical articles, and assistant director of CSHL Press, considered it a good idea, pointing out, however, that this method could end up selecting candidates who are good at writing one-page summaries.

The biggest concern, according to Birney, in using citation-based metrics, as suggested by many researchers, lies in the fact that they vary considerably between disciplines and may not be comparable in a heterogeneous sample. Hugo Hilton 6 , an immunologist at Stanford University, CA, US, expressed his concern, as a candidate, that selection processes are subject to criteria that are not fully transparent and to classic biases such as the prestige of the journals where applicants publish. It is worth mentioning, at this point, the Declaration on Research Assessment (DORA) of 2012 7 , in which members of the American Society for Cell Biology pledged not to use the IF to evaluate researchers in grant proposals, career promotions, and hiring, precisely to avoid such distortions. To date, the Declaration has been signed by over 150 prominent scientists and 80 academic organizations.

Birney says that the referees should have a certain degree of autonomy to assess the proposals and there is no problem if all of them do not follow exactly the same procedures in their assessments. “I would prefer subjective but unbiased opinions, and five of them with different criteria than trying to unify the criteria so we all agree with the same answers.” However, he points out, transparency in the process is essential.

Despite being aware of the problems with using journal prestige as a proxy for quality, Birney believes that its use is unavoidable due to the large volume of proposals and candidates. He also advises candidates to highlight their achievements clearly in the proposal, rather than simply pointing to journal titles in their publication lists.

The Nature article received several comments suggesting ways to speed up the evaluation process and arrive at shortlists. Registered users can also submit their views on the topic 8 . Join the discussion!

1. CHAWLA, D.S. How to judge scientists’ strengths. Nature. 2015, vol. 527, p. 279. DOI: 10.1038/527279f

2. Ewan Birney: http://twitter.com/ewanbirney

3. Yoav Gilad: http://twitter.com/Y_Gilad

4. Stephen Curry: http://twitter.com/Stephen_Curry

5. Richard Sever: http://twitter.com/cshperspectives

6. Hugo Hilton: http://twitter.com/Hilton_HG

7. SCIENTIFIC ELECTRONIC LIBRARY ONLINE. Declaration recommends eliminate the use of Impact factor for research evaluation . SciELO in Perspective. [viewed 22 November 2015]. Available from: http://blog.scielo.org/en/2013/07/16/declaration-recommends-eliminate-the-use-of-impact-factor-for-research-evaluation/

8. < http://www.nature.com/foxtrot/svc/login?type=commenting >

CHAWLA, D.S. How to judge scientists’ strengths. Nature. 2015, vol. 527, p. 279. DOI: 10.1038/527279f

MALHOTRA, V. and MARDER, E. Peer review: The pleasure of publishing – originally published in the journal eLife in January/2015 . SciELO in Perspective. [viewed 21 November 2015]. Available from: http://blog.scielo.org/en/2015/05/11/peer-review-the-pleasure-of-publishing-originally-published-in-the-journal-elife-in-january2015/

SCIENTIFIC ELECTRONIC LIBRARY ONLINE. Could grant proposal reviews be made available openly?. SciELO in Perspective. [viewed 21 November 2015]. Available from: http://blog.scielo.org/en/2015/03/20/could-grant-proposal-reviews-be-made-available-openly/

SCIENTIFIC ELECTRONIC LIBRARY ONLINE. Declaration recommends eliminate the use of Impact factor for research evaluation . SciELO in Perspective. [viewed 22 November 2015]. Available from: http://blog.scielo.org/en/2013/07/16/declaration-recommends-eliminate-the-use-of-impact-factor-for-research-evaluation/

SCIENTIFIC ELECTRONIC LIBRARY ONLINE. Paper proposes four pillars for scholarly communication to favor the speed and the quality of science . SciELO in Perspective. [viewed 21 November 2015]. Available from: http://blog.scielo.org/en/2013/07/31/paper-proposes-four-pillars-for-scholarly-communication-to-favor-the-speed-and-the-quality-of-science/

SCIENTIFIC ELECTRONIC LIBRARY ONLINE. Peer-review as a research topic in its own right . SciELO in Perspective. [viewed 21 November 2015]. Available from: http://blog.scielo.org/en/2015/04/24/peer-review-as-a-research-topic-in-its-own-right/

SCIENTIFIC ELECTRONIC LIBRARY ONLINE. Scientometrics of peer-reviewers – will they be finally recognized? . SciELO in Perspective. [viewed 21 November 2015]. Available from: http://blog.scielo.org/en/2014/05/14/scientometrics-of-peer-reviewers-will-they-be-finally-recognized/

External links

bioRxiv – < http://biorxiv.org/ >

San Francisco Declaration on Research Assessment – < http://am.ascb.org/dora/ >

About Lilian Nassi-Calò

Lilian Nassi-Calò studied chemistry at Instituto de Química – USP, holds a doctorate in Biochemistry by the same institution and a post-doctorate as an Alexander von Humboldt fellow in Wuerzburg, Germany. After her studies, she was a professor and researcher at IQ-USP. She also worked as an industrial chemist and presently she is Coordinator of Scientific Communication at BIREME/PAHO/WHO and a collaborator of SciELO.

Translated from the original in Portuguese by Lilian Nassi-Calò.

Evaluation of research proposals by peer review panels: broader panels for broader assessments?



Rebecca Abma-Schouten, Joey Gijbels, Wendy Reijmerink, Ingeborg Meijer, Evaluation of research proposals by peer review panels: broader panels for broader assessments?, Science and Public Policy , Volume 50, Issue 4, August 2023, Pages 619–632, https://doi.org/10.1093/scipol/scad009


Panel peer review is widely used to decide which research proposals receive funding. Through this exploratory observational study at two large biomedical and health research funders in the Netherlands, we gain insight into how scientific quality and societal relevance are discussed in panel meetings. We explore, in ten review panel meetings of biomedical and health funding programmes, how panel composition and formal assessment criteria affect the arguments used. We observe that more scientific arguments are used than arguments related to societal relevance and expected impact. Also, more diverse panels result in a wider range of arguments, largely for the benefit of arguments related to societal relevance and impact. We discuss how funders can contribute to the quality of peer review by creating a shared conceptual framework that better defines research quality and societal relevance. We also contribute to a further understanding of the role of diverse peer review panels.

Scientific biomedical and health research is often supported by project or programme grants from public funding agencies such as governmental research funders and charities. Research funders primarily rely on peer review, often a combination of independent written review and discussion in a peer review panel, to inform their funding decisions. Peer review panels have the difficult task of integrating and balancing the various assessment criteria to select and rank the eligible proposals. With the increasing emphasis on societal benefit and being responsive to societal needs, the assessment of research proposals ought to include broader assessment criteria, including both scientific quality and societal relevance, and a broader perspective on relevant peers. This results in new practices of including non-scientific peers in review panels ( Del Carmen Calatrava Moreno et al. 2019 ; Den Oudendammer et al. 2019 ; Van den Brink et al. 2016 ). Relevant peers, in the context of biomedical and health research, include, for example, health-care professionals, (healthcare) policymakers, and patients as the (end-)users of research.

Currently, in scientific and grey literature, much attention is paid to what legitimate criteria are and to deficiencies in the peer review process, for example, focusing on the role of chance and the difficulty of assessing interdisciplinary or ‘blue sky’ research ( Langfeldt 2006 ; Roumbanis 2021a ). Our research primarily builds upon the work of Lamont (2009) , Huutoniemi (2012) , and Kolarz et al. (2016) . Their work articulates how the discourse in peer review panels can be understood by giving insight into disciplinary assessment cultures and social dynamics, as well as how panel members define and value concepts such as scientific excellence, interdisciplinarity, and societal impact. At the same time, there is little empirical work on what actually is discussed in peer review meetings and to what extent this is related to the specific objectives of the research funding programme. Such observational work is especially lacking in the biomedical and health domain.

The aim of our exploratory study is to learn what arguments panel members use in a review meeting when assessing research proposals in biomedical and health research programmes. We explore how arguments used in peer review panels are affected by (1) the formal assessment criteria and (2) the inclusion of non-scientific peers in review panels, also called (end-)users of research, societal stakeholders, or societal actors. We add to the existing literature by focusing on the actual arguments used in peer review assessment in practice.

To this end, we observed ten panel meetings in a variety of eight biomedical and health research programmes at two large research funders in the Netherlands: the governmental research funder The Netherlands Organisation for Health Research and Development (ZonMw) and the charitable research funder the Dutch Heart Foundation (DHF). Our first research question focuses on what arguments panel members use when assessing research proposals in a review meeting. The second examines to what extent these arguments correspond with the formal criteria on scientific quality and societal impact creation, as described in the programme brochure and assessment form. The third question focuses on how the arguments used differ between panel members with different perspectives.

2.1 Relation between science and society

To understand the dual focus of scientific quality and societal relevance in research funding, a theoretical understanding and a practical operationalisation of the relation between science and society are needed. The conceptualisation of this relationship affects both who are perceived as relevant peers in the review process and the criteria by which research proposals are assessed.

The relationship between science and society is not constant over time nor static, yet a relation that is much debated. Scientific knowledge can have a huge impact on societies, either intended or unintended. Vice versa, the social environment and structure in which science takes place influence the rate of development, the topics of interest, and the content of science. However, the second part of this inter-relatedness between science and society generally receives less attention ( Merton 1968 ; Weingart 1999 ).

From a historical perspective, scientific and technological progress contributed to the view that science was valuable on its own account and that science and the scientist stood independent of society. While this protected science from unwarranted political influence, societal disengagement with science resulted in less authority by science and debate about its contribution to society. This interdependence and mutual influence contributed to a modern view of science in which knowledge development is valued both on its own merit and for its impact on, and interaction with, society. As such, societal factors and problems are important drivers for scientific research. This warrants that the relation and boundaries between science, society, and politics need to be organised and constantly reinforced and reiterated ( Merton 1968 ; Shapin 2008 ; Weingart 1999 ).

Glerup and Horst (2014) conceptualise the value of science to society and the role of society in science in four rationalities that reflect different justifications for their relation and thus also for who is responsible for (assessing) the societal value of science. The rationalities are arranged along two axes: one is related to the internal or external regulation of science and the other is related to either the process or the outcome of science as the object of steering. The first two rationalities of Reflexivity and Demarcation focus on internal regulation in the scientific community. Reflexivity focuses on the outcome. Central is that science, and thus, scientists should learn from societal problems and provide solutions. Demarcation focuses on the process: science should continuously question its own motives and methods. The latter two rationalities of Contribution and Integration focus on external regulation. The core of the outcome-oriented Contribution rationality is that scientists do not necessarily see themselves as ‘working for the public good’. Science should thus be regulated by society to ensure that outcomes are useful. The central idea of the process-oriented Integration rationality is that societal actors should be involved in science in order to influence the direction of research.

Research funders can be seen as external or societal regulators of science. They can focus on organising the process of science, Integration, or on scientific outcomes that function as solutions for societal challenges, Contribution. In the Contribution perspective, a funder could enhance outside (societal) involvement in science to ensure that scientists take responsibility to deliver results that are needed and used by society. From Integration follows that actors from science and society need to work together in order to produce the best results. In this perspective, there is a lack of integration between science and society and more collaboration and dialogue are needed to develop a new kind of integrative responsibility ( Glerup and Horst 2014 ). This argues for the inclusion of other types of evaluators in research assessment. In reality, these rationalities are not mutually exclusive and also not strictly separated. As a consequence, multiple rationalities can be recognised in the reasoning of scientists and in the policies of research funders today.

2.2 Criteria for research quality and societal relevance

The rationalities of Glerup and Horst have consequences for which language is used to discuss societal relevance and impact in research proposals. Even though the main ingredients are quite similar, as a consequence of the coexisting rationalities in science, societal aspects can be defined and operationalised in different ways ( Alla et al. 2017 ). In the definition of societal impact by Reed, emphasis is placed on the outcome: the contribution to society. It includes the significance for society, the size of potential impact, and the reach, the number of people or organisations benefiting from the expected outcomes ( Reed et al. 2021 ). Other models and definitions focus more on the process of science and its interaction with society. Spaapen and Van Drooge introduced productive interactions in the assessment of societal impact, highlighting a direct contact between researchers and other actors. A key idea is that the interaction in different domains leads to impact in different domains ( Meijer 2012 ; Spaapen and Van Drooge 2011 ). Definitions that focus on the process often refer to societal impact as (1) something that can take place in distinguishable societal domains, (2) something that needs to be actively pursued, and (3) something that requires interactions with societal stakeholders (or users of research) ( Hughes and Kitson 2012 ; Spaapen and Van Drooge 2011 ).

Glerup and Horst show that process and outcome-oriented aspects can be combined in the operationalisation of criteria for assessing research proposals on societal aspects. Also, the funders participating in this study include the outcome—the value created in different domains—and the process—productive interactions with stakeholders—in their formal assessment criteria for societal relevance and impact. Different labels are used for these criteria, such as societal relevance, societal quality, and societal impact ( Abma-Schouten 2017 ; Reijmerink and Oortwijn 2017 ). In this paper, we use societal relevance or societal relevance and impact.

Scientific quality in research assessment frequently refers to all aspects and activities in the study that contribute to the validity and reliability of the research results and that contribute to the integrity and quality of the research process itself. The criteria commonly include the relevance of the proposal for the funding programme, the scientific relevance, originality, innovativeness, methodology, and feasibility ( Abdoul et al. 2012 ). Several studies demonstrated that quality is seen as not only a rich concept but also a complex concept in which excellence and innovativeness, methodological aspects, engagement of stakeholders, multidisciplinary collaboration, and societal relevance all play a role ( Geurts 2016 ; Roumbanis 2019 ; Scholten et al. 2018 ). Another study showed a comprehensive definition of ‘good’ science, which includes creativity, reproducibility, perseverance, intellectual courage, and personal integrity. It demonstrated that ‘good’ science involves not only scientific excellence but also personal values and ethics, and engagement with society ( Van den Brink et al. 2016 ). Noticeable in these studies is the connection made between societal relevance and scientific quality.

In summary, the criteria for scientific quality and societal relevance are conceptualised in different ways, and perspectives on the role of societal value creation and the involvement of societal actors vary strongly. Research funders hence have to pay attention to the meaning of the criteria for the panel members they recruit to help them, and navigate and negotiate how the criteria are applied in assessing research proposals. To be able to do so, more insight is needed in which elements of scientific quality and societal relevance are discussed in practice by peer review panels.

2.3 Role of funders and societal actors in peer review

National governments and charities are important funders of biomedical and health research. How this funding is distributed varies per country. Project funding is frequently allocated based on research programming by specialised public funding organisations, such as the Dutch Research Council in the Netherlands and ZonMw for health research. The DHF, the second largest private non-profit research funder in the Netherlands, provides project funding ( Private Non-Profit Financiering 2020 ). Funders, as so-called boundary organisations, can act as key intermediaries between government, science, and society ( Jasanoff 2011 ). Their responsibility is to develop effective research policies connecting societal demands and scientific ‘supply’. This includes setting up and executing fair and balanced assessment procedures ( Sarewitz and Pielke 2007 ). Herein, the role of societal stakeholders is receiving increasing attention ( Benedictus et al. 2016 ; De Rijcke et al. 2016 ; Dijstelbloem et al. 2013 ; Scholten et al. 2018 ).

All charitable health research funders in the Netherlands have, in the last decade, included patients at different stages of the funding process, including in assessing research proposals ( Den Oudendammer et al. 2019 ). To facilitate research funders in involving patients in assessing research proposals, the federation of Dutch patient organisations set up an independent reviewer panel with (at-risk) patients and direct caregivers ( Patiëntenfederatie Nederland, n.d. ). Other foundations have set up societal advisory panels including a wider range of societal actors than patients alone. The Committee Societal Quality (CSQ) of the DHF includes, for example, (at-risk) patients and a wide range of cardiovascular health-care professionals who are not active as academic researchers. This model is also applied by the Diabetes Foundation and the Princess Beatrix Muscle Foundation in the Netherlands ( Diabetesfonds, n.d. ; Prinses Beatrix Spierfonds, n.d. ).

In 2014, the Lancet presented a series of five papers about biomedical and health research known as the ‘increasing value, reducing waste’ series ( Macleod et al. 2014 ). The authors addressed several issues as well as potential solutions that funders can implement. They highlight, among others, the importance of improving the societal relevance of the research questions and including the burden of disease in research assessment in order to increase the value of biomedical and health science for society. A better understanding of and an increasing role of users of research are also part of the described solutions ( Chalmers et al. 2014 ; Van den Brink et al. 2016 ). This is also in line with the recommendations of the 2013 Declaration on Research Assessment (DORA) ( DORA 2013 ). These recommendations influence the way in which research funders operationalise their criteria in research assessment, how they balance the judgement of scientific and societal aspects, and how they involve societal stakeholders in peer review.

2.4 Panel peer review of research proposals

To assess research proposals, funders rely on the services of peer experts to review the thousands or perhaps millions of research proposals seeking funding each year. While often associated with scholarly publishing, peer review also includes the ex ante assessment of research grant and fellowship applications ( Abdoul et al. 2012 ). Peer review of proposals often includes a written assessment of a proposal by an anonymous peer and a peer review panel meeting to select the proposals eligible for funding. Peer review is an established component of professional academic practice, is deeply embedded in the research culture, and essentially consists of experts in a given domain appraising the professional performance, creativity, and/or quality of scientific work produced by others in their field of competence ( Demicheli and Di Pietrantonj 2007 ). The history of peer review as the default approach for scientific evaluation and accountability is, however, relatively young. While the term was unheard of in the 1960s, by 1970, it had become the standard. Since that time, peer review has become increasingly diverse and formalised, resulting in more public accountability ( Reinhart and Schendzielorz 2021 ).

While many studies have been conducted concerning peer review in scholarly publishing, peer review in grant allocation processes has been less discussed ( Demicheli and Di Pietrantonj 2007 ). The most extensive work on this topic has been conducted by Lamont (2009) . Lamont studied peer review panels in five American research funding organisations, including observing three panels. Other examples include Roumbanis’s ethnographic observations of ten review panels at the Swedish Research Council in natural and engineering sciences ( Roumbanis 2017 , 2021a ). Also, Huutoniemi was able to study, but not observe, four panels on environmental studies and social sciences of the Academy of Finland ( Huutoniemi 2012 ). Additionally, Van Arensbergen and Van den Besselaar (2012) analysed peer review through interviews and by analysing the scores and outcomes at different stages of the peer review process in a talent funding programme. Particularly interesting is the study by Luo and colleagues of 164 written panel review reports, which showed that the reviews from panels that included non-scientific peers described broader and more concrete impact topics. Mixed panels also more often connected research processes and characteristics of applicants with impact creation ( Luo et al. 2021 ).

While these studies primarily focused on peer review panels in other disciplinary domains or are based on interviews or reports instead of direct observations, we believe that many of the findings are relevant to the functioning of panels in the context of biomedical and health research. From this literature, we learn to have realistic expectations of peer review. It is inherently difficult to predict in advance which research projects will provide the most important findings or breakthroughs ( Lee et al. 2013 ; Pier et al. 2018 ; Roumbanis 2021a , 2021b ). At the same time, these limitations may not substantiate the replacement of peer review by another assessment approach ( Wessely 1998 ). Many topics addressed in the literature are inter-related and relevant to our study, such as disciplinary differences and interdisciplinarity, social dynamics and their consequences for consistency and bias, and suggestions to improve panel peer review ( Lamont and Huutoniemi 2011 ; Lee et al. 2013 ; Pier et al. 2018 ; Roumbanis 2021a , b ; Wessely 1998 ).

Different scientific disciplines show different preferences and beliefs about how to build knowledge and thus have different perceptions of excellence. However, panellists are willing to respect and acknowledge other standards of excellence ( Lamont 2009 ). Evaluation cultures also differ between scientific fields. Science, technology, engineering, and mathematics panels might, in comparison with panellists from social sciences and humanities, be more concerned with the consistency of the assessment across panels and therefore with clear definitions and uses of assessment criteria ( Lamont and Huutoniemi 2011 ). However, much is still to learn about how panellists’ cognitive affiliations with particular disciplines unfold in the evaluation process. Therefore, the assessment of interdisciplinary research is much more complex than just improving the criteria or procedure because less explicit repertoires would also need to change ( Huutoniemi 2012 ).

Social dynamics play a role as panellists may differ in their motivation to engage in allocation processes, which could create bias ( Lee et al. 2013 ). Placing emphasis on meeting established standards or thoroughness in peer review may promote uncontroversial and safe projects, especially in a situation where strong competition puts pressure on experts to reach a consensus ( Langfeldt 2001 ,2006 ). Personal interest and cognitive similarity may also contribute to conservative bias, which could negatively affect controversial or frontier science ( Luukkonen 2012 ; Roumbanis 2021a ; Travis and Collins 1991 ). Central in this part of literature is that panel conclusions are the outcome of and are influenced by the group interaction ( Van Arensbergen et al. 2014a ). Differences in, for example, the status and expertise of the panel members can play an important role in group dynamics. Insights from social psychology on group dynamics can help in understanding and avoiding bias in peer review panels ( Olbrecht and Bornmann 2010 ). For example, group performance research shows that more diverse groups with complementary skills make better group decisions than homogenous groups. Yet, heterogeneity can also increase conflict within the group ( Forsyth 1999 ). Therefore, it is important to pay attention to power dynamics and maintain team spirit and good communication ( Van Arensbergen et al. 2014a ), especially in meetings that include both scientific and non-scientific peers.

The literature also provides funders with starting points to improve the peer review process. For example, the explicitness of review procedures positively influences the decision-making processes ( Langfeldt 2001 ). Strategic voting and decision-making appear to be less frequent in panels that rate than in panels that rank proposals. Also, an advisory instead of a decisional role may improve the quality of the panel assessment ( Lamont and Huutoniemi 2011 ).

Despite different disciplinary evaluative cultures, formal procedures, and criteria, panel members with different backgrounds develop shared customary rules of deliberation that facilitate agreement and help avoid situations of conflict ( Huutoniemi 2012 ; Lamont 2009 ). This is a necessary prerequisite for opening up peer review panels to include non-academic experts. When doing so, it is important to realise that panel review is a social, emotional, and interactional process. It is therefore important to also take these non-cognitive aspects into account when studying cognitive aspects ( Lamont and Guetzkow 2016 ), as we do in this study.

In summary, what we learn from the literature is that (1) the specific criteria to operationalise scientific quality and societal relevance of research are important, (2) the rationalities from Glerup and Horst predict that not everyone values societal aspects and involve non-scientists in peer review to the same extent and in the same way, (3) this may affect the way peer review panels discuss these aspects, and (4) peer review is a challenging group process that could accommodate other rationalities in order to prevent bias towards specific scientific criteria. To disentangle these aspects, we have carried out an observational study of a diverse range of peer review panel sessions using a fixed set of criteria focusing on scientific quality and societal relevance.

3.1 Research assessment at ZonMw and the DHF

The peer review approach and the criteria used by the DHF and ZonMw are largely comparable. Funding programmes at both organisations start with a brochure describing the purposes, goals, and conditions for research applications, as well as the assessment procedure and criteria. Both organisations apply a two-stage process. In the first phase, reviewers are asked to write a peer review. In the second phase, a panel assesses the application based on the written reviews and the applicants’ rebuttal. The panels advise the board on the proposals eligible for funding, including a ranking of these proposals.

There are also differences between the two organisations. At ZonMw, the criteria for societal relevance and quality are operationalised in the ZonMw Framework Fostering Responsible Research Practices ( Reijmerink and Oortwijn 2017 ). This contributes to a common operationalisation of both quality and societal relevance at the level of individual funding programmes. Important elements in the criteria for societal relevance are, for instance, stakeholder participation, (applying) holistic health concepts, and the added value of knowledge in practice, policy, and education. The framework was developed to optimise the funding process from the perspective of knowledge utilisation and includes concepts like productive interactions and Open Science. It is part of the ZonMw Impact Assessment Framework aimed at guiding the planning, monitoring, and evaluation of funding programmes ( Reijmerink et al. 2020 ). At ZonMw, interdisciplinary panels are set up specifically for each funding programme. These panels include academics from a wide range of disciplines and often non-academic peers, such as policymakers, health-care professionals, and patients.

At the DHF, the criteria for scientific quality and societal relevance, there called societal impact, find their origin in the strategy report of the advisory committee CardioVascular Research Netherlands ( Reneman et al. 2010 ). This report forms the basis of the DHF research policy, which focuses on scientific and societal impact by creating national collaborations in thematic, interdisciplinary research programmes (the so-called consortia) that connect preclinical and clinical expertise into one concerted effort. An International Scientific Advisory Committee (ISAC) was established to assess these thematic consortia. This panel consists of international scientists, primarily with expertise in the broad cardiovascular research field. The DHF criteria for societal impact were redeveloped in 2013 in collaboration with its CSQ. This panel assesses and advises on the societal aspects of proposed studies. The societal impact criteria include the relevance of the health-care problem, the expected contribution to a solution, attention to the next step in science and towards implementation in practice, and the involvement of and interaction with (end-)users of research (R.Y. Abma-Schouten and I.M. Meijer, unpublished data). Peer review panels for consortium funding are generally composed of members of the ISAC, members of the CSQ, and ad hoc panel members relevant to the specific programme. CSQ members often have a pre-meeting before the final panel meeting to prepare and empower the CSQ representatives participating in the peer review panel.

3.2 Selection of funding programmes

To compare and evaluate observations between the two organisations, we selected funding programmes that were relatively comparable in scope and aims. The selection criteria were that (1) the programme had a translational and/or clinical objective and (2) the selection procedure included review panels responsible for the (final) relevance and quality assessment of grant applications. In total, we selected eight programmes: four at each organisation. At the DHF, two programmes were chosen in which the CSQ did not participate, to better disentangle the role of panel composition. For each programme, we observed the selection process, varying from one session on one day (taking 2–8 h) to multiple sessions over several days. Ten sessions were observed in total, of which eight were final peer review panel meetings and two were CSQ meetings preparing for the panel meeting.

After management approval for the study in both organisations, we asked programme managers and panel chairpersons of the programmes that were selected for their consent for observation; none refused participation. Panel members were, in a passive consent procedure, informed about the planned observation and anonymous analyses.

To ensure the independence of this evaluation, the selection of the grant programmes and of the peer review panels observed was at the discretion of the project team of this study. The observations and the supervision of the analyses were performed by the senior author, who is not affiliated with the funders.

3.3 Observation matrix

Given the lack of a common operationalisation of scientific quality and societal relevance, we decided to use an observation matrix with a fixed set of detailed aspects as a gold standard to score the brochures, the assessment forms, and the arguments used in panel meetings. The matrix used for the observations of the review panels was based upon and adapted from a ‘grant committee observation matrix’ developed by Van Arensbergen. The original matrix informed a literature review on the selection of talent through peer review and the social dynamics in grant review committees ( van Arensbergen et al. 2014b ). The matrix includes four categories of aspects: societal relevance, scientific quality, committee-related, and applicant-related aspects (see  Table 1 ; an illustrative sketch of this codebook follows the table). The aspects of scientific quality and societal relevance were adapted to fit the operationalisation of scientific quality and societal relevance of the organisations involved. The aspects concerning societal relevance were derived from the CSQ criteria, and the aspects concerning scientific quality were based on the scientific criteria of the first panel observed. The four argument types related to the panel were kept as they were. This committee-related category reflects statements that relate to the personal experience or preference of a panel member and can be seen as signals of bias. This category also includes statements that compare a project with another project without further substantiation. The three applicant-related arguments in the original observation matrix were extended with a fourth on social skills in communication with society. We added health technology assessment (HTA) because one programme specifically focused on this aspect. We tested our version of the observation matrix in pilot observations.

Table 1. Aspects included in the observation matrix and examples of arguments.

Short title of aspects in the observation matrix, with examples of arguments

Criterion: scientific quality
Fit in programme objectives: ‘This disease is underdiagnosed, and undertreated, and therefore fits the criteria of this call very well.’
‘Might have a relevant impact on patient care, but to what extent does it align with the aims of this programme.’
Match science and health-care problem: ‘It is not properly compared to the current situation (standard of care).’
‘Super relevant application with a fitting plan, perhaps a little too mechanistic.’
International competitiveness: ‘Something is done all over the world, but they do many more evaluations, however.’
Feasibility of the aims: ‘… because this is a discovery study the power calculation is difficult, but I would recommend to increase the sample size.’
‘It’s very risky, because this is an exploratory … study without hypotheses.’
‘The aim is to improve …, but there is no control to compare with.’
‘Well substantiated that they are able to achieve the objectives.’
Plan of work: ‘Will there be enough cases in this cohort?’
‘The budget is no longer correct.’
‘Plan is good, but … doubts about the approach, because too little information….’

Criterion: societal relevance
Health-care problem: ‘Relevant problem for a small group.’
‘… but is this a serious health condition?’
‘Prevalence is low, but patients do die, morbidity is very high.’
Contribution to solution: ‘What will this add since we already do…?’
‘It is unclear what the intervention will be after the diagnosis.’
‘Relevance is not good. Side effects are not known and neither is effectiveness.’
Next step in science: ‘What is needed to go from this retrospective study towards implementation?’
‘It’s not clear whether that work package is necessary or “nice to have”.’
‘Knowledge utilisation paragraph is standard, as used by copywriters.’
Activities towards partners: ‘What do the applicants do to change the current practice?’
‘Important that the company also contributes financially to the further development.’
‘This proposal includes a good communication plan.’
Participation/diversity: ‘A user committee is described, but it isn’t well thought through: what is their role?’
‘It’s also important to invite relatives of patients to participate.’
‘They thought really well what their patient group can contribute to the study plan.’

Applicant-related aspects
Scientific publication applicant: ‘One project leader only has one original paper, …, focus more on other diseases.’
‘Publication output not excellent. Conference papers and posters of local meetings, CV not so strong.’
Background applicant: ‘… not enough with this expertise involved in the leadership.’
‘Very good CV, … has won many awards.’
‘Candidate is excellent, top 10 to 20 in this field….’
Reputation applicant: ‘… the main applicant is a hotshot in this field.’
‘Candidate leads cohorts as …, gets a no.’
Societal skills: ‘Impressed that they took my question seriously, that made my day.’
‘They were very honest about overoptimism in the proposal.’
‘Good group, but they seem quite aware of their own brilliance.’

HTA
HTA: ‘Concrete revenues are negative, however improvement in quality-adjusted life years but very shaky.’

Committee-related aspects
Personal experience with the applicant: ‘This researcher only wants to acquire knowledge, nothing further.’
‘I reviewed him before and he is not very good at interviews.’
Personal/unasserted preference: ‘Excellent presentation, much better than the application.’ (Without further elaboration)
‘This academic lab has advantages, but also disadvantages with regard to independence.’
‘If it can be done anywhere, it is in this group.’
Relation with applicants’ institute/network: ‘May come up with new models, they’re linked with a group in … who can do this very well.’
Comparison with other applications: ‘What is the relevance compared to the other proposal? They do something similar.’
‘Look at the proposals as a whole, portfolio, we have clinical and we have fundamental.’
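For readers who wish to reuse this kind of matrix, the structure in Table 1 can be captured as a small codebook mapping each evaluation category to its aspects. The sketch below is purely illustrative: the category and aspect names follow Table 1, but the data structure and the helper function are our own assumptions and are not part of the tooling used in this study.

```python
# Illustrative codebook based on Table 1; the category and aspect names follow
# the observation matrix, while the code structure itself is hypothetical.
OBSERVATION_MATRIX = {
    "scientific quality": [
        "fit in programme objectives",
        "match science and health-care problem",
        "international competitiveness",
        "feasibility of the aims",
        "plan of work",
    ],
    "societal relevance": [
        "health-care problem",
        "contribution to solution",
        "next step in science",
        "activities towards partners",
        "participation/diversity",
    ],
    "applicant-related": [
        "scientific publication applicant",
        "background applicant",
        "reputation applicant",
        "societal skills",
    ],
    "HTA": ["HTA"],
    "committee-related": [
        "personal experience with the applicant",
        "personal/unasserted preference",
        "relation with applicants' institute/network",
        "comparison with other applications",
    ],
}

def category_of(aspect: str) -> str:
    """Return the evaluation category an aspect belongs to."""
    for category, aspects in OBSERVATION_MATRIX.items():
        if aspect in aspects:
            return category
    raise KeyError(f"Unknown aspect: {aspect}")
```

Such a lookup makes it straightforward to aggregate coded arguments per category, as in the analysis sketch in Section 3.7.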

3.4 Observations

Data were primarily collected through observations. Our observations of review panel meetings were non-participatory: the observer and goal of the observation were introduced at the start of the meeting, without further interactions during the meeting. To aid in the processing of observations, some meetings were audiotaped (sound only). Presentations or responses of applicants were not noted and were not part of the analysis. The observer made notes on the ongoing discussion and scored the arguments while listening. One meeting was not attended in person and only observed and scored by listening to the audiotape recording. Because this made identification of the panel members unreliable, this panel meeting was excluded from the analysis of the third research question on how arguments used differ between panel members with different perspectives.

3.5 Grant programmes and the assessment criteria

We gathered and analysed all brochures and assessment forms used by the review panels in order to answer our second research question on the correspondence of the arguments used with the formal criteria. Several programmes consisted of multiple grant calls: in that case, the specific call brochure was gathered and analysed, not the overall programme brochure. Additional documentation (e.g. instructional presentations at the start of the panel meeting) was not included in the document analysis. All included documents were marked using the aforementioned observation matrix. The panel-related arguments were not used because this category reflects the personal arguments of panel members, which are not part of brochures or instructions. To avoid potential differences in scoring methods, two of the authors each independently scored half of the documents; their scores were afterwards checked and validated by the other author. Differences were discussed until a consensus was reached. An illustration of this check is sketched below.
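As an illustration of how such a double-coding step can be supported, the sketch below flags the document-aspect pairs on which two coders disagree, so that these can be discussed until consensus is reached. It is a minimal sketch under our own assumptions; the data structures and names are hypothetical and do not describe the actual workflow of the study.

```python
# Hypothetical illustration of flagging discrepancies between two coders.
# Each coder's scores map (document, aspect) -> whether the aspect is present.
from typing import Dict, List, Tuple

Scores = Dict[Tuple[str, str], bool]

def discrepancies(coder_a: Scores, coder_b: Scores) -> List[Tuple[str, str]]:
    """Return the (document, aspect) pairs the two coders scored differently."""
    keys = set(coder_a) | set(coder_b)
    return sorted(k for k in keys if coder_a.get(k) != coder_b.get(k))

coder_a = {("brochure_1", "feasibility of the aims"): True,
           ("brochure_1", "participation/diversity"): False}
coder_b = {("brochure_1", "feasibility of the aims"): True,
           ("brochure_1", "participation/diversity"): True}

# Only the pair the coders disagree on is listed for the consensus discussion.
print(discrepancies(coder_a, coder_b))  # [('brochure_1', 'participation/diversity')]
```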

3.6 Panel composition

In order to answer the third research question, background information on the panel members was collected. We categorised the panel members into five common types: scientific, clinical scientific, health-care professional/clinical, patient, and policy. First, a list of all panel members was composed, including their scientific and professional backgrounds and affiliations. The categorisation was guided by the theoretical notion that reviewers represent different types of users of research and therefore different potential impact domains (academic, social, economic, and cultural) ( Meijer 2012 ; Spaapen and Van Drooge 2011 ). Because clinical researchers play a dual role, both advancing research as fellow academics and using research output in health-care practice, we divided the academic members into two categories: non-clinical and clinical researchers. Multiple types of professional actors participated in each review panel. These were divided into two groups for the analysis: health-care professionals (without current academic activity) and policymakers in the health-care sector. No representatives of the private sector participated in the observed review panels. From the public domain, (at-risk) patients and patient representatives were part of several review panels. Only publicly available information was used to classify the panel members. Members were assigned to one category only: categorisation took place based on the specific role and expertise for which they were appointed to the panel.

In two of the four DHF programmes, the assessment procedure included the CSQ. In these two programmes, representatives of this CSQ participated in the scientific panel to articulate the findings of the CSQ meeting during the final assessment meeting. Two grant programmes were assessed by a review panel with solely (clinical) scientific members.

3.7 Analysis

Data were processed using ATLAS.ti 8 and Microsoft Excel 2010 to produce descriptive statistics. All observed arguments were coded, and each was given a randomised identification code for the panel member using that particular argument. The number of times an argument type was observed was used as an indicator of the relative importance of that argument in the appraisal of proposals. With this approach, a practical and reproducible method was developed for research funders to evaluate the effect of policy changes on peer review. If codes or notes were unclear, post-observation validation of the codes was carried out based on the observation matrix notes. Arguments that were noted by the observer but could not be matched with an existing code were first given a ‘non-existing’ code; these were resolved by listening back to the audiotapes. Arguments that could not be assigned to a panel member were assigned a ‘missing panel member’ code. A total of 4.7 per cent of all codes were assigned a ‘missing panel member’ code.
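The core of this analysis is a descriptive tally of coded arguments. The minimal pandas sketch below illustrates such a tally; the column names and example rows are hypothetical, and the actual analysis was carried out in ATLAS.ti and Excel rather than in Python.

```python
# Minimal sketch of the descriptive tally described above, using hypothetical data.
# Each row is one observed argument: the panel member's anonymised ID and
# background, and the aspect/category of the observation matrix it was coded to.
import pandas as pd

arguments = pd.DataFrame([
    {"member_id": "P01", "background": "scientific", "category": "scientific quality",
     "aspect": "feasibility of the aims"},
    {"member_id": "P02", "background": "patient", "category": "societal relevance",
     "aspect": "participation/diversity"},
    {"member_id": "P01", "background": "scientific", "category": "scientific quality",
     "aspect": "plan of work"},
])

# Frequency of each aspect, used as a proxy for its relative importance.
aspect_counts = arguments["aspect"].value_counts()

# Mean number of arguments per member, split by panel member background.
per_member = arguments.groupby(["background", "member_id"]).size()
mean_per_background = per_member.groupby(level="background").mean()

print(aspect_counts)
print(mean_per_background)
```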

After the analyses, two meetings were held to reflect on the results: one with the CSQ and the other with the programme coordinators of both organisations. The goal of these meetings was to improve our interpretation of the findings, disseminate the results derived from this project, and identify topics for further analyses or future studies.

3.8 Limitations

Our study focuses on the final phase of the peer review process of research applications in a real-life setting. Our design, a non-participant observation of peer review panels, also introduced several challenges ( Liu and Maitlis 2010 ).

First, the independent review phase or pre-application phase was not part of our study. We therefore could not assess to what extent attention to certain aspects of scientific quality or societal relevance and impact in the review phase influenced the topics discussed during the meeting.

Second, the most important challenge of overt non-participant observation is the observer effect: the danger of causing reactivity in those under study. We believe that the consequences of this effect on our conclusions were limited because panellists are used to external observers in the meetings of these two funders. The observer briefly explained the goal of the study in general terms during the introductory round of the panel. The observer sat as unobtrusively as possible and avoided reacting to discussions. As in previous observations of panels, we experienced that the presence of an observer faded into the background during a meeting ( Roumbanis 2021a ). However, a limited observer effect can never be entirely excluded.

Third, our design to score only the arguments raised, and not the responses of the applicants or information on the content of the proposals, has both advantages and drawbacks. With this approach, we could assure the anonymity of the grant procedures reviewed, the applicants and proposals, the panels, and the individual panellists. This was an important condition for the funders involved. We took the frequency with which arguments were used as a proxy for the relative importance of those arguments in decision-making, which undeniably also has its caveats. Our data collection approach limits more in-depth reflection on which arguments were decisive in decision-making and on group dynamics during the interaction with the applicants, as non-verbal and non-content-related comments were not captured in this study.

Fourth, despite this being one of the largest observational studies on the peer review assessment of grant applications, with ten panels observed in eight grant programmes, many variables, both within and beyond our view, might explain differences in the arguments used. Examples of ‘confounding’ variables are the many variations in panel composition, the differences in the objectives of the programmes, and the range of the funding programmes. Our study should therefore be seen as exploratory, which warrants caution in drawing conclusions.

4.1 Overview of observational data

The grant programmes included in this study reflected a broad range of biomedical and health funding programmes, ranging from fellowship grants to translational research and applied health research. All formal documents available to the applicants and to the review panel were retrieved for both ZonMw and the DHF. In total, eighteen documents corresponding to the eight grant programmes were studied. The number of proposals assessed per programme varied from three to thirty-three. The duration of the panel meetings varied between 2 h and two consecutive days. Together, this resulted in a large spread in the number of total arguments used in an individual meeting and in a grant programme as a whole. In the shortest meeting, 49 arguments were observed versus 254 in the longest, with a mean of 126 arguments per meeting and on average 15 arguments per proposal.

Overall, we found consistency between how the criteria were operationalised in the grant programmes’ brochures and in the assessment forms of the review panels. At the same time, because the number of elements included in the observation matrix is limited, there was considerable diversity in the arguments that fell within each aspect (see examples in  Table 1 ). Some of these differences could possibly be explained by differences in the language used and the level of detail in the observation matrix, the brochure, and the panel’s instructions. This was especially the case for the applicant-related aspects, for which the observation matrix was more detailed than the text in the brochures and assessment forms.

In interpreting our findings, it is important to take into account that, even though our data were largely complete and the observation matrix matched well with the description of the criteria in the brochures and assessment forms, there was a large diversity in the type and number of arguments used and in the number of proposals assessed in the grant programmes included in our study.

4.2 Wide range of arguments used by panels: scientific arguments used most

For our first research question, we explored the number and type of arguments used in the panel meetings. Figure 1 provides an overview of the arguments used. Scientific quality was discussed most. The number of times the feasibility of the aims was discussed clearly stands out in comparison to all other arguments. Also, the match between the science and the problem studied and the plan of work were frequently discussed aspects of scientific quality. International competitiveness of the proposal was discussed the least of all five scientific arguments.

Figure 1. The number of arguments used in panel meetings.

Attention was paid to societal relevance and impact in the panel meetings of both organisations. Yet, the language used differed somewhat between organisations. The contribution to a solution and the next step in science were the most often used societal arguments. At ZonMw, the impact of the health-care problem studied and the activities towards partners were less frequently discussed than the other three societal arguments. At the DHF, the five societal arguments were used equally often.

With the exception of the fellowship programme meeting, applicant-related arguments were not often used. The fellowship panel used arguments related to the applicant and to scientific quality about equally often. Committee-related arguments were also rarely used in the majority of the eight grant programmes observed. In three out of the ten panel meetings, one or two arguments related to personal experience with the applicant or their direct network were observed. In seven out of ten meetings, statements were observed that were unasserted or explicitly announced as reflecting a personal preference. The frequency varied between one and seven statements (sixteen in total), which is low in comparison to the other arguments used (see  Fig. 1 for examples).

4.3 Use of arguments varied strongly per panel meeting

The balance between the use of scientific and societal arguments varied strongly per grant programme, panel, and organisation. At ZonMw, two meetings had an approximately equal balance of societal and scientific arguments. In the other two meetings, scientific arguments were used two to four times as often as societal arguments. At the DHF, three types of panels were observed, and a different pattern in the relative use of societal and scientific arguments was observed for each of these panel types. In the two CSQ-only meetings, societal arguments were used approximately twice as often as scientific arguments. In the two meetings of the scientific panels, societal arguments were infrequently used (between zero and four times per argument category). In the combined societal and scientific panel meetings, the use of societal and scientific arguments was more balanced.

4.4 Match of arguments used by panels with the assessment criteria

In order to answer our second research question, we looked into the relation between the arguments used and the formal criteria. We observed that a broader range of arguments was often used than the criteria described in the brochure and assessment instruction would suggest. However, arguments related to aspects that were consistently included in the brochure and instruction seemed to be discussed more frequently than in programmes where those aspects were not consistently included or were not included at all. Although the match of the science with the health-care problem and the background and reputation of the applicant were not always made explicit in the brochure or instructions, they were discussed in many panel meetings. Supplementary Fig. S1 provides a visualisation of how the arguments used differ between the programmes in which those aspects were, or were not, consistently included in the brochure and instruction forms.

4.5 Two-thirds of the assessment was driven by scientific panel members

To answer our third question, we looked into the differences in arguments used between panel members representing a scientific, clinical scientific, professional, policy, or patient perspective. In each research programme, the majority of panellists had a scientific background (n = 35); thirty-four members had a clinical scientific background, twenty had a health professional/clinical background, eight members represented a policy perspective, and fifteen represented a patient perspective. Of the total number of arguments (1,097), two-thirds were made by members with a scientific or clinical scientific perspective. Members with a scientific background engaged most actively in the discussion, with a mean of twelve arguments per member. Clinical scientists and health-care professionals participated with a mean of nine arguments each, and members with a policy or patient perspective put forward the fewest arguments on average, namely seven and eight, respectively. Figure 2 provides a complete overview of the total and mean number of arguments used by the different disciplines in the various panels.

Figure 2. The total and mean number of arguments displayed per subgroup of panel members.

4.6 Diverse use of arguments by panellists, but background matters

In meetings of both organisations, we observed a diverse use of arguments by the panel members. Yet, the use of arguments varied depending on the background of the panel member (see  Fig. 3 ). Those with a scientific and clinical scientific perspective used primarily scientific arguments. As could be expected, health-care professionals and patients used societal arguments more often.

Figure 3. The use of arguments differentiated by panel member background.

Further breakdown of the arguments across backgrounds showed clear differences in the use of scientific arguments between the different disciplines of panellists. Scientists and clinical scientists discussed the feasibility of the aims more than twice as often as their second most frequently used element of scientific quality, the match between the science and the problem studied. Patients and members with a policy or health professional background put forward fewer but more varied scientific arguments.

Patients and health-care professionals accounted for approximately half of the societal arguments used, despite being a much smaller part of the panels’ overall composition. In other words, members with a scientific perspective were less likely to use societal arguments. The relevance of the health-care problem studied, activities towards partners, and arguments related to participation and diversity were not used often by this group. Patients often used arguments related to patient participation and diversity and activities towards partners, although the frequency of use of the latter differed per organisation.

The majority of the applicant-related arguments were put forward by scientists, including clinical scientists. Committee-related arguments were very rare and are therefore not differentiated by panel member background, except for comments comparing a proposal with other applications; these arguments were mainly put forward by panel members with a scientific background. HTA-related arguments were often used by panel members with a scientific perspective, whereas panel members with other perspectives rarely used this argument (see Supplementary Figs S2–S4 for a visual presentation of the differences between panel members on all aspects included in the matrix).

5.1 Explanations for arguments used in panels

Our observations show that most arguments for scientific quality were used often. However, except for the feasibility of the aims, the frequency with which arguments were used varied strongly between the meetings and between the individual proposals that were discussed. The fact that most arguments were not used consistently is not surprising given the results of previous studies, which showed heterogeneity in grant application assessments and low consistency in the comments and scores of independent reviewers ( Abdoul et al. 2012 ; Pier et al. 2018 ). In an analysis of written assessments on nine observed dimensions, no dimension was used in more than 45 per cent of the reviews ( Hartmann and Neidhardt 1990 ).

There are several possible explanations for this heterogeneity. Roumbanis (2021a) described how being responsive to the different challenges in the proposals and to the points of attention arising from the written assessments influenced the discussion in panels. Also, when a disagreement arises, more time is spent on discussion ( Roumbanis 2021a ). One could infer that unambiguous, and thus not debated, aspects might remain largely undetected in our study. We believe, however, that the main points relevant to the assessment will not remain entirely unmentioned, because most panels in our study started the discussion with a short summary of the proposal, the written assessment, and the rebuttal. Lamont (2009), however, points out that opening statements serve more goals than merely decision-making. They can also increase the credibility of the panellist by showing their comprehension and balanced assessment of an application. We can therefore not entirely disentangle whether the arguments observed most frequently were also considered the most important or decisive, or whether they were simply the topics that led to the most disagreement.

An interesting difference with Roumbanis’ study was the available discussion time per proposal. In our study, most panels handled a limited number of proposals, allowing for longer discussions in comparison with the often 2-min time frame that Roumbanis (2021b) described, potentially contributing to a wider range of arguments being discussed. Limited time per proposal might also limit the number of panellists contributing to the discussion per proposal ( De Bont 2014 ).

5.2 Reducing heterogeneity by improving operationalisation and the consequent use of assessment criteria

We found that the language used for the operationalisation of the assessment criteria in programme brochures and in the observation matrix was much more detailed than in the instruction for the panel, which was often very concise. The exercise also illustrated that many terms were used interchangeably.

This was especially true for the applicant-related aspects. Several panels discussed how talent should be assessed. This confusion is understandable considering the changing values in research and its assessment ( Moher et al. 2018 ) and the fact that the instruction from the funders was very concise. For example, it was not made explicit whether the individual or the team should be assessed. Van Arensbergen et al. (2014b) described how, in grant allocation processes, talent is generally assessed using a limited set of characteristics. More objective and quantifiable outputs often prevailed at the expense of recognising and rewarding a broad variety of skills and traits combining professional, social, and individual capital ( DORA 2013 ).

In addition, committee-related arguments, like personal experiences with the applicant or their institute, were rarely used in our study. Comparisons between proposals were sometimes made without further argumentation, mainly by scientific panel members. This was especially pronounced in one (fellowship) grant programme with a high number of proposals. In this programme, the panel meeting concentrated on quickly comparing the quality of the applicants and of the proposals based on the reviewer’s judgement, instead of a more in-depth discussion of the different aspects of the proposals. Because the review phase was not part of this study, the question of which aspects have been used for the assessment of the proposals in this panel therefore remains partially unanswered. However, weighing and comparing proposals on different aspects and with different inputs is a core element of scientific peer review, both in the review of papers and in the review of grants ( Hirschauer 2010 ). The large role of scientific panel members in comparing proposals is therefore not surprising.

One could anticipate that more consistent language in operationalising the criteria may lead to more clarity for both applicants and panellists and to more consistency in the assessment of research proposals. The trend in our observations was that arguments were used less when the related criteria were not, or not consistently, included in the brochure and panel instruction. It remains, however, challenging to disentangle the influence of the formal definitions of criteria on the arguments used. Previous studies also encountered difficulties in studying the role of the formal instruction in peer review but concluded that this role is relatively limited ( Langfeldt 2001 ; Reinhart 2010 ).

The lack of a clear operationalisation of criteria can contribute to heterogeneity in peer review, as many scholars have found that assessors differ in their conceptualisation of good science and in the importance they attach to various aspects of research quality and societal relevance ( Abdoul et al. 2012 ; Geurts 2016 ; Scholten et al. 2018 ; Van den Brink et al. 2016 ). The large variation and the absence of a gold standard in the interpretation of scientific quality and societal relevance affect the consistency of peer review. As a consequence, it is challenging to systematically evaluate and improve peer review in order to fund the research that contributes most to science and society. To contribute to responsible research and innovation, it is therefore important that funders invest in a more consistent and conscientious peer review process ( Curry et al. 2020 ; DORA 2013 ).

A common conceptualisation of scientific quality and societal relevance and impact could improve the alignment between views on good scientific conduct, programmes’ objectives, and the peer review in practice. Such a conceptualisation could contribute to more transparency and quality in the assessment of research. By involving panel members from all relevant backgrounds, including the research community, health-care professionals, and societal actors, in a better operationalisation of criteria, more inclusive views of good science can be implemented more systematically in the peer review assessment of research proposals. The ZonMw Framework Fostering Responsible Research Practices is an example of an initiative aiming to support standardisation and integration ( Reijmerink et al. 2020 ).

Given the lack of a common definition or conceptualisation of scientific quality and societal relevance, an important choice in our study was to use a fixed set of detailed aspects of these two important criteria as a gold standard to score the brochures, the panel instructions, and the arguments used by the panels. This approach proved helpful in disentangling the different components of scientific quality and societal relevance. Having said that, it is important not to oversimplify the causes of heterogeneity in peer review, because these substantive arguments are not independent of non-cognitive, emotional, or social aspects ( Lamont and Guetzkow 2016 ; Reinhart 2010 ).

5.3 Do more diverse panels contribute to a broader use of arguments?

Both funders participating in our study have an outspoken public mission that requires sufficient attention to societal aspects in assessment processes. In reality, as observed in several panels, the main focus of peer review meetings is on scientific arguments. In addition to the possible explanations discussed above, the composition of the panel might play a role in explaining the arguments used in panel meetings. Our results show that health-care professionals and patients bring in more societal arguments than scientists, including those who are also clinicians. It is, however, not that simple: in the more diverse panels, panel members, regardless of their backgrounds, used more societal arguments than in the less diverse panels.

Observing ten panel meetings was sufficient to explore differences in the arguments used by panel members with different backgrounds. The pattern of (primarily) scientific arguments being raised by panels with mainly scientific members is not surprising. After all, their main task is to assess the scientific content of grant proposals, and this fits their competencies. One could therefore argue, depending on how one justifies the relationship between science and society, that health-care professionals and patients might be better suited to assess the value for potential users of research results. Scientific panel members and clinical scientists in our study used fewer arguments that reflect on opening up and connecting science directly to others who can take it further (be it industry, health-care professionals, or other stakeholders). Patients filled this gap, as these two types of arguments were the most prevalent types they put forward. Apparently, making an active connection with society requires a broader, more diverse panel before scientists direct their attention to more societal arguments. Evident from our observations is that the presence of patients and health-care professionals in a panel seemed to increase the attention placed on arguments beyond the scientific ones by all panel members, including scientists. This conclusion is congruent with the observation that there was a more equal balance in the use of societal and scientific arguments in the scientific panels in which the CSQ participated. This illustrates that opening up peer review panels to non-scientific members creates an opportunity to focus on both the contribution and the integrative rationality ( Glerup and Horst 2014 ) or, in other words, to allow productive interactions between scientific and non-scientific actors. This corresponds with previous research suggesting that, with regard to societal aspects, reviews from mixed panels were broader and richer ( Luo et al. 2021 ). In panels with non-scientific experts, more emphasis was placed on the role of the proposed research process in increasing the likelihood of societal impact than on the causal importance of scientific excellence for broader impacts. This is in line with the finding that panels with more disciplinary diversity, in range and also by including generalist experts, applied more versatile styles to reach consensus and paid more attention to relevance and pragmatic value ( Huutoniemi 2012 ).

Our observations further illustrate that patients and health-care professionals were less vocal in panels than (clinical) scientists and were in the minority. This could reflect their social role and lower perceived authority in the panel. Several guides are available for funders to stimulate the equal participation of patients in science, and these guides are also applicable to their involvement in peer review panels. Measures to be taken include support and training to help prepare patients for their participation in deliberations with renowned scientists, and explicitly addressing power differences ( De Wit et al. 2016 ). Panel chairs and programme officers have to set and supervise the conditions for the functioning of both the individual panel members and the panel as a whole ( Lamont 2009 ).

5.4 Suggestions for future studies

In future studies, it is important to further disentangle the role of the operationalisation and appraisal of assessment criteria in reducing heterogeneity in the arguments used by panels. More controlled experimental settings would be a valuable addition to the mainly observational methodologies currently applied to disentangle some of the cognitive and social factors that influence the functioning and argumentation of peer review panels. Reusing data from the panel observations and the data on the written reports could also provide a starting point for a bottom-up approach to create a more consistent and shared conceptualisation and operationalisation of assessment criteria.

To further understand the effects of opening up review panels to non-scientific peers, it is valuable to compare the role of diversity and interdisciplinarity in solely scientific panels versus panels that also include non-scientific experts.

In future studies, differences between domains and types of research should also be addressed. We hypothesise that biomedical and health research is perhaps better suited to the inclusion of non-scientific peers in panels than other research domains. For example, it is valuable to better understand how potentially relevant users can be identified well enough in other research fields and to what extent non-academics can contribute to assessing the possible value of, especially early-stage or blue-sky, research.

The goal of our study was to explore in practice which arguments regarding the main criteria of scientific quality and societal relevance were used by peer review panels of biomedical and health research funding programmes. We showed that there is a wide diversity in the number and range of arguments used, but three main scientific aspects were discussed most frequently: is the approach feasible, does the science match the problem, and is the work plan scientifically sound? Nevertheless, these scientific aspects were accompanied by a significant amount of discussion of societal aspects, of which the contribution to a solution was the most prominent. In comparison with scientific panellists, non-scientific panellists, such as health-care professionals, policymakers, and patients, often used a wider range of arguments and more societal arguments. Even more striking was that, even though non-scientific peers were often outnumbered and less vocal in panels, scientists also used a wider range of arguments when non-scientific peers were present.

It is relevant that two health research funders collaborated in the current study to reflect on and improve peer review in research funding. There are few studies published that describe live observations of peer review panel meetings. Many studies focus on alternatives for peer review or reflect on the outcomes of the peer review process, instead of reflecting on the practice and improvement of peer review assessment of grant proposals. Privacy and confidentiality concerns of funders also contribute to the lack of information on the functioning of peer review panels. In this study, both organisations were willing to participate because of their interest in research funding policies in relation to enhancing the societal value and impact of science. The study provided them with practical suggestions, for example, on how to improve the alignment in language used in programme brochures and instructions of review panels, and contributed to valuable knowledge exchanges between organisations. We hope that this publication stimulates more research funders to evaluate their peer review approach in research funding and share their insights.

For a long time, research funders relied solely on scientists for designing and executing peer review of research proposals, thereby delegating responsibility for the process. Although review panels have a discretionary authority, it is important that funders set and supervise the process and the conditions. We argue that one of these conditions should be the diversification of peer review panels and opening up panels for non-scientific peers.

Supplementary material is available at Science and Public Policy online.

Details of the data and information on how to request access is available from the first author.

Joey Gijbels and Wendy Reijmerink are employed by ZonMw. Rebecca Abma-Schouten is employed by the Dutch Heart Foundation and as external PhD candidate affiliated with the Centre for Science and Technology Studies, Leiden University.

A special thanks to the panel chairs and programme officers of ZonMw and the DHF for their willingness to participate in this project. We thank Diny Stekelenburg, an internship student at ZonMw, for her contributions to the project. Our sincerest gratitude to Prof. Paul Wouters, Sarah Coombs, and Michiel van der Vaart for proofreading and their valuable feedback. Finally, we thank the editors and anonymous reviewers of Science and Public Policy for their thorough and insightful reviews and recommendations. Their contributions are recognisable in the final version of this paper.

Abdoul   H. , Perrey   C. , Amiel   P. , et al.  ( 2012 ) ‘ Peer Review of Grant Applications: Criteria Used and Qualitative Study of Reviewer Practices ’, PLoS One , 7 : 1 – 15 .


Abma-Schouten   R. Y. ( 2017 ) ‘ Maatschappelijke Kwaliteit van Onderzoeksvoorstellen ’, Dutch Heart Foundation .

Alla   K. , Hall   W. D. , Whiteford   H. A. , et al.  ( 2017 ) ‘ How Do We Define the Policy Impact of Public Health Research? A Systematic Review ’, Health Research Policy and Systems , 15 : 84.

Benedictus   R. , Miedema   F. , and Ferguson   M. W. J. ( 2016 ) ‘ Fewer Numbers, Better Science ’, Nature , 538 : 453 – 4 .

Chalmers   I. , Bracken   M. B. , Djulbegovic   B. , et al.  ( 2014 ) ‘ How to Increase Value and Reduce Waste When Research Priorities Are Set ’, The Lancet , 383 : 156 – 65 .

Curry   S. , De Rijcke   S. , Hatch   A. , et al.  ( 2020 ) ‘ The Changing Role of Funders in Responsible Research Assessment: Progress, Obstacles and the Way Ahead ’, RoRI Working Paper No. 3, London : Research on Research Institute (RoRI) .

De Bont   A. ( 2014 ) ‘ Beoordelen Bekeken. Reflecties op het Werk van Een Programmacommissie van ZonMw ’, ZonMw .

De Rijcke   S. , Wouters   P. F. , Rushforth   A. D. , et al.  ( 2016 ) ‘ Evaluation Practices and Effects of Indicator Use—a Literature Review ’, Research Evaluation , 25 : 161 – 9 .

De Wit   A. M. , Bloemkolk   D. , Teunissen   T. , et al.  ( 2016 ) ‘ Voorwaarden voor Succesvolle Betrokkenheid van Patiënten/cliënten bij Medisch Wetenschappelijk Onderzoek ’, Tijdschrift voor Sociale Gezondheidszorg , 94 : 91 – 100 .

Del Carmen Calatrava Moreno   M. , Warta   K. , Arnold   E. , et al.  ( 2019 ) Science Europe Study on Research Assessment Practices . Technopolis Group Austria .


Demicheli   V. and Di Pietrantonj   C. ( 2007 ) ‘ Peer Review for Improving the Quality of Grant Applications ’, Cochrane Database of Systematic Reviews , 2 : MR000003.

Den Oudendammer   W. M. , Noordhoek   J. , Abma-Schouten   R. Y. , et al.  ( 2019 ) ‘ Patient Participation in Research Funding: An Overview of When, Why and How Amongst Dutch Health Funds ’, Research Involvement and Engagement , 5 .

Diabetesfonds ( n.d. ) Maatschappelijke Adviesraad < https://www.diabetesfonds.nl/over-ons/maatschappelijke-adviesraad > accessed 18 Sept 2022 .

Dijstelbloem   H. , Huisman   F. , Miedema   F. , et al.  ( 2013 ) ‘ Science in Transition Position Paper: Waarom de Wetenschap Niet Werkt Zoals het Moet, En Wat Daar aan te Doen Is ’, Utrecht : Science in Transition .

Forsyth   D. R. ( 1999 ) Group Dynamics , 3rd edn. Belmont : Wadsworth Publishing Company .

Geurts   J. ( 2016 ) ‘ Wat Goed Is, Herken Je Meteen ’, NRC Handelsblad < https://www.nrc.nl/nieuws/2016/10/28/wat-goed-is-herken-je-meteen-4975248-a1529050 > accessed 6 Mar 2022 .

Glerup   C. and Horst   M. ( 2014 ) ‘ Mapping “Social Responsibility” in Science ’, Journal of Responsible Innovation , 1 : 31 – 50 .

Hartmann   I. and Neidhardt   F. ( 1990 ) ‘ Peer Review at the Deutsche Forschungsgemeinschaft ’, Scientometrics , 19 : 419 – 25 .

Hirschauer   S. ( 2010 ) ‘ Editorial Judgments: A Praxeology of “Voting” in Peer Review ’, Social Studies of Science , 40 : 71 – 103 .

Hughes   A. and Kitson   M. ( 2012 ) ‘ Pathways to Impact and the Strategic Role of Universities: New Evidence on the Breadth and Depth of University Knowledge Exchange in the UK and the Factors Constraining Its Development ’, Cambridge Journal of Economics , 36 : 723 – 50 .

Huutoniemi   K. ( 2012 ) ‘ Communicating and Compromising on Disciplinary Expertise in the Peer Review of Research Proposals ’, Social Studies of Science , 42 : 897 – 921 .

Jasanoff   S. ( 2011 ) ‘ Constitutional Moments in Governing Science and Technology ’, Science and Engineering Ethics , 17 : 621 – 38 .

Kolarz   P. , Arnold   E. , Farla   K. , et al.  ( 2016 ) Evaluation of the ESRC Transformative Research Scheme . Brighton : Technopolis Group .

Lamont   M. ( 2009 ) How Professors Think : Inside the Curious World of Academic Judgment . Cambridge : Harvard University Press .

Lamont   M. Guetzkow   J. ( 2016 ) ‘How Quality Is Recognized by Peer Review Panels: The Case of the Humanities’, in M.   Ochsner , S. E.   Hug , and H.-D.   Daniel (eds) Research Assessment in the Humanities , pp. 31 – 41 . Cham : Springer International Publishing .

Lamont   M. Huutoniemi   K. ( 2011 ) ‘Comparing Customary Rules of Fairness: Evaluative Practices in Various Types of Peer Review Panels’, in C.   Charles   G.   Neil and L.   Michèle (eds) Social Knowledge in the Making , pp. 209–32. Chicago : The University of Chicago Press .

Langfeldt   L. ( 2001 ) ‘ The Decision-making Constraints and Processes of Grant Peer Review, and Their Effects on the Review Outcome ’, Social Studies of Science , 31 : 820 – 41 .

——— ( 2006 ) ‘ The Policy Challenges of Peer Review: Managing Bias, Conflict of Interests and Interdisciplinary Assessments ’, Research Evaluation , 15 : 31 – 41 .

Lee   C. J. , Sugimoto   C. R. , Zhang   G. , et al.  ( 2013 ) ‘ Bias in Peer Review ’, Journal of the American Society for Information Science and Technology , 64 : 2 – 17 .

Liu   F. Maitlis   S. ( 2010 ) ‘Nonparticipant Observation’, in A. J.   Mills , G.   Durepos , and E.   Wiebe (eds) Encyclopedia of Case Study Research , pp. 609 – 11 . Los Angeles : SAGE .

Luo   J. , Ma   L. , and Shankar   K. ( 2021 ) ‘ Does the Inclusion of Non-academic Reviewers Make Any Difference for Grant Impact Panels? ’, Science & Public Policy , 48 : 763 – 75 .

Luukkonen   T. ( 2012 ) ‘ Conservatism and Risk-taking in Peer Review: Emerging ERC Practices ’, Research Evaluation , 21 : 48 – 60 .

Macleod   M. R. , Michie   S. , Roberts   I. , et al.  ( 2014 ) ‘ Biomedical Research: Increasing Value, Reducing Waste ’, The Lancet , 383 : 101 – 4 .

Meijer   I. M. ( 2012 ) ‘ Societal Returns of Scientific Research. How Can We Measure It? ’, Leiden : Center for Science and Technology Studies, Leiden University .

Merton   R. K. ( 1968 ) Social Theory and Social Structure , Enlarged edn. [Nachdr.] . New York : The Free Press .

Moher   D. , Naudet   F. , Cristea   I. A. , et al.  ( 2018 ) ‘ Assessing Scientists for Hiring, Promotion, And Tenure ’, PLoS Biology , 16 : e2004089.

Olbrecht   M. and Bornmann   L. ( 2010 ) ‘ Panel Peer Review of Grant Applications: What Do We Know from Research in Social Psychology on Judgment and Decision-making in Groups? ’, Research Evaluation , 19 : 293 – 304 .

Patiëntenfederatie Nederland ( n.d. ) Ervaringsdeskundigen Referentenpanel < https://www.patientenfederatie.nl/zet-je-ervaring-in/lid-worden-van-ons-referentenpanel > accessed 18 Sept 2022.

Pier   E. L. , Brauer   M. , Filut   A. , et al.  ( 2018 ) ‘ Low Agreement among Reviewers Evaluating the Same NIH Grant Applications ’, Proceedings of the National Academy of Sciences , 115 : 2952 – 7 .

Prinses Beatrix Spierfonds ( n.d. ) Gebruikerscommissie < https://www.spierfonds.nl/wie-wij-zijn/gebruikerscommissie > accessed 18 Sep 2022 .

Rathenau Instituut ( 2020 ) Private Non-profit Financiering van Onderzoek in Nederland < https://www.rathenau.nl/nl/wetenschap-cijfers/geld/wat-geeft-nederland-uit-aan-rd/private-non-profit-financiering-van#:∼:text=R%26D%20in%20Nederland%20wordt%20gefinancierd,aan%20wetenschappelijk%20onderzoek%20in%20Nederland > accessed 6 Mar 2022 .

Reneman   R. S. , Breimer   M. L. , Simoons   J. , et al.  ( 2010 ) ‘ De toekomst van het cardiovasculaire onderzoek in Nederland. Sturing op synergie en impact ’, Den Haag : Nederlandse Hartstichting .

Reed   M. S. , Ferré   M. , Marin-Ortega   J. , et al.  ( 2021 ) ‘ Evaluating Impact from Research: A Methodological Framework ’, Research Policy , 50 : 104147.

Reijmerink   W. and Oortwijn   W. ( 2017 ) ‘ Bevorderen van Verantwoorde Onderzoekspraktijken Door ZonMw ’, Beleidsonderzoek Online. accessed 6 Mar 2022.

Reijmerink   W. , Vianen   G. , Bink   M. , et al.  ( 2020 ) ‘ Ensuring Value in Health Research by Funders’ Implementation of EQUATOR Reporting Guidelines: The Case of ZonMw ’, Berlin : REWARD|EQUATOR .

Reinhart   M. ( 2010 ) ‘ Peer Review Practices: A Content Analysis of External Reviews in Science Funding ’, Research Evaluation , 19 : 317 – 31 .

Reinhart   M. and Schendzielorz   C. ( 2021 ) Trends in Peer Review . SocArXiv . < https://osf.io/preprints/socarxiv/nzsp5 > accessed 29 Aug 2022.

Roumbanis   L. ( 2017 ) ‘ Academic Judgments under Uncertainty: A Study of Collective Anchoring Effects in Swedish Research Council Panel Groups ’, Social Studies of Science , 47 : 95 – 116 .

——— ( 2021a ) ‘ Disagreement and Agonistic Chance in Peer Review ’, Science, Technology & Human Values , 47 : 1302 – 33 .

——— ( 2021b ) ‘ The Oracles of Science: On Grant Peer Review and Competitive Funding ’, Social Science Information , 60 : 356 – 62 .

( 2019 ) ‘ Ruimte voor ieders talent (Position Paper) ’, Den Haag : VSNU, NFU, KNAW, NWO en ZonMw . < https://www.universiteitenvannederland.nl/recognitionandrewards/wp-content/uploads/2019/11/Position-paper-Ruimte-voor-ieders-talent.pdf >.

( 2013 ) San Francisco Declaration on Research Assessment . The Declaration . < https://sfdora.org > accessed 2 Jan 2022 .

Sarewitz   D. and Pielke   R. A.  Jr. ( 2007 ) ‘ The Neglected Heart of Science Policy: Reconciling Supply of and Demand for Science ’, Environmental Science & Policy , 10 : 5 – 16 .

Scholten   W. , Van Drooge   L. , and Diederen   P. ( 2018 ) Excellent Is Niet Gewoon. Dertig Jaar Focus op Excellentie in het Nederlandse Wetenschapsbeleid . The Hague : Rathenau Instituut .

Shapin   S. ( 2008 ) The Scientific Life : A Moral History of a Late Modern Vocation . Chicago : University of Chicago press .

Spaapen   J. and Van Drooge   L. ( 2011 ) ‘ Introducing “Productive Interactions” in Social Impact Assessment ’, Research Evaluation , 20 : 211 – 8 .

Travis   G. D. L. and Collins   H. M. ( 1991 ) ‘ New Light on Old Boys: Cognitive and Institutional Particularism in the Peer Review System ’, Science, Technology & Human Values , 16 : 322 – 41 .

Van Arensbergen   P. and Van den Besselaar   P. ( 2012 ) ‘ The Selection of Scientific Talent in the Allocation of Research Grants ’, Higher Education Policy , 25 : 381 – 405 .

Van Arensbergen   P. , Van der Weijden   I. , and Van den Besselaar   P. V. D. ( 2014a ) ‘ The Selection of Talent as a Group Process: A Literature Review on the Social Dynamics of Decision Making in Grant Panels ’, Research Evaluation , 23 : 298 – 311 .

—— ( 2014b ) ‘ Different Views on Scholarly Talent: What Are the Talents We Are Looking for in Science? ’, Research Evaluation , 23 : 273 – 84 .

Van den Brink , G. , Scholten , W. , and Jansen , T. , eds ( 2016 ) Goed Werk voor Academici . Culemborg : Stichting Beroepseer .

Weingart   P. ( 1999 ) ‘ Scientific Expertise and Political Accountability: Paradoxes of Science in Politics ’, Science & Public Policy , 26 : 151 – 61 .

Wessely   S. ( 1998 ) ‘ Peer Review of Grant Applications: What Do We Know? ’, The Lancet , 352 : 301 – 5 .

How Do I Review Thee? Let Me Count the Ways: A Comparison of Research Grant Proposal Review Criteria Across US Federal Funding Agencies

While Elizabeth Barrett Browning counted 25 ways in which she loves her husband in her poem, “How Do I Love Thee? Let Me Count the Ways,” we identified only eight ways to evaluate the potential for success of a federal research grant proposal. This may be surprising: at first glance, the review criteria used by various federal funding agencies suggest that each has its own distinct set of “rules” for reviewing grant proposals for research and scholarship. Much of the grantsmanship process depends on the review criteria, which represent the funder’s desired impact of the research. But since most funders that offer research grants share the overarching goals of supporting research that (1) fits within their mission and (2) will bring a strong return on their financial investment, the review criteria used to evaluate research grant proposals are based on a similar set of fundamental questions. In this article, we compare the review criteria of 10 US federal agencies that support research through grant programs, and demonstrate that there are actually only a small and finite number of ways that a grant proposal can be evaluated. Though each funding agency may use slightly different wording, we found that the majority of the agencies’ criteria address eight key questions. Within the highly competitive landscape of research grant funding, new researchers must find support for their research agendas, and established investigators and research development offices must consider ways to diversify their funding portfolios; all, however, may be discouraged by the apparent myriad of differences in review criteria used by various funding agencies. Guided by research administrators and research development professionals, recognizing that grant proposal review criteria are similar across funding agencies may help lower the barrier to applying for federal funding for new and early career researchers, or facilitate funding portfolio diversification for experienced researchers. The comparison also offers grantmakers useful guidance for developing and refining their own proposal review criteria.

Introduction

The research funding landscape in the United States is highly competitive, with flat or shrinking budgets for investigator-initiated research programs at most federal agencies ( American Association for the Advancement of Science (AAAS), 2014 ). Taking biomedical research as an example, in 2014, the National Institutes of Health (NIH) budgeted $15 billion to fund research project grants, an amount that has essentially remained the same since 2003 ( AAAS, 2014 ; Federation of American Societies for Experimental Biology, 2014 ). At the same time, the number of research grant applications has steadily increased, from close to 35,000 in 2003 to 51,000 in 2014. The result has been a stunning drop of more than a third in funding success rates, from 30.2% in 2003 to 18.8% in 2014. Other federal agencies that fund research, including the National Science Foundation (NSF), Department of Veterans Affairs (VA), and Department of Defense (DoD), are feeling a similar sting of budget restrictions.
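A quick arithmetic check, using only the figures cited above, makes the size of that squeeze explicit (the calculation is ours; the FY2003 and FY2014 numbers are those reported in the paragraph):

# Rough check of the NIH figures cited above (FY2003 vs. FY2014).
applications_2003, applications_2014 = 35_000, 51_000
success_2003, success_2014 = 30.2, 18.8  # success rates in percent

point_drop = success_2003 - success_2014                     # 11.4 percentage points
relative_drop = 100 * point_drop / success_2003              # ~38% relative decline
application_growth = 100 * (applications_2014 - applications_2003) / applications_2003  # ~46% more applications

print(f"Success rate fell {point_drop:.1f} points ({relative_drop:.0f}% relative) "
      f"while applications grew {application_growth:.0f}%.")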

Within this tenuous funding environment, it has become essential that investigators and research development offices sustain their research programs by continuing to encourage new researchers to apply for grant support and encouraging established researchers to diversify their funding portfolios. New researchers benefit from clear information about the federal grant process, and experienced researchers benefit from considering funding opportunities from federal funding agencies, national organizations and advocacy groups, state agencies, private philanthropic organizations, regional or local special interest groups, corporations, and internal institutional grant competitions that may not be their typical targets for support. With increasing competition for grant funding, investigators who might be accustomed to one set of rules for preparing grant proposals may become quickly overwhelmed by the prospect of learning entirely new sets of rules for different funding agencies.

Yet this process is not as daunting if we start from the perspective that any funder that offers research grants has essentially the same goal: to support research that fits within its mission and will bring a strong return on its financial investment ( Russell & Morrison, 2015 ). The review criteria used to evaluate research grant proposals reflect the funder’s approach to identifying the most relevant and impactful research to support ( Geever, 2012 ; Gerin & Kapelewski, 2010 ; Kiritz, 2007 ). Thus, planning and preparing a successful grant proposal depends on a clear understanding of the review criteria that will be used. These criteria directly inform how the proposal content should be presented and how much space should be afforded to each section of the proposal, as well as which keywords should be highlighted. It may seem that each funder—federal, state, local, private—has its own distinct set of rules regarding the preparation and review of grant proposals, and that each funder uses specific jargon in its review process. However, because all funders aim to support research that is relevant and impactful, we suggest that the mandatory review criteria used to evaluate research grant proposals are based on a set of fundamental questions, such as: Does this research fit within the funder’s mission? Will the results of this research fill a gap in knowledge or meet an unmet need? Do the investigators have the skills and resources necessary to carry out the research?

In this article, we examine the research grant proposal review criteria used by 10 US federal agencies to demonstrate that there exist only a small and finite number of ways that federal research grant proposals are actually evaluated. Our goal is to help research administrators and research development professionals empower investigators to more confidently navigate funder review criteria, thereby lowering the barrier to first-time applicants or to grant portfolio diversification for more established researchers. Recognizing that research proposal review criteria are aligned across federal funding agencies can also help proposal writers who might be faced with other funding opportunities in which the review criteria are not clearly defined. On the flip side of that equation, understanding that review criteria are based on the same core goals can help grantmakers as they develop and refine review criteria for their funding opportunities.

Observations

We performed an online search of 10 US federal agencies’ (NIH, NSF, VA, Department of Education [ED], DoD, National Aeronautics and Space Administration [NASA], Department of Energy [DOE], United States Department of Agriculture [USDA], National Endowment for the Humanities [NEH], and National Endowment for the Arts [NEA]) websites to identify policies and procedures related to their research grant proposal review process. The NIH Office of Extramural Research (OER) website provided the greatest detail and transparency with regard to the review criteria and review process used for evaluating research grant proposals ( National Institutes of Health, 2008a ; 2008b ; 2015a ), and served as a starting point for our analysis of the review criteria for the other nine agencies. We developed key questions corresponding to each of the NIH review criteria, and then aligned the review criteria of the remaining nine agencies with these key questions.

Federal grant program guidance and policy changes occur frequently; the links to online resources for research grant proposal policies for each of the various funding agencies included in our analysis were current as of August 10, 2015. Note that our analysis includes information from the National Institute on Disability and Rehabilitation Research (NIDRR) program as administered by ED. On June 1, 2015, the NIDRR was transferred from ED to the Administration for Community Living (ACL) in the US Department of Health and Human Services (DHHS), and is now called the National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR) Field-Initiated Program. Our analysis of NIDRR was current as of May 4, 2015.

Also note that there is variability between different research grant programs within each federal agency. We included in our analysis review criteria from the DoD Congressionally Directed Medical Research Programs (CDMRP), the USDA National Institute of Food and Agriculture, the NEH Digital Humanities Start-up program, and the NEA ART WORKS program. Criteria for NASA research programs were compiled from numerous NASA Research Announcements.

The NIH review criteria

The NIH criteria emphasize clinical, interdisciplinary, and translational biomedical research ( National Institutes of Health, 2008a ). Reviewers are instructed to evaluate research grant proposals based on how well five core review criteria are met: Significance, Innovation, Approach, Investigator(s), and Environment ( Table 1 ) ( National Institutes of Health, 2015a ; 2015b ). Assigned reviewers consider each of the five core review criteria and assign a separate score for each using a 9-point scale. These ratings are included in a summary statement that is provided to the researcher, whether or not the entire study section ultimately discusses the proposal.

The NIH core review criteria for research project grant proposals

Review Criterion   Key Question
Significance       Why does the research matter?
Innovation         How is the research new?
Approach           How will the research be done?
Environment        In what context will the research be done (e.g., facilities, resources, equipment, and institutional support)?
Investigator       What is special about the people doing the research?
Overall Impact     What is the return on investment?

NIH, National Institutes of Health.

Each of the five core review criteria can be simplified into a general question. The Significance criterion asks reviewers to consider “Why does the research matter?” Reviewers look for whether the proposed project will address an important problem or critical barrier to progress in the field, and whether the knowledge gained from the proposed research will advance scientific knowledge, technical capacity, or clinical practice to drive the field forward. Innovation translates into “How is the research new?” Reviewers consider how the proposed research challenges current thinking with novel concepts, approaches, tools, or treatments. Approach asks, “How will the research be done?” Reviewers assess the proposed research strategy, methodology, and analyses and determine whether they are appropriate to achieve the aims of the project, and how riskier aspects of the proposal might be handled with alternative approaches. The remaining two core criteria evaluate the context in which the research will be done—defined as the collective set of resources, equipment, institutional support, and facilities available (Environment)—and what is special about the people doing the research (Investigator). For the Environment criterion, reviewers evaluate whether the resources and institutional support available to the investigators are sufficient to ensure successful completion of the research aims, including any unique features such as access to specific subject populations or collaborative arrangements. For the Investigator criterion, reviewers determine whether the primary investigator (PI), other researchers, and any collaborators have the experience and training needed to complete the proposed research, as well as how collaborators will combine their skills and work together.

The five core review criteria ratings, in addition to other proposal-specific criteria, are then used to determine an Overall Impact/Priority Score ( National Institutes of Health, 2015a ; 2015b ). This score reflects the reviewers’ assessment of the “likelihood for the project to exert a sustained, powerful influence on the research field(s) involved.” An application does not need to have exemplary scores in all criteria in order to be judged as likely to have a high overall impact. For example, a project that by its nature is not highly innovative may nevertheless be deemed essential to advance knowledge within a field. A 2011 study by the National Institute of General Medical Sciences (NIGMS) examined the correlation between the core review criteria scores and the Overall Impact score and found that reviewers weighted certain criteria more heavily than others, in the following order: Approach > Significance > Innovation > Investigator > Environment ( Rockey, 2011 ). Thus, the quality of ideas appeared to matter more than investigator reputation, a particularly good finding for new investigators ( Berg, 2010a ; 2010b ; 2010c ). This relative weighting also suggests that, in terms of space, proposers should devote more of the proposal narrative to their approach and the project’s significance than to the environment supporting the project.

Other agencies have formalized systems for weighting grant proposal review criteria. For example, the ED NIDRR standard selection criteria are weighted using a points designation ( US Department of Education, 2014 ): Design of Research Activities (50 pts); Importance of the Problem (15 pts); Project Staff (15 pts); Plan of Evaluation (10 pts); and Adequacy and Accessibility of Resources (10 pts). Similar to NIH reviewers, ED weights research design and the importance of the problem more heavily than staff or resources when evaluating grant proposals ( Committee on the External Evaluation of NIDRR and Its Grantees, National Research Council, Rivard, O’Connell, & Wegman, 2011 ).
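To make the weighting concrete, the toy calculation below combines hypothetical reviewer ratings under the ED NIDRR point values listed above; the 0-to-1 ratings and the multiply-and-sum scoring are illustrative assumptions, not ED’s actual scoring mechanics:

# Illustrative only: combine hypothetical reviewer ratings under the
# ED NIDRR point weights cited above. ED's real scoring procedure may differ.
MAX_POINTS = {
    "Design of Research Activities": 50,
    "Importance of the Problem": 15,
    "Project Staff": 15,
    "Plan of Evaluation": 10,
    "Adequacy and Accessibility of Resources": 10,
}

ratings = {  # hypothetical reviewer ratings on a 0-1 scale
    "Design of Research Activities": 0.8,
    "Importance of the Problem": 0.9,
    "Project Staff": 0.7,
    "Plan of Evaluation": 0.6,
    "Adequacy and Accessibility of Resources": 1.0,
}

total = sum(MAX_POINTS[criterion] * ratings[criterion] for criterion in MAX_POINTS)
print(f"Weighted total: {total:.1f} / {sum(MAX_POINTS.values())} points")

Because Design of Research Activities carries half the available points, a weak rating there drags the total down far more than a weak rating on resources, which is exactly the emphasis described above.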

How do the NIH review criteria compare to those of other federal agencies?

The most straightforward comparison of research grant review criteria is between the NIH and NSF, which together account for roughly a quarter of the federal research and development budget in the US ( AAAS, 2014 ). The NSF criteria emphasize transformative and interdisciplinary research ( National Science Foundation, 2007 ), and involve three (3) guiding principles, two (2) review criteria, and five (5) review elements ( National Science Foundation, 2014 ). The two review criteria used by the NSF are Intellectual Merit, which encompasses the potential to advance the field, and Broader Impacts, which encompasses the potential to benefit society and contribute to the achievement of specific, desired societal outcomes. Within each of these two review criteria are five review elements ( Figure 1 ). These five review elements line up remarkably well with the NIH core review criteria ( Table 2 ), with both agencies’ criteria addressing a similar set of concepts but using distinct language to describe each criterion.

[Figure 1. NSF Merit Review Criteria ( National Science Foundation, 2014 )]

Comparison of the NIH and NSF research grant proposal review criteria

Key question: Why does the research matter?
  NIH (Significance): project addresses an important problem or a critical barrier to progress in the field.
  NSF (Intellectual Merit): potential of the activity to advance knowledge and understanding; (Broader Impact): potential of the activity to benefit society.

Key question: How is the research new?
  NIH (Innovation): project challenges current paradigms by utilizing novel theoretical concepts, approaches or methodologies, instrumentation, or interventions.
  NSF: creative, original, and transformative concepts and activities.

Key question: How will the research be done?
  NIH (Approach): overall strategy, methodology, and analyses are well reasoned and appropriate to accomplish the specific aims of the project.
  NSF: well-reasoned, well-organized, rational plan for carrying out proposed activities and a mechanism to assess success.

Key question: In what context will the research be done?
  NIH (Environment): scientific environment in which the work will be done contributes to the probability of success.
  NSF: adequate resources available to carry out the proposed activities.

Key question: What is special about the people doing the research?
  NIH (Investigators): PD/PIs, collaborators, and other researchers are well suited to the project.
  NSF: qualified individual, team, or institution conducting the proposed activities.

Key question: What is the return on investment?
  NIH (Overall Impact): likelihood for the project to exert a sustained, powerful influence on the research field(s) involved.
  NSF: the potential to benefit society and contribute to the achievement of specific, desired societal outcomes.

NIH, National Institutes of Health; NSF, National Science Foundation; PD, program director; PI, principal investigator.

What about a non-science funding agency like the NEH? While there is some variability between individual NEH grant programs, the NEH application review criteria are: Humanities Significance, Project Feasibility and Work Plan, Quality of Innovation, Project Staff Qualifications, and Overall Value to Humanities Scholarship ( National Endowment for the Humanities, 2015a ; 2015b ). The significance of the project includes its potential to enhance research, teaching, and learning in the humanities. The quality of innovation is evaluated in terms of the idea, approach, method, or digital technology (and the appropriateness of the technology) that will be used in the project. Reviewers also examine the qualifications, expertise, and levels of commitment of the project director and key project staff or contributors. The quality of the conception, definition, organization, and description of the project and the applicant’s clarity of expression, as well as the feasibility of the plan of work are also assessed. Finally, reviewers consider the likelihood that the project will stimulate or facilitate new research of value to scholars and general audiences in the humanities. Table 3 shows the NEH review criteria compared with those used by the NIH and NSF. Though there is not an exact match for the key question “In what context will the research be done?” (i.e., the research environment and available resources), this is evaluated in NEH proposals as part of the Project Feasibility and Work Plan.

Comparison of research grant proposal review criteria used by the NIH, NSF, and NEH

Key question: Why does the research matter?
  NIH: Significance
  NSF: Intellectual Merit – potential of the activity to advance knowledge and understanding; Broader Impact – potential of the activity to benefit society
  NEH: Humanities Significance

Key question: How is the research new?
  NIH: Innovation
  NSF: Creative, original, and transformative concepts and activities
  NEH: Quality of Innovation

Key question: How will the research be done?
  NIH: Approach
  NSF: Well-reasoned, well-organized, rational plan for carrying out proposed activities and mechanism to assess success
  NEH: Project Feasibility and Work Plan

Key question: In what context will the research be done?
  NIH: Environment
  NSF: Adequate resources available to carry out the proposed activities
  NEH: Project Feasibility and Work Plan

Key question: What is special about the people doing the research?
  NIH: Investigators
  NSF: Qualified individual, team, or institution conducting the proposed activities
  NEH: Project Staff Qualifications

Key question: What is the return on investment?
  NIH: Overall Impact
  NSF: The potential to benefit society and contribute to the achievement of specific, desired societal outcomes
  NEH: Overall Value to Humanities Scholarship

NIH, National Institutes of Health; NSF, National Science Foundation; NEH, National Endowment for the Humanities.

Comparing review criteria across federal agencies: Eight key questions

In addition to the core review criteria mentioned above, funding agencies typically ask reviewers to consider the project budget and the approach that will be used to evaluate project success. When we expanded the comparison of research grant proposal review criteria across 10 US federal agencies, and included the budget and evaluation criteria, we found that all of the agencies’ review criteria aligned with a consistent set of eight key questions that reviewers consider when evaluating any type of research proposal ( Table 4 ).

Eight key questions considered by reviewers of research grant proposals and the associated review criteria terms used by 10 US federal funding agencies

Key Question                                                Review Criteria Terms
Why does it matter?                                         Significance; Importance
How is it new?                                              Innovation; Novelty; Creativity
How will it be done?                                        Approach; Plan; Methodology; Objectives; Aims
In what context will it be done?                            Environment; Resources; Populations; Facilities
What is special about the people involved?                  Investigators; Organization; People; Researchers; Personnel; Partners; Collaborators; Staff
What is the return on investment?                           Impact; Value; Relevance
How effectively will the financial resources be managed?    Budget
How will success be determined?                             Evaluation; Assessment
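For readers who want to work with this mapping programmatically, it can be restated directly as a small data structure. The sketch below simply transcribes Table 4 into Python; the dictionary name and the printing loop are ours:

# The eight key questions and the review-criteria vocabulary from Table 4.
KEY_QUESTIONS = {
    "Why does it matter?": ["Significance", "Importance"],
    "How is it new?": ["Innovation", "Novelty", "Creativity"],
    "How will it be done?": ["Approach", "Plan", "Methodology", "Objectives", "Aims"],
    "In what context will it be done?": ["Environment", "Resources", "Populations", "Facilities"],
    "What is special about the people involved?": [
        "Investigators", "Organization", "People", "Researchers",
        "Personnel", "Partners", "Collaborators", "Staff"],
    "What is the return on investment?": ["Impact", "Value", "Relevance"],
    "How effectively will the financial resources be managed?": ["Budget"],
    "How will success be determined?": ["Evaluation", "Assessment"],
}

for question, terms in KEY_QUESTIONS.items():
    print(f"{question} -> {', '.join(terms)}")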

The research grant proposal review criteria used by the 10 federal funding agencies are associated with these eight key questions ( Table 5 ). We have already demonstrated that the question, “Why does it matter?”—which addresses the importance or significance of the proposed project—applies to similar review criteria from the NIH (Significance), NSF (Intellectual Merit), and the NEH (Humanities Significance) ( National Endowment for the Humanities, 2015a ; 2015b ; National Institutes of Health, 2015a , 2015b ; National Science Foundation, 2014 ). Likewise, ED evaluates the “Importance of the Problem” ( US Department of Education, 2014 ); the DoD application review criteria include “Importance” ( Department of Defense, 2015 ); the VA and NASA each evaluate “Significance” ( National Aeronautics and Space Administration, 2015 ; US Department of Veterans Affairs, 2015 ); the DOE looks at “Scientific and Technical Merit” ( US Department of Energy, 2015 ); the USDA evaluates “Project Relevance” ( United States Department of Agriculture, 2015 ); and the NEA assesses “Artistic Excellence” ( National Endowment for the Arts, 2015 ). There are also parallels in the language used by each of the funders as they ask reviewers to assess proposed research project innovation or novelty, the approach or methodology to be used, the investigators or personnel involved, the environment and resources available, and the overall impact or value of the project ( Table 5 ).

Comparison of research grant proposal review criteria across 10 US federal funding agencies

Why does it matter?
  NIH: Significance
  NSF: Intellectual Merit – potential of the activity to advance knowledge and understanding; Broader Impact – potential of the activity to benefit society
  VA: Significance
  ED: Importance of the Problem; Responsiveness to Absolute Priority
  DoD: Importance
  NASA: Significance
  DOE: Scientific and Technical Merit
  USDA: Relevance
  NEH: Humanities Significance
  NEA: Artistic Excellence – artistic significance

How is it new?
  NIH: Innovation
  NSF: Creative, original, and transformative concepts and activities
  VA: Innovation
  ED: Responsiveness to Absolute Priority
  DoD: Innovation
  NASA: Unique and innovative methods, approaches, concepts, or advanced technologies
  DOE: Innovative methods, approaches, concepts, or advanced technologies
  USDA: Scientific Merit – novelty, innovation, uniqueness, originality
  NEH: Quality of innovation in terms of the idea, approach, method, or digital technology
  NEA: Artistic Merit – extent to which the project deepens and extends the arts' value

How will it be done?
  NIH: Approach
  NSF: Well-reasoned, well-organized, rational plan
  VA: Scientific Approach
  ED: Quality of Project Design; Technical Assistance; Design of Dissemination
  DoD: Research Strategy and Feasibility
  NASA: Overall scientific or technical merit
  DOE: Technical Approach
  USDA: Scientific Merit – conceptual adequacy, clarity of objectives, feasibility
  NEH: Project's feasibility, design, cost, and work plan
  NEA: Artistic Merit – quality and clarity of project goals and design

In what context will it be done?
  NIH: Environment
  NSF: Adequate resources available to carry out the proposed activities
  VA: Feasibility – environment available to conduct the studies
  ED: Adequacy and Accessibility of Resources
  DoD: Environment
  NASA: Capabilities, related experience, and facilities
  DOE: Feasibility – Technical and Management Capabilities
  USDA: Adequacy of Facilities and Project Management
  NEH: N/A
  NEA: Artistic Merit – resources involved

What is special about the people involved?
  NIH: Investigator
  NSF: Qualified individual, team, or institution conducting the proposed activities
  VA: Feasibility – expertise of the PI and collaborators
  ED: Project Staff and Training
  DoD: Personnel
  NASA: Qualifications, capabilities, and experience of the PI, team leader, or key personnel
  DOE: Feasibility – Technical and Management Capabilities
  USDA: Qualifications of Project Personnel
  NEH: Qualifications, expertise, and levels of commitment of the project director and key project staff or contributors
  NEA: Artistic Excellence – quality of the artists, art organizations, arts education providers, works of art, or services; Artistic Merit – project personnel

What is the return on investment?
  NIH: Overall Impact
  NSF: Broader Impact – potential to benefit society and contribute to the achievement of specific, desired societal outcomes
  VA: Relevance to the healthcare of veterans
  ED: Design of Dissemination Activities
  DoD: Impact
  NASA: Relevance
  DOE: N/A
  USDA: Relevance and Importance to US agriculture
  NEH: Likelihood of stimulating or facilitating new research in the humanities
  NEA: Artistic Merit – potential impact on artists, the artistic field, and the organization's community

How effectively will the financial resources be managed?
  NIH: Budget
  NSF: N/A
  VA: N/A
  ED: Adequacy and Reasonableness of the Budget
  DoD: Budget
  NASA: Evaluation of cost
  DOE: Reasonableness and appropriateness of the proposed budget
  USDA: N/A
  NEH: Project's feasibility, design, cost, and work plan
  NEA: Artistic Merit – appropriateness of the budget

How will success be determined?
  NIH: N/A
  NSF: Mechanism to assess success
  VA: N/A
  ED: Plan of Evaluation
  DoD: N/A
  NASA: Evaluation against the state-of-the-art
  DOE: N/A
  USDA: N/A
  NEH: N/A
  NEA: Artistic Merit – appropriateness of the proposed performance measurements

NIH, National Institutes of Health; NSF, National Science Foundation; VA, Department of Veterans Affairs; ED, Department of Education; DoD, Department of Defense; NASA, National Aeronautics and Space Administration; DOE, Department of Energy; USDA, US Department of Agriculture; NEH, National Endowment for the Humanities; NEA, National Endowment for the Arts; N/A, not applicable.

While all the agencies’ collective review criteria fall within the eight key questions, there is some variability across agencies. For example, the DOE does not have a clear review criterion for evaluating the overall impact or value of a project, equivalent to the key question “What is the return on investment?” Some agencies do not explicitly include the budget as part of their review criteria, such as the NSF, VA, and USDA, while other agencies do not specifically ask for a plan to evaluate success of the project, including the NIH, VA, DoD, DOE, USDA, and NEH. Funders may also have unique review criteria. Unlike the other nine agencies evaluated, the DoD uses the review criterion “Application Presentation,” which assesses the writing, clarity, and presentation of the application components. Agencies may also have mission- or program-specific review criteria; for example, for certain applications, the NEA may evaluate the potential to reach underserved populations as part of “Artistic Merit.” Despite these differences, it is clear that for the 10 federal funding agencies examined, the review criteria used to evaluate research grant proposals are extraordinarily aligned.

If we remember that all funding agencies are trying to evaluate research grant proposals to reach the same goals—to determine which projects fit within their mission and will provide a return on their financial investment—it is perhaps not all that surprising that the review criteria that federal funding agencies use are aligned. We further propose that funding announcements from any funder, including state agencies, local groups, and private philanthropic organizations, similarly ask for research grant proposals to answer some, if not all, of the eight key questions that emerged from our analysis of US federal funding agencies. Keeping these key questions in mind can help research administrators and research development offices, as well as proposal writers, decipher research grant proposal review criteria from almost any funding agency, thereby facilitating proposal development.
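One modest, practical use of this alignment is a first-pass tagging of a funder’s published criteria against the vocabulary in Table 4. The sketch below is illustrative only: the keyword map is abridged, the function name is ours, and real criteria statements (for example, “Scientific and Technical Merit”) will still require human judgment rather than simple string matching:

# Illustrative sketch: map a funder's criterion wording onto the eight key questions
# using vocabulary drawn from Table 4. Keyword coverage is deliberately minimal.
TERM_TO_QUESTION = {
    "significance": "Why does it matter?",
    "importance": "Why does it matter?",
    "innovation": "How is it new?",
    "novelty": "How is it new?",
    "approach": "How will it be done?",
    "plan": "How will it be done?",
    "methodology": "How will it be done?",
    "environment": "In what context will it be done?",
    "resources": "In what context will it be done?",
    "facilities": "In what context will it be done?",
    "investigator": "What is special about the people involved?",
    "personnel": "What is special about the people involved?",
    "staff": "What is special about the people involved?",
    "impact": "What is the return on investment?",
    "relevance": "What is the return on investment?",
    "budget": "How effectively will the financial resources be managed?",
    "evaluation": "How will success be determined?",
    "assessment": "How will success be determined?",
}

def tag_criterion(criterion: str) -> list[str]:
    """Return the key questions whose vocabulary appears in a criterion statement."""
    text = criterion.lower()
    return sorted({question for term, question in TERM_TO_QUESTION.items() if term in text})

# Hypothetical criterion wording, loosely modeled on the agency terms in Table 5.
for criterion in ["Adequacy and Accessibility of Resources",
                  "Scientific and Technical Merit",
                  "Plan of Evaluation"]:
    print(criterion, "->", tag_criterion(criterion) or ["(needs human judgment)"])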

For this article, we limited our analysis to the review criteria used across different US federal funders to evaluate research grant proposals, and did not include criteria used for other federal funding mechanisms, such as training grants or contract proposals. NIH has compared the review criteria used across their various funding mechanisms, including research grants, grants for conferences and scientific meetings, small business innovation or technology transfer grants, fellowship and career development grants, and training grants, among others ( National Institutes of Health, 2014 ). Again, while there are differences in the language used to describe each core review criterion across the various grant mechanisms, the concepts being reviewed—what is being done, why it is being done, how it is new, who is doing the work, and where it will be done—are essentially the same across each mechanism.

We have demonstrated that research grant proposal review criteria are remarkably aligned across 10 US federal funding agencies, despite the differences in their missions and the terminology each uses for its own review process ( Table 5 ). Moreover, a set of only eight key questions summarizes the collective research grant proposal review criteria across all these federal agencies. While the sheer number of non-federal funding opportunities makes a similar comparative analysis of their review criteria impractical, we suggest that the eight key questions emerging from our analysis provide a starting point for researchers, research administrators, and funders to assess the review criteria used by most, if not all, other research funding opportunities. This is reasonable given that each funder is trying to achieve the same goal during the grant review process: find those research projects that fit the funder’s mission and are worth its investment. Through this lens, the review criteria used for research proposals across agencies are easier to understand and address, which may encourage new investigators to apply for funding, and seasoned investigators and research development offices to consider a diversified set of funding sources for their research portfolios. We also hope that this analysis provides guidance to other grantmakers as they develop review criteria for their own funding opportunities. For the 10 US federal agencies included here, we hope that the analysis serves as a starting point to develop even greater consistency across the review criteria—perhaps even a single canonical, cross-agency set of review criteria—used to evaluate federal research grant proposals.

Acknowledgments

Author’s Note

The authors would like to thank Amy Lamborg, MS, MTSC, for providing invaluable insights and for reviewing the manuscript.

The work is based on material developed by HJF-K for the Grantsmanship for the Research Professionals course at Northwestern University School of Professional Studies (SCS PHIL_ NP 380-0), and was presented in part at the National Organization of Research Development Professionals 7th Annual Research Development Conference in Bethesda, MD, April 29- May 1, 2015.

  • American Association for the Advancement of Science Intersociety Working Group. (2014). AAAS report XXXIX: Research and development FY 2015. Retrieved from http://www.aaas.org/page/aaas-report-xxxix-research-and-development-fy-2015
  • Berg, J. (2010a). Even more on criterion scores: Full regression and principal component analysis. NIGMS Feedback Loop Blog. Retrieved June 17, 2015, from https://loop.nigms.nih.gov/2010/07/even-more-on-criterion-scores-full-regression-and-principal-component-analyses/
  • Berg, J. (2010b). Model organisms and the significance of significance. NIGMS Feedback Loop Blog. Retrieved June 17, 2015, from https://loop.nigms.nih.gov/2010/07/model-organisms-and-the-significance-of-significance/
  • Berg, J. (2010c). More on criterion scores. NIGMS Feedback Loop Blog. Retrieved June 17, 2015, from https://loop.nigms.nih.gov/2010/07/more-on-criterion-scores/
  • Committee on the External Evaluation of NIDRR and Its Grantees, National Research Council. (2011). Review of disability and rehabilitation research: NIDRR grantmaking processes and products (J. C. Rivard, M. E. O’Connell, & D. H. Wegman, Eds.). Washington, DC: National Academies Press.
  • Department of Defense. (2015). Congressionally directed medical research programs: Funding opportunities. Retrieved June 17, 2015, from http://cdmrp.army.mil/funding/prgdefault.shtml
  • Federation of American Societies for Experimental Biology (FASEB). (2014). NIH research funding trends: FY1995-2014. Retrieved June 17, 2015, from http://www.faseb.org/Policy-and-Government-Affairs/Data-Compilations/NIH-Research-Funding-Trends.aspx
  • Geever, J. C. (2012). Guide to proposal writing (6th ed.). New York, NY: The Foundation Center.
  • Gerin, W., & Kapelewski, C. H. (2010). Writing the NIH grant proposal: A step-by-step guide (2nd ed.). Thousand Oaks, CA: SAGE Publications.
  • Kiritz, N. J. (2007). Program planning and proposal writing. Los Angeles, CA: Grantsmanship Center.
  • National Aeronautics and Space Administration. (2015). Guidebook for proposers responding to a NASA Research Announcement (NRA) or Cooperative Agreement Notice (CAN). Retrieved June 17, 2015, from http://www.hq.nasa.gov/office/procurement/nraguidebook/proposer2015.pdf
  • National Endowment for the Arts. (2015). ART WORKS guidelines: Application review. Retrieved June 17, 2015, from http://arts.gov/grants-organizations/art-works/application-review
  • National Endowment for the Humanities. (2015a). NEH’s application review process. Retrieved June 17, 2015, from http://www.neh.gov/grants/application-process#panel
  • National Endowment for the Humanities. (2015b). Office of Digital Humanities: Digital humanities start-up grants. Retrieved August 10, 2015, from http://www.neh.gov/files/grants/digital-humanities-start-sept-16-2015.pdf
  • National Institutes of Health. (2008a). Enhancing peer review: The NIH announces enhanced review criteria for evaluation of research applications received for potential FY2010 funding. Retrieved June 17, 2015, from https://grants.nih.gov/grants/guide/notice-files/NOT-OD-09-025.html
  • National Institutes of Health. (2008b). Enhancing peer review: The NIH announces new scoring procedures for evaluation of research applications received for potential FY2010 funding. Retrieved June 17, 2015, from http://grants.nih.gov/grants/guide/notice-files/NOT-OD-09-024.html
  • National Institutes of Health. (2014). Review criteria at a glance. Retrieved June 17, 2015, from https://grants.nih.gov/grants/peer/Review_Criteria_at_a_Glance_MasterOA.pdf
  • National Institutes of Health. (2015a). Office of Extramural Research support: Peer review process. Retrieved September 10, 2015, from http://grants.nih.gov/grants/peer_review_process.htm
  • National Institutes of Health. (2015b). Scoring system and procedure. Retrieved June 17, 2015, from https://grants.nih.gov/grants/peer/guidelines_general/scoring_system_and_procedure.pdf
  • National Science Foundation. (2007). Important notice no. 130: Transformative research. Retrieved June 17, 2015, from http://www.nsf.gov/pubs/2007/in130/in130.jsp
  • National Science Foundation. (2014). Chapter III - NSF proposal processing and review. In Grant proposal guide. Retrieved September 10, 2015, from http://www.nsf.gov/pubs/policydocs/pappguide/nsf15001/gpg_3.jsp#IIIA
  • Rockey, S. (2011). Correlation between overall impact scores and criterion scores. Rock Talk. Retrieved June 17, 2015, from http://nexus.od.nih.gov/all/2011/03/08/overall-impact-and-criterion-scores/
  • Russell, S. W., & Morrison, D. C. (2015). The grant application writer’s workbook: Successful proposals to any agency. Buellton, CA: Grant Writers’ Seminars and Workshops, LLC.
  • U.S. Department of Agriculture. (2015). National Institute of Food and Agriculture (NIFA) peer review process for competitive grant applications. Retrieved June 17, 2015, from http://nifa.usda.gov/resource/nifa-peer-review-process-competitive-grant-applications
  • U.S. Department of Education, Office of Special Education and Rehabilitative Services. (2014). FY 2014 application kit for new grants under the National Institute on Disability and Rehabilitation Research: Field initiated program (research or development). Retrieved July 8, 2015, from https://www2.ed.gov/programs/fip/2014-133g1-2.doc
  • U.S. Department of Energy. (2015). Merit review guide. Retrieved June 17, 2015, from http://energy.gov/management/downloads/merit-review-guide
  • U.S. Department of Veterans Affairs, Office of Research & Development. (2015). BLR&D/CSR&D merit review program. Retrieved June 17, 2015, from http://www.research.va.gov/services/shared_docs/merit_review.cfm


How to Write a Research Proposal | Examples & Templates

Published on October 12, 2022 by Shona McCombes and Tegan George. Revised on September 5, 2024.

Structure of a research proposal

A research proposal describes what you will investigate, why it’s important, and how you will conduct your research.

The format of a research proposal varies between fields, but most proposals will contain at least these elements:

  • Introduction
  • Literature review
  • Research design
  • Reference list

While the sections may vary, the overall objective is always the same. A research proposal serves as a blueprint and guide for your research plan, helping you get organized and feel confident in the path forward you choose to take.


Academics often have to write research proposals to get funding for their projects. As a student, you might have to write a research proposal as part of a grad school application , or prior to starting your thesis or dissertation .

In addition to helping you figure out what your research can look like, a proposal can also serve to demonstrate why your project is worth pursuing to a funder, educational institution, or supervisor.

Research proposal aims:

  • Show your reader why your project is interesting, original, and important.
  • Demonstrate your comfort and familiarity with your field.
  • Show that you understand the current state of research on your topic.
  • Make a case for your methodology.
  • Demonstrate that you have carefully thought about the data, tools, and procedures necessary to conduct your research.
  • Confirm that your project is feasible within the timeline of your program or funding deadline.

Research proposal length

The length of a research proposal can vary quite a bit. A bachelor’s or master’s thesis proposal can be just a few pages, while proposals for PhD dissertations or research funding are usually much longer and more detailed. Your supervisor can help you determine the best length for your work.

One trick to get started is to think of your proposal’s structure as a shorter version of your thesis or dissertation, only without the results, conclusion, and discussion sections.

Download our research proposal template


Writing a research proposal can be quite challenging, but a good starting point could be to look at some examples. We’ve included a few for you below.

  • Example research proposal #1: “A Conceptual Framework for Scheduling Constraint Management”
  • Example research proposal #2: “Medical Students as Mediators of Change in Tobacco Use”

Like your dissertation or thesis, the proposal will usually have a title page that includes:

  • The proposed title of your project
  • Your supervisor’s name
  • Your institution and department

The first part of your proposal is the initial pitch for your project. Make sure it succinctly explains what you want to do and why.

Your introduction should:

  • Introduce your topic
  • Give necessary background and context
  • Outline your  problem statement  and research questions

To guide your introduction , include information about:

  • Who could have an interest in the topic (e.g., scientists, policymakers)
  • How much is already known about the topic
  • What is missing from this current knowledge
  • What new insights your research will contribute
  • Why you believe this research is worth doing


As you get started, it’s important to demonstrate that you’re familiar with the most important research on your topic. A strong literature review  shows your reader that your project has a solid foundation in existing knowledge or theory. It also shows that you’re not simply repeating what other people have already done or said, but rather using existing research as a jumping-off point for your own.

In this section, share exactly how your project will contribute to ongoing conversations in the field by:

  • Comparing and contrasting the main theories, methods, and debates
  • Examining the strengths and weaknesses of different approaches
  • Explaining how you will build on, challenge, or synthesize prior scholarship

Following the literature review, restate your main  objectives . This brings the focus back to your own project. Next, your research design or methodology section will describe your overall approach, and the practical steps you will take to answer your research questions.


To finish your proposal on a strong note, explore the potential implications of your research for your field. Emphasize again what you aim to contribute and why it matters.

For example, your results might have implications for:

  • Improving best practices
  • Informing policymaking decisions
  • Strengthening a theory or model
  • Challenging popular or scientific beliefs
  • Creating a basis for future research

Last but not least, your research proposal must include correct citations for every source you have used, compiled in a reference list . To create citations quickly and easily, you can use our free APA citation generator .

Some institutions or funders require a detailed timeline of the project, asking you to forecast what you will do at each stage and how long it may take. While not always required, be sure to check the requirements of your project.

Here’s an example schedule to help you get started. You can also download a template at the button below.

Download our research schedule template

Example research schedule

Research phase                                                                       Deadline
1. Background research and literature review                                         20th January
2. Research design planning and data analysis methods                                13th February
3. Data collection and preparation with selected participants and code interviews    24th March
4. Data analysis of interview transcripts                                            22nd April
5. Writing                                                                           17th June
6. Revision of final work                                                            28th July

If you are applying for research funding, chances are you will have to include a detailed budget. This shows your estimates of how much each part of your project will cost.

Make sure to check what type of costs the funding body will agree to cover. For each item, include:

  • Cost : exactly how much money do you need?
  • Justification : why is this cost necessary to complete the research?
  • Source : how did you calculate the amount?

To determine your budget, think about the following (a short illustrative sketch appears after this list):

  • Travel costs : do you need to go somewhere to collect your data? How will you get there, and how much time will you need? What will you do there (e.g., interviews, archival research)?
  • Materials : do you need access to any tools or technologies?
  • Help : do you need to hire any research assistants for the project? What will they do, and how much will you pay them?
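As a minimal sketch of how these pieces fit together, the hypothetical budget below records each line item with its cost, justification, and source, then totals the request; every item name and figure here is invented for illustration only:

# Illustrative only: a minimal structure for proposal budget lines with
# cost, justification, and source, as described above. All figures are hypothetical.
budget = [
    {"item": "Travel to field site", "cost": 1200.00,
     "justification": "Two visits to conduct interviews",
     "source": "2 round trips x estimated 600 per trip"},
    {"item": "Transcription software licence", "cost": 300.00,
     "justification": "Needed to prepare interview data for analysis",
     "source": "Vendor list price"},
    {"item": "Research assistant", "cost": 2400.00,
     "justification": "Coding of interview transcripts",
     "source": "160 hours x 15 per hour"},
]

total = sum(line["cost"] for line in budget)
for line in budget:
    print(f'{line["item"]:35s} {line["cost"]:10.2f}  ({line["justification"]})')
print(f'{"Total requested":35s} {total:10.2f}')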

If you want to know more about the research process , methodology , research bias , or statistics , make sure to check out some of our other articles with explanations and examples.

Methodology

  • Sampling methods
  • Simple random sampling
  • Stratified sampling
  • Cluster sampling
  • Likert scales
  • Reproducibility

 Statistics

  • Null hypothesis
  • Statistical power
  • Probability distribution
  • Effect size
  • Poisson distribution

Research bias

  • Optimism bias
  • Cognitive bias
  • Implicit bias
  • Hawthorne effect
  • Anchoring bias
  • Explicit bias

Once you’ve decided on your research objectives , you need to explain them in your paper, at the end of your problem statement .

Keep your research objectives clear and concise, and use appropriate verbs to accurately convey the work that you will carry out for each one.

I will compare …

A research aim is a broad statement indicating the general purpose of your research project. It should appear in your introduction at the end of your problem statement , before your research objectives.

Research objectives are more specific than your research aim. They indicate the specific ways you’ll address the overarching aim.

A PhD, which is short for philosophiae doctor (doctor of philosophy in Latin), is the highest university degree that can be obtained. In a PhD, students spend 3–5 years writing a dissertation , which aims to make a significant, original contribution to current knowledge.

A PhD is intended to prepare students for a career as a researcher, whether that be in academia, the public sector, or the private sector.

A master’s is a 1- or 2-year graduate degree that can prepare you for a variety of careers.

All master’s involve graduate-level coursework. Some are research-intensive and intend to prepare students for further study in a PhD; these usually require their students to write a master’s thesis . Others focus on professional training for a specific career.

Critical thinking refers to the ability to evaluate information and to be aware of biases or assumptions, including your own.

Like information literacy , it involves evaluating arguments, identifying and solving problems in an objective and systematic way, and clearly communicating your ideas.

The best way to remember the difference between a research plan and a research proposal is that they have fundamentally different audiences. A research plan helps you, the researcher, organize your thoughts. On the other hand, a dissertation proposal or research proposal aims to convince others (e.g., a supervisor, a funding body, or a dissertation committee) that your research topic is relevant and worthy of being conducted.



Evaluating Research Proposals

Comparing proposals “apples-to-apples” is crucial to establishing which one will best meet your needs. Consider these ideas to help you focus on the details that contribute to a successful survey.

Make sure the proposal responds to your objectives.

The proposal process begins well before you ask any research firm for a quote. It really begins with the discussions you and your team have about objectives. What are your goals? What decisions do you want to make when the project is done and you have data in hand?

Once you have a solid vision of the survey, it’s time to start talking with potential partners. Throughout your conversations, take note: Do the various firms ask you specific questions about your objectives, the group of people you’d like to survey, and your ultimate goals? Do they, indeed, ask about the decisions that you wish to make? Details regarding your specific needs should always be front and center during the conversations.

Sampling plan.

When reviewing the sampling plan, make sure the proposal mentions sample size, response rate estimates, number of responses, and maximum sampling error. If you’re unsure of the impact these figures have on the quality of your results, ask the researcher. They should be able to explain them in terms you can understand.
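To see how these sampling-plan figures hang together, here is a minimal sketch using hypothetical numbers; the formula is the standard 95% confidence margin of error for a simple random sample with the conservative assumption p = 0.5, ignoring any finite-population correction:

import math

# Hypothetical sampling-plan figures; the relationships, not the numbers, are the point.
sample_size = 2000             # survey invitations sent
expected_response_rate = 0.25  # 25% of invitees are expected to respond
responses = int(sample_size * expected_response_rate)  # 500 completed surveys

# Maximum sampling error at 95% confidence for a simple random sample,
# using the most conservative proportion (p = 0.5).
max_sampling_error = 1.96 * math.sqrt(0.5 * 0.5 / responses)

print(f"Expected responses: {responses}")
print(f"Maximum sampling error: +/- {max_sampling_error * 100:.1f} percentage points")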

Questionnaire.

The quantity and types of information sought from respondents will impact cost. Quantity encompasses the number of questions and number of variables to process. Type refers to how the questions will be processed, the data entry involved and whether all or just some data will be cleaned.

No evaluation is complete until you know the approximate number and types of questions planned for the survey. The number of open-ended questions should be included as well because open-ended questions that capture verbatim responses can impact the response rate and possibly the price of your survey, especially if done by mail.

In addition, make sure the proposal clearly indicates who will develop the questionnaire content. Also, determine if it includes enough collaboration time to be sufficiently customized to meet your particular needs.

Data collection approach.

For online surveys, pay attention to the data collection series and who is responsible for sending survey invitations. Multiple emails to sample members can encourage response. The invitation process should also be sensitive to data privacy requirements such as those imposed by the GDPR and similar regulations. Proposals for mailed surveys should clearly outline the data collection series and each component of the survey kit.

Data processing.

Any proposal you receive should highlight the steps the research company will take to make sure that the data is accurate and representative. Depending on the type of survey, checking logic, consistency, and outliers can take a significant amount of time. You must have some process noted to identify inconsistent answers for surveys that collect a significant amount of numerical data (salary survey, market studies, budget planning). Finally, some percentage of mailed surveys need to be verified for data entry accuracy.

A straightforward analysis of survey data can meet many objectives. In other cases, a multivariate statistical analysis will provide deeper insights into your objectives and make the results easier to use. If your objectives include learning about separate segments of your circulation, crosstabulations should be specified.

Deliverables.

A variety of reporting options exist for a survey. These include but are not limited to data tables, a summary of the results, in-depth analysis, and graphed presentations. As a result, you need to understand exactly what you’ll receive following your survey and in what format.

No surprises!

Make sure the proposal covers all the bases: what you need to do and provide, what the firm will do, when they will do it, and how much it will cost. There should be no surprises in what you need to supply. No “you need how much letterhead and how many envelopes?” a week before your survey is scheduled to mail. Review the price carefully and understand what it includes and doesn’t include. As with many things in life, you usually get what you pay for.


Criteria for Good Qualitative Research: A Comprehensive Review

Drishti Yadav (ORCID: orcid.org/0000-0002-2974-0323)

Regular Article · Open access · Published: 18 September 2021 · Volume 31, pages 679–689 (2022)

This review aims to synthesize a published set of evaluative criteria for good qualitative research. The aim is to shed light on existing standards for assessing the rigor of qualitative research encompassing a range of epistemological and ontological standpoints. Using a systematic search strategy, published journal articles that deliberate criteria for rigorous research were identified. Then, references of relevant articles were surveyed to find noteworthy, distinct, and well-defined pointers to good qualitative research. This review presents an investigative assessment of the pivotal features in qualitative research that can permit the readers to pass judgment on its quality and to recognize it as good research when objectively and adequately utilized. Overall, this review underlines the crux of qualitative research and accentuates the necessity to evaluate such research by the very tenets of its being. It also offers some prospects and recommendations to improve the quality of qualitative research. Based on the findings of this review, it is concluded that quality criteria are the aftereffect of socio-institutional procedures and existing paradigmatic conducts. Owing to the paradigmatic diversity of qualitative research, a single and specific set of quality criteria is neither feasible nor anticipated. Since qualitative research is not a cohesive discipline, researchers need to educate and familiarize themselves with applicable norms and decisive factors to evaluate qualitative research from within its theoretical and methodological framework of origin.


Introduction

“… It is important to regularly dialogue about what makes for good qualitative research” (Tracy, 2010, p. 837)

To decide what represents good qualitative research is highly debatable. There are numerous methods that are contained within qualitative research and that are established on diverse philosophical perspectives. Bryman et al., ( 2008 , p. 262) suggest that “It is widely assumed that whereas quality criteria for quantitative research are well‐known and widely agreed, this is not the case for qualitative research.” Hence, the question “how to evaluate the quality of qualitative research” has been continuously debated. There are many areas of science and technology wherein these debates on the assessment of qualitative research have taken place. Examples include various areas of psychology: general psychology (Madill et al., 2000 ); counseling psychology (Morrow, 2005 ); and clinical psychology (Barker & Pistrang, 2005 ), and other disciplines of social sciences: social policy (Bryman et al., 2008 ); health research (Sparkes, 2001 ); business and management research (Johnson et al., 2006 ); information systems (Klein & Myers, 1999 ); and environmental studies (Reid & Gough, 2000 ). In the literature, these debates are enthused by the impression that the blanket application of criteria for good qualitative research developed around the positivist paradigm is improper. Such debates are based on the wide range of philosophical backgrounds within which qualitative research is conducted (e.g., Sandberg, 2000 ; Schwandt, 1996 ). The existence of methodological diversity led to the formulation of different sets of criteria applicable to qualitative research.

Among qualitative researchers, the dilemma of governing the measures to assess the quality of research is not a new phenomenon, especially when the virtuous triad of objectivity, reliability, and validity (Spencer et al., 2004 ) are not adequate. Occasionally, the criteria of quantitative research are used to evaluate qualitative research (Cohen & Crabtree, 2008 ; Lather, 2004 ). Indeed, Howe ( 2004 ) claims that the prevailing paradigm in educational research is scientifically based experimental research. Hypotheses and conjectures about the preeminence of quantitative research can weaken the worth and usefulness of qualitative research by neglecting the prominence of harmonizing match for purpose on research paradigm, the epistemological stance of the researcher, and the choice of methodology. Researchers have been reprimanded concerning this in “paradigmatic controversies, contradictions, and emerging confluences” (Lincoln & Guba, 2000 ).

In general, qualitative research tends to come from a very different paradigmatic stance and intrinsically demands distinctive and out-of-the-ordinary criteria for evaluating good research and varieties of research contributions that can be made. This review attempts to present a series of evaluative criteria for qualitative researchers, arguing that their choice of criteria needs to be compatible with the unique nature of the research in question (its methodology, aims, and assumptions). This review aims to assist researchers in identifying some of the indispensable features or markers of high-quality qualitative research. In a nutshell, the purpose of this systematic literature review is to analyze the existing knowledge on high-quality qualitative research and to verify the existence of research studies dealing with the critical assessment of qualitative research based on the concept of diverse paradigmatic stances. Contrary to the existing reviews, this review also suggests some critical directions to follow to improve the quality of qualitative research in different epistemological and ontological perspectives. This review is also intended to provide guidelines for the acceleration of future developments and dialogues among qualitative researchers in the context of assessing the qualitative research.

The rest of this review article is structured as follows: Section Methods describes the method followed for performing this review. Section Criteria for Evaluating Qualitative Studies provides a comprehensive description of the criteria for evaluating qualitative studies. It is followed by a summary of strategies to improve the quality of qualitative research in Section Improving Quality: Strategies. Section How to Assess the Quality of the Research Findings? provides details on how to assess the quality of the research findings. After that, some of the quality checklists (as tools to evaluate quality) are discussed in Section Quality Checklists: Tools for Assessing the Quality. Finally, the review ends with the concluding remarks presented in Section Conclusions, Future Directions and Outlook, which also presents some prospects for enhancing the quality and usefulness of qualitative research in the social and techno-scientific research community.

Methods

For this review, a comprehensive literature search was performed across several databases using generic search terms such as qualitative research, criteria, etc. The following databases were chosen for the literature search based on the high number of results: IEEE Xplore, ScienceDirect, PubMed, Google Scholar, and Web of Science. The following keywords (and their combinations using the Boolean connectives OR/AND) were adopted for the literature search: qualitative research, criteria, quality, assessment, and validity. The synonyms for these keywords were collected and arranged in a logical structure (see Table 1). All publications in journals and conference proceedings from 1950 through 2021 were considered for the search. Other articles extracted from the references of the papers identified in the electronic search were also included. A large number of publications on qualitative research were retrieved during the initial screening. Hence, to limit the searches to those with a main focus on criteria for good qualitative research, an inclusion criterion was applied in the search string.
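The Boolean keyword strategy described above can be mocked up as follows; the synonym groups here are placeholders standing in for the review’s Table 1, and the exact strings submitted to each database are not given in the source, so this is only an illustrative sketch.

```python
from itertools import product

# Placeholder synonym groups (the real ones are in the review's Table 1).
concept_groups = [
    ["qualitative research", "qualitative study"],
    ["criteria", "standards", "rigor"],
    ["quality", "assessment", "validity"],
]

# One broad query: OR within a concept group, AND across groups.
broad_query = " AND ".join(
    "(" + " OR ".join(f'"{term}"' for term in group) + ")"
    for group in concept_groups
)
print(broad_query)

# Alternatively, enumerate every narrow combination of one term per concept.
for combo in product(*concept_groups):
    print(" AND ".join(f'"{term}"' for term in combo))
```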

From the selected databases, the search retrieved a total of 765 publications. Then, the duplicate records were removed. After that, based on the title and abstract, the remaining 426 publications were screened for their relevance by using the following inclusion and exclusion criteria (see Table 2). Publications focusing on evaluation criteria for good qualitative research were included, whereas those works which delivered theoretical concepts on qualitative research were excluded. Based on the screening and eligibility, 45 research articles were identified that offered explicit criteria for evaluating the quality of qualitative research and were found to be relevant to this review.

Figure 1 illustrates the complete review process in the form of a PRISMA flow diagram. PRISMA, i.e., “preferred reporting items for systematic reviews and meta-analyses,” is employed in systematic reviews to improve the quality of reporting.

Figure 1. PRISMA flow diagram illustrating the search and inclusion process; N represents the number of records.

Criteria for Evaluating Qualitative Studies

Fundamental Criteria: General Research Quality

Various researchers have put forward criteria for evaluating qualitative research, which have been summarized in Table 3 . Also, the criteria outlined in Table 4 effectively deliver the various approaches to evaluate and assess the quality of qualitative work. The entries in Table 4 are based on Tracy’s “Eight big‐tent criteria for excellent qualitative research” (Tracy, 2010 ). Tracy argues that high-quality qualitative work should formulate criteria focusing on the worthiness, relevance, timeliness, significance, morality, and practicality of the research topic, and the ethical stance of the research itself. Researchers have also suggested a series of questions as guiding principles to assess the quality of a qualitative study (Mays & Pope, 2020 ). Nassaji ( 2020 ) argues that good qualitative research should be robust, well informed, and thoroughly documented.

Qualitative Research: Interpretive Paradigms

All qualitative researchers follow highly abstract principles which bring together beliefs about ontology, epistemology, and methodology. These beliefs govern how the researcher perceives and acts. The net, which encompasses the researcher’s epistemological, ontological, and methodological premises, is referred to as a paradigm, or an interpretive structure, a “Basic set of beliefs that guides action” (Guba, 1990 ). Four major interpretive paradigms structure the qualitative research: positivist and postpositivist, constructivist interpretive, critical (Marxist, emancipatory), and feminist poststructural. The complexity of these four abstract paradigms increases at the level of concrete, specific interpretive communities. Table 5 presents these paradigms and their assumptions, including their criteria for evaluating research, and the typical form that an interpretive or theoretical statement assumes in each paradigm. Moreover, for evaluating qualitative research, quantitative conceptualizations of reliability and validity are proven to be incompatible (Horsburgh, 2003 ). In addition, a series of questions have been put forward in the literature to assist a reviewer (who is proficient in qualitative methods) for meticulous assessment and endorsement of qualitative research (Morse, 2003 ). Hammersley ( 2007 ) also suggests that guiding principles for qualitative research are advantageous, but methodological pluralism should not be simply acknowledged for all qualitative approaches. Seale ( 1999 ) also points out the significance of methodological cognizance in research studies.

Table 5 reflects that criteria for assessing the quality of qualitative research are the aftermath of socio-institutional practices and existing paradigmatic standpoints. Owing to the paradigmatic diversity of qualitative research, a single set of quality criteria is neither possible nor desirable. Hence, the researchers must be reflexive about the criteria they use in the various roles they play within their research community.

Improving Quality: Strategies

Another critical question is “How can qualitative researchers ensure that the abovementioned quality criteria are met?” Lincoln and Guba (1986) delineated several strategies to strengthen each criterion of trustworthiness. Other researchers (Merriam & Tisdell, 2016; Shenton, 2004) have also presented such strategies. A brief description of these strategies is shown in Table 6.

It is worth mentioning that generalizability is also an integral part of qualitative research (Hays & McKibben, 2021 ). In general, the guiding principle pertaining to generalizability speaks about inducing and comprehending knowledge to synthesize interpretive components of an underlying context. Table 7 summarizes the main metasynthesis steps required to ascertain generalizability in qualitative research.

Figure  2 reflects the crucial components of a conceptual framework and their contribution to decisions regarding research design, implementation, and applications of results to future thinking, study, and practice (Johnson et al., 2020 ). The synergy and interrelationship of these components signifies their role to different stances of a qualitative research study.

Figure 2. Essential elements of a conceptual framework.

In a nutshell, to assess the rationale of a study, its conceptual framework and research question(s), quality criteria must take account of the following: lucid context for the problem statement in the introduction; well-articulated research problems and questions; precise conceptual framework; distinct research purpose; and clear presentation and investigation of the paradigms. These criteria would expedite the quality of qualitative research.

How to Assess the Quality of the Research Findings?

The inclusion of quotes or similar research data enhances the confirmability in the write-up of the findings. The use of expressions (for instance, “80% of all respondents agreed that” or “only one of the interviewees mentioned that”) may also quantify qualitative findings (Stenfors et al., 2020 ). On the other hand, the persuasive reason for “why this may not help in intensifying the research” has also been provided (Monrouxe & Rees, 2020 ). Further, the Discussion and Conclusion sections of an article also prove robust markers of high-quality qualitative research, as elucidated in Table 8 .

Quality Checklists: Tools for Assessing the Quality

Numerous checklists are available to speed up the assessment of the quality of qualitative research. However, if used uncritically and recklessly concerning the research context, these checklists may be counterproductive. I recommend that such lists and guiding principles may assist in pinpointing the markers of high-quality qualitative research. However, considering enormous variations in the authors’ theoretical and philosophical contexts, I would emphasize that high dependability on such checklists may say little about whether the findings can be applied in your setting. A combination of such checklists might be appropriate for novice researchers. Some of these checklists are listed below:

The most commonly used framework is Consolidated Criteria for Reporting Qualitative Research (COREQ) (Tong et al., 2007 ). This framework is recommended by some journals to be followed by the authors during article submission.

Standards for Reporting Qualitative Research (SRQR) is another checklist that has been created particularly for medical education (O’Brien et al., 2014 ).

Also, Tracy ( 2010 ) and Critical Appraisal Skills Programme (CASP, 2021 ) offer criteria for qualitative research relevant across methods and approaches.

Further, researchers have also outlined different criteria as hallmarks of high-quality qualitative research. For instance, the “Road Trip Checklist” (Epp & Otnes, 2021 ) provides a quick reference to specific questions to address different elements of high-quality qualitative research.

Conclusions, Future Directions, and Outlook

This work presents a broad review of the criteria for good qualitative research. In addition, this article presents an exploratory analysis of the essential elements in qualitative research that can enable the readers of qualitative work to judge it as good research when objectively and adequately utilized. In this review, some of the essential markers that indicate high-quality qualitative research have been highlighted. I scope them narrowly to achieve rigor in qualitative research and note that they do not completely cover the broader considerations necessary for high-quality research. This review points out that a universal and versatile one-size-fits-all guideline for evaluating the quality of qualitative research does not exist. In other words, this review also emphasizes the non-existence of a set of common guidelines among qualitative researchers. In unison, this review reinforces that each qualitative approach should be treated uniquely on account of its own distinctive features for different epistemological and disciplinary positions. Owing to the sensitivity of the worth of qualitative research towards the specific context and the type of paradigmatic stance, researchers should themselves analyze what approaches can be and must be tailored to ensemble the distinct characteristics of the phenomenon under investigation. Although this article does not assert to put forward a magic bullet and to provide a one-stop solution for dealing with dilemmas about how, why, or whether to evaluate the “goodness” of qualitative research, it offers a platform to assist the researchers in improving their qualitative studies. This work provides an assembly of concerns to reflect on, a series of questions to ask, and multiple sets of criteria to look at, when attempting to determine the quality of qualitative research. Overall, this review underlines the crux of qualitative research and accentuates the need to evaluate such research by the very tenets of its being. Bringing together the vital arguments and delineating the requirements that good qualitative research should satisfy, this review strives to equip the researchers as well as reviewers to make well-versed judgment about the worth and significance of the qualitative research under scrutiny. In a nutshell, a comprehensive portrayal of the research process (from the context of research to the research objectives, research questions and design, speculative foundations, and from approaches of collecting data to analyzing the results, to deriving inferences) frequently proliferates the quality of a qualitative research.

Prospects: A Road Ahead for Qualitative Research

Irrefutably, qualitative research is a vivacious and evolving discipline wherein different epistemological and disciplinary positions have their own characteristics and importance. In addition, not surprisingly, owing to the sprouting and varied features of qualitative research, no consensus has been reached to date. Researchers have raised various concerns and proposed several recommendations for editors and reviewers on conducting reviews of critical qualitative research (Levitt et al., 2021; McGinley et al., 2021). The following are some prospects and a few recommendations put forward towards the maturation of qualitative research and its quality evaluation:

In general, most of the manuscript and grant reviewers are not qualitative experts. Hence, it is more likely that they would prefer to adopt a broad set of criteria. However, researchers and reviewers need to keep in mind that it is inappropriate to utilize the same approaches and conducts among all qualitative research. Therefore, future work needs to focus on educating researchers and reviewers about the criteria to evaluate qualitative research from within the suitable theoretical and methodological context.

There is an urgent need to refurbish and augment critical assessment of some well-known and widely accepted tools (including checklists such as COREQ, SRQR) to interrogate their applicability on different aspects (along with their epistemological ramifications).

Efforts should be made towards creating more space for creativity, experimentation, and a dialogue between the diverse traditions of qualitative research. This would potentially help to avoid the enforcement of one's own set of quality criteria on the work carried out by others.

Moreover, journal reviewers need to be aware of various methodological practices and philosophical debates.

It is pivotal to highlight the expressions and considerations of qualitative researchers and bring them into a more open and transparent dialogue about assessing qualitative research in techno-scientific, academic, sociocultural, and political rooms.

Frequent debates on the use of evaluative criteria are required to address some as yet unresolved issues (including the applicability of a single set of criteria in multi-disciplinary aspects). Such debates would not only benefit the group of qualitative researchers themselves, but primarily assist in augmenting the well-being and vivacity of the entire discipline.

To conclude, I speculate that the criteria, and my perspective, may transfer to other methods, approaches, and contexts. I hope that they spark dialog and debate – about criteria for excellent qualitative research and the underpinnings of the discipline more broadly – and, therefore, help improve the quality of a qualitative study. Further, I anticipate that this review will assist the researchers to contemplate on the quality of their own research, to substantiate research design and help the reviewers to review qualitative research for journals. On a final note, I pinpoint the need to formulate a framework (encompassing the prerequisites of a qualitative study) by the cohesive efforts of qualitative researchers of different disciplines with different theoretic-paradigmatic origins. I believe that tailoring such a framework (of guiding principles) paves the way for qualitative researchers to consolidate the status of qualitative research in the wide-ranging open science debate. Dialogue on this issue across different approaches is crucial for the impending prospects of socio-techno-educational research.

Amin, M. E. K., Nørgaard, L. S., Cavaco, A. M., Witry, M. J., Hillman, L., Cernasev, A., & Desselle, S. P. (2020). Establishing trustworthiness and authenticity in qualitative pharmacy research. Research in Social and Administrative Pharmacy, 16 (10), 1472–1482.


Barker, C., & Pistrang, N. (2005). Quality criteria under methodological pluralism: Implications for conducting and evaluating research. American Journal of Community Psychology, 35 (3–4), 201–212.

Bryman, A., Becker, S., & Sempik, J. (2008). Quality criteria for quantitative, qualitative and mixed methods research: A view from social policy. International Journal of Social Research Methodology, 11 (4), 261–276.

Caelli, K., Ray, L., & Mill, J. (2003). ‘Clear as mud’: Toward greater clarity in generic qualitative research. International Journal of Qualitative Methods, 2 (2), 1–13.

CASP (2021). CASP checklists. Retrieved May 2021 from https://casp-uk.net/casp-tools-checklists/

Cohen, D. J., & Crabtree, B. F. (2008). Evaluative criteria for qualitative research in health care: Controversies and recommendations. The Annals of Family Medicine, 6 (4), 331–339.

Denzin, N. K., & Lincoln, Y. S. (2005). Introduction: The discipline and practice of qualitative research. In N. K. Denzin & Y. S. Lincoln (Eds.), The sage handbook of qualitative research (pp. 1–32). Sage Publications Ltd.


Elliott, R., Fischer, C. T., & Rennie, D. L. (1999). Evolving guidelines for publication of qualitative research studies in psychology and related fields. British Journal of Clinical Psychology, 38 (3), 215–229.

Epp, A. M., & Otnes, C. C. (2021). High-quality qualitative research: Getting into gear. Journal of Service Research . https://doi.org/10.1177/1094670520961445

Guba, E. G. (1990). The paradigm dialog. In Alternative paradigms conference, mar, 1989, Indiana u, school of education, San Francisco, ca, us . Sage Publications, Inc.

Hammersley, M. (2007). The issue of quality in qualitative research. International Journal of Research and Method in Education, 30 (3), 287–305.

Haven, T. L., Errington, T. M., Gleditsch, K. S., van Grootel, L., Jacobs, A. M., Kern, F. G., & Mokkink, L. B. (2020). Preregistering qualitative research: A Delphi study. International Journal of Qualitative Methods, 19 , 1609406920976417.

Hays, D. G., & McKibben, W. B. (2021). Promoting rigorous research: Generalizability and qualitative research. Journal of Counseling and Development, 99 (2), 178–188.

Horsburgh, D. (2003). Evaluation of qualitative research. Journal of Clinical Nursing, 12 (2), 307–312.

Howe, K. R. (2004). A critique of experimentalism. Qualitative Inquiry, 10 (1), 42–46.

Johnson, J. L., Adkins, D., & Chauvin, S. (2020). A review of the quality indicators of rigor in qualitative research. American Journal of Pharmaceutical Education, 84 (1), 7120.

Johnson, P., Buehring, A., Cassell, C., & Symon, G. (2006). Evaluating qualitative management research: Towards a contingent criteriology. International Journal of Management Reviews, 8 (3), 131–156.

Klein, H. K., & Myers, M. D. (1999). A set of principles for conducting and evaluating interpretive field studies in information systems. MIS Quarterly, 23 (1), 67–93.

Lather, P. (2004). This is your father’s paradigm: Government intrusion and the case of qualitative research in education. Qualitative Inquiry, 10 (1), 15–34.

Levitt, H. M., Morrill, Z., Collins, K. M., & Rizo, J. L. (2021). The methodological integrity of critical qualitative research: Principles to support design and research review. Journal of Counseling Psychology, 68 (3), 357.

Lincoln, Y. S., & Guba, E. G. (1986). But is it rigorous? Trustworthiness and authenticity in naturalistic evaluation. New Directions for Program Evaluation, 1986 (30), 73–84.

Lincoln, Y. S., & Guba, E. G. (2000). Paradigmatic controversies, contradictions and emerging confluences. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (2nd ed., pp. 163–188). Sage Publications.

Madill, A., Jordan, A., & Shirley, C. (2000). Objectivity and reliability in qualitative analysis: Realist, contextualist and radical constructionist epistemologies. British Journal of Psychology, 91 (1), 1–20.

Mays, N., & Pope, C. (2020). Quality in qualitative research. Qualitative Research in Health Care . https://doi.org/10.1002/9781119410867.ch15

McGinley, S., Wei, W., Zhang, L., & Zheng, Y. (2021). The state of qualitative research in hospitality: A 5-year review 2014 to 2019. Cornell Hospitality Quarterly, 62 (1), 8–20.

Merriam, S., & Tisdell, E. (2016). Qualitative research: A guide to design and implementation. San Francisco, US.

Meyer, M., & Dykes, J. (2019). Criteria for rigor in visualization design study. IEEE Transactions on Visualization and Computer Graphics, 26 (1), 87–97.

Monrouxe, L. V., & Rees, C. E. (2020). When I say… quantification in qualitative research. Medical Education, 54 (3), 186–187.

Morrow, S. L. (2005). Quality and trustworthiness in qualitative research in counseling psychology. Journal of Counseling Psychology, 52 (2), 250.

Morse, J. M. (2003). A review committee’s guide for evaluating qualitative proposals. Qualitative Health Research, 13 (6), 833–851.

Nassaji, H. (2020). Good qualitative research. Language Teaching Research, 24 (4), 427–431.

O’Brien, B. C., Harris, I. B., Beckman, T. J., Reed, D. A., & Cook, D. A. (2014). Standards for reporting qualitative research: A synthesis of recommendations. Academic Medicine, 89 (9), 1245–1251.

O’Connor, C., & Joffe, H. (2020). Intercoder reliability in qualitative research: Debates and practical guidelines. International Journal of Qualitative Methods, 19 , 1609406919899220.

Reid, A., & Gough, S. (2000). Guidelines for reporting and evaluating qualitative research: What are the alternatives? Environmental Education Research, 6 (1), 59–91.

Rocco, T. S. (2010). Criteria for evaluating qualitative studies. Human Resource Development International . https://doi.org/10.1080/13678868.2010.501959

Sandberg, J. (2000). Understanding human competence at work: An interpretative approach. Academy of Management Journal, 43 (1), 9–25.

Schwandt, T. A. (1996). Farewell to criteriology. Qualitative Inquiry, 2 (1), 58–72.

Seale, C. (1999). Quality in qualitative research. Qualitative Inquiry, 5 (4), 465–478.

Shenton, A. K. (2004). Strategies for ensuring trustworthiness in qualitative research projects. Education for Information, 22 (2), 63–75.

Sparkes, A. C. (2001). Myth 94: Qualitative health researchers will agree about validity. Qualitative Health Research, 11 (4), 538–552.

Spencer, L., Ritchie, J., Lewis, J., & Dillon, L. (2004). Quality in qualitative evaluation: A framework for assessing research evidence.

Stenfors, T., Kajamaa, A., & Bennett, D. (2020). How to assess the quality of qualitative research. The Clinical Teacher, 17 (6), 596–599.

Taylor, E. W., Beck, J., & Ainsworth, E. (2001). Publishing qualitative adult education research: A peer review perspective. Studies in the Education of Adults, 33 (2), 163–179.

Tong, A., Sainsbury, P., & Craig, J. (2007). Consolidated criteria for reporting qualitative research (COREQ): A 32-item checklist for interviews and focus groups. International Journal for Quality in Health Care, 19 (6), 349–357.

Tracy, S. J. (2010). Qualitative quality: Eight “big-tent” criteria for excellent qualitative research. Qualitative Inquiry, 16 (10), 837–851.

Open access funding provided by TU Wien (TUW).

Author information

Faculty of Informatics, Technische Universität Wien, 1040 Vienna, Austria

Drishti Yadav (corresponding author)

Ethics declarations

The author declares no conflict of interest.

This article is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).

About this article

Yadav, D. Criteria for Good Qualitative Research: A Comprehensive Review. Asia-Pacific Edu Res 31, 679–689 (2022). https://doi.org/10.1007/s40299-021-00619-0

Accepted: 28 August 2021 · Published: 18 September 2021 · Issue Date: December 2022

Cornell University


Formal Review of Research Proposals

When is Formal Review Required?

Student & Campus Life research projects that will use substantial resources of the Cornell community must be formally reviewed by the committee before they can be initiated. At a minimum, this includes research that draws participants from a major institutional database, for example, those maintained by the University Registrar; Office of the Dean of Students; Fraternity, Sorority and Independent Living; and Class Councils. Regardless of how potential participants are to be identified, research that meets all of the following criteria will also require formal review by the committee:

  • Involves more than 100 participants for a quantitative data collection method (e.g., survey research) or 25 participants for a qualitative data collection method (e.g., focus groups or interviews);
  • Is broader in scope than program evaluation (e.g., asks about more than just program-based experiences or includes individuals who did not participate in the target program or event); and
  • Will require a substantial amount of participants’ time (e.g., protocols that will take more than 10 or 15 minutes to complete, or longitudinal research designs).

Conversely, research projects that are very limited in scope, and research that is conducted exclusively for program evaluation purposes (i.e., research that examines the program-related experiences of students who participate in a specific program or event) will generally be exempt from formal review by the committee.
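To make the interplay of these thresholds concrete, here is a small, hypothetical decision helper that encodes the criteria above; it is only a sketch of the stated rules, not an official tool, and the parameter names are invented for the example.

```python
def needs_formal_review(uses_institutional_database: bool,
                        n_participants: int,
                        method: str,                  # "quantitative" or "qualitative"
                        broader_than_program_eval: bool,
                        minutes_required: float,
                        longitudinal: bool = False) -> bool:
    """Rough reading of the review thresholds described above."""
    if uses_institutional_database:
        return True
    size_threshold = 100 if method == "quantitative" else 25
    substantial_time = minutes_required > 15 or longitudinal
    return (n_participants > size_threshold
            and broader_than_program_eval
            and substantial_time)

# A 20-minute survey of 500 students recruited outside institutional databases:
print(needs_formal_review(False, 500, "quantitative", True, 20))   # True
# A brief program-evaluation survey of 40 event attendees:
print(needs_formal_review(False, 40, "quantitative", False, 5))    # False
```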

Submitting a Proposal for Formal Review

The committee meets monthly during the fall, winter and spring semesters to formally review research proposals and conduct related business. At least eight weeks before the anticipated launch date of the project, researchers should submit an SCLRG research proposal form to Leslie Meyerhoff or Marne Einarson. The proposal form asks for information about the purpose and proposed design of the study, as well as draft versions of data collection instruments. Samples of completed research proposals are available here and here.

The following criteria will be used by the committee to evaluate research proposals:

  • Importance: Does the research address an important issue at Cornell? Will it provide useful information for academic planning or providing services to Cornell students?
  • Content and Design: Does the proposed methodology fit the research question(s)? Are the questions well-constructed and easily understood? Is the instrument of reasonable length? Have the questions been pretested?
  • Population and Sampling Methodology: Who is the target population? Is the sampling methodology appropriate to the research question(s)? Has the same student cohort and/or sample been used in other recent research? Could a smaller sample be drawn to achieve the same objective? How will the researcher(s) gain access to the proposed participants?
  • Timing: Does the proposed timing of the research overlap with or follow closely upon other research directed toward the same population? When were data on this issue last collected at Cornell? Is the data collection period scheduled at a time when students are likely to respond?
  • Data Management and Dissemination: Who will have access to the data? What are the provisions for secure storage of the data? Can data from this research be linked to other data sets? What is the plan for analyzing the data and disseminating the results? How will research results contribute to better decision making? How will research results be shared more broadly?
  • Resources: What resources will be required to conduct this research (e.g., instrument design, Web application development, mail and/or e-mail services, data entry and analysis)? From where will these resources be obtained?
  • Overall Impact: What will be the impact of the study? Are there any conceivable negative impacts on the University? Will the study overburden respondents? Overall, do the expected benefits of the study appear to outweigh the costs?

Based on their evaluation of the research proposal, the committee may decide to:

  • Approve the project as submitted
  • Approve the project with recommendations for changes that must be adopted before the project can be initiated
  • Require revisions and re-submission of the project before approval is granted
  • Reject the project (e.g., the potential benefits of the data do not justify the costs of collection; the research design has weaknesses that cannot be rectified)

IRB Approval

If research results will not be used exclusively for internal purposes (e.g., they will be presented or published beyond Cornell; or used for an undergraduate honors thesis, master’s thesis or doctoral dissertation), researchers may also be required to obtain approval from Cornell’s Institutional Review Board for Human Participants (IRB). IRB approval should be sought after the proposal has been reviewed by the SAS Research Group. The committee should subsequently be informed of the decision of the IRB.


Stanford Research Development Office

What Makes a Successful Proposal

Grant proposals are a distinct genre from other academic writing. At its heart, a compelling proposal poses an exciting research question or problem and offers a convincing narrative for how you will use grant funds to answer or solve it.

Strong proposals typically exhibit several key characteristics:

  • Define a specific, compelling, and carefully vetted concept.
  • Explain its relevance to the academic community and society.
  • Highlight the potential impact and why the research is timely and necessary.
  • Present an innovative and original approach or access to a new corpus/data.
  • Some funding programs are more interested in incremental accomplishments, while others (many that we work with) want more transformative potential.
  • Provide a well-defined and practical plan for carrying out the project, including contingency plans.
  • Clearly outline as applicable the research design, theoretical approach, data collection methods, analysis techniques, and include necessary preliminary data or proof of concept to demonstrate feasibility.
  • Establish clear, achievable goals and objectives.
  • Outline a realistic timeline with milestones.
  • Provide a strategy for continuous evaluation and adaptation.
  • Highlight the expertise of a well-rounded project team with clear roles and responsibilities in the project.
  • Demonstrate strong collaboration and cohesion within the team, showcasing previous related work and publications to reinforce credibility.
  • Tailor the proposal to align with the program’s objectives and priorities.
  • Show how the research fits within the broader mission of the funding agency.
  • Write concisely and avoid jargon. Define specialized terminology if necessary.
  • Ensure that the proposal is accessible to a diverse audience, including non-specialists who may be involved in the review process.
  • Use storytelling techniques to create a compelling narrative.
  • Incorporate visual elements, such as graphics, charts, and figures, to effectively convey complex information.
  • Provide a detailed and realistic budget, demonstrating a clear understanding of resource needs.
  • Justify all expenses, showing how they support the project's success.
  • Outline how you will share results with the academic community and beyond.
  • Include plans for publications, conferences, and using other dissemination channels.
  • Strictly follow the funder’s guidelines, ensuring all required components, including supporting documents and signatures, are included.
  • Consider using strategic formatting elements, such as headers, bold text, and figures or tables, to make key elements easy to locate.

Addressing these key characteristics will help position your proposal for success. The Stanford Research Development Office is here to work with you throughout the process, from developing a compelling narrative to ensuring alignment with funder priorities. Our team offers expert guidance, resources, and support to enhance the competitiveness of your proposal for external funding.

Contact us at  [email protected] to learn how we can help you.



Ten criteria for evaluating qualitative research proposals

  • PMID: 3035126
  • DOI: 10.3928/0148-4834-19870401-04

With the proliferation of interest in qualitative research in nursing comes the attendant problem of how to evaluate it appropriately. Qualitative research has its own unique history, philosophical foundations, and methodologies that separate it from the quantitative approach. Although the literature is crowded with guidelines for evaluating the latter, little is offered for the qualitative reviewer. The Research Proposal Evaluation Form: Qualitative Methodology is a partial solution to this dilemma. It provides a framework for critiquing the proposal phase of a qualitative study and can be an important guide both for the educator and for the novice researcher.


  • Open access
  • Published: 05 September 2024

Quantitative classification evaluation model for tight sandstone reservoirs based on machine learning

  • Xinglei Song 1 , 2 ,
  • Congjun Feng 1 , 2 ,
  • Teng Li 3 , 4 , 5 ,
  • Qin Zhang 6 ,
  • Xinhui Pan 1 , 2 ,
  • Mengsi Sun 7 &
  • Yanlong Ge 1 , 2  

Scientific Reports, volume 14, Article number: 20712 (2024)


Tight sandstone reservoirs are a primary focus of research on the geological exploration of petroleum. However, many reservoir classification criteria are of limited applicability due to the inherent strong heterogeneity and complex micropore structure of tight sandstone reservoirs. This investigation focused on the Chang 8 tight reservoir situated in the Jiyuan region of the Ordos Basin. High-pressure mercury intrusion experiments, casting thin sections, and scanning electron microscopy experiments were conducted. Image recognition technology was used to extract the pore shape parameters of each sample. On this basis, grey relational analysis (GRA), the analytic hierarchy process (AHP), the entropy weight method (EWM), and a comprehensive weighting method were used to fit two indices: Q1, relating initial productivity to high-pressure mercury injection parameters, and Q2, relating initial productivity to pore shape parameters. A dual-coupled comprehensive quantitative classification prediction model for tight sandstone reservoirs was then developed based on pore structure and shape parameters. A quantitative classification study was conducted on the target reservoir, analyzing the correlation between reservoir quality and pore structure and shape parameters, leading to the proposal of favourable exploration areas. The research results showed that when Q1 ≥ 0.5 and Q2 ≥ 0.5, the reservoir was classified as type I. When Q1 > 0.7 and Q2 > 0.57, it was classified as type I-1, indicating a high-yield reservoir. When 0.32 < Q1 < 0.47 and 0.44 < Q2 < 0.56, it was classified as type II. When 0.1 < Q1 < 0.32 and 0.3 < Q2 < 0.44, it was classified as type III. Type I reservoirs exhibit a zigzag pattern in the northwest part of the study area. Thus, the northwest should be prioritized in actual exploration and development. Additionally, the initial productivity of tight sandstone reservoirs showed a positive correlation with the porosity, permeability, sorting coefficient, coefficient of variation, and median radius, and a negative correlation with the median pressure and displacement pressure. The perimeters of pores, their circularity, and the length of the major axis showed a positive correlation with the porosity, permeability, sorting coefficient, coefficient of variation, and median radius, and a negative correlation with the median pressure and displacement pressure. This study quantitatively constructed a new classification and evaluation system for tight sandstone reservoirs from the perspective of microscopic pore structure, achieving an overall model accuracy of 93.3%. This model effectively predicts and evaluates tight sandstone reservoirs and provides new guidance for identifying favourable areas in the study region and in other tight sandstone reservoirs.
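For readers who want the classification rule in one place, the sketch below simply re-expresses the Q1/Q2 thresholds quoted in the abstract as a lookup function; it is an illustration of the published cut-offs, not the authors' code, and values falling outside the quoted ranges are left unclassified.

```python
def classify_reservoir(q1: float, q2: float) -> str:
    """Map the coupled indices (Q1: pore structure, Q2: pore shape) to the
    reservoir classes quoted in the abstract; thresholds copied from the text."""
    if q1 >= 0.5 and q2 >= 0.5:
        return "Type I-1 (high yield)" if (q1 > 0.7 and q2 > 0.57) else "Type I"
    if 0.32 < q1 < 0.47 and 0.44 < q2 < 0.56:
        return "Type II"
    if 0.1 < q1 < 0.32 and 0.3 < q2 < 0.44:
        return "Type III"
    return "Unclassified (outside the quoted ranges)"

print(classify_reservoir(0.75, 0.60))  # Type I-1 (high yield)
print(classify_reservoir(0.40, 0.50))  # Type II
print(classify_reservoir(0.20, 0.35))  # Type III
```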


Introduction

With the depletion of conventional oil and gas reservoirs, tight oil reservoirs have gradually become a hot topic and a focal point for exploration and development, both domestically and internationally 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 , 9 . However, tight sandstone oil reservoirs exhibit complex reservoir characteristics, primarily manifested in their deep burial depths, wide distribution, and complex depositional processes. The reservoirs exhibit characteristics of low porosity, poor permeability, and high heterogeneity. The dominant pores are micro- and nano-scale, with narrow and dispersed throats, and are unfavorable for the migration and accumulation of oil and gas 10 , 11 , 12 , 13 , 14 , 15 . These factors necessitate considering the interdependent influences of multiple factors when classifying and evaluating tight sandstone reservoirs, which affects the accuracy of reservoir evaluation and hinders the selection of high-quality reservoirs. Therefore, the rapid and effective classification and evaluation of tight sandstone reservoirs has long been a focal point of scholarly research.

The quality of the reservoir is a key factor that determines the oil and gas production capacity. The classification and evaluation of reservoirs are central to reservoir studies and play a significant role in oilfield development. With the continuous advancement of oilfield development technologies, reservoir classification and evaluation methods have become increasingly diverse, gradually evolving from qualitative to quantitative research and from macro-parameter to micro-parameter evaluation. At present, both domestic and international scholars classify reservoirs using two main methods. The first is the traditional classification and evaluation method, which directly uses indicators such as the lithology, physical properties, pore structure, sedimentary facies, and oil and production experiments for classification. For example, Wei et al. classified the tight sandstone reservoirs of the Sha Creek Formation in the central Sichuan Basin based on the transverse relaxation (T 2 ) distribution of nuclear magnetic resonance 16 . Xu et al. studied the characteristics and controlling factors of tight sandstone using thin-section casting, scanning electron microscopy, X-ray diffraction (XRD), and spontaneous imbibition experiments 17 . Wu et al. analyzed the logging response characteristics using core data and electric imaging logging data and identified the reservoir type with the highest industrial production in the study area 18 . Zhang et al. established classification criteria for the third member of the Quan Formation based on mercury injection curves, core physical properties, and sedimentary facies characteristics 19 . Talib et al. quantitatively characterized tight oil and gas reservoirs through rock physics experiments and seismic inversion profiles 20 .

The second approach to reservoir classification involves initially choosing evaluation parameters that align with the geological conditions of the target area. Subsequently, machine learning techniques such as GRA, the AHP, the EWM, and fuzzy analysis are employed to assign weight coefficients to each evaluation parameter. Finally, the reservoir is comprehensively scored. For example, Fang et al. proposed an automatic classification and verification method for reservoir types based on k-means clustering and Bayesian discriminant theory, using core logging and logging data from coring wells, combined with physical characteristics such as reservoir deposition and diagenesis 21. Li et al. classified the Fuyu reservoir using GRA, Q clustering analysis, and discriminant analysis 22. Wang et al. combined the AHP and the EWM, used the multi-factor superposition method, and established a new reservoir classification and evaluation method 23. Fan et al. quantified the contribution of each evaluation parameter to production by using GRA to relate the evaluation variables to directional well production 24. Niu et al. proposed a new machine learning framework (GCA-CE-MGPK) for shale reservoirs, achieving efficient and accurate multi-scale evaluation of shale reservoirs 25. In summary, traditional classification and evaluation methods are costly, inefficient, and require extensive experimental data. They are mainly suitable for specific regions, making them inadequate for large-scale reservoir evaluation and prediction. Although machine learning techniques can improve efficiency and reduce costs, their accuracy often depends on the optimization of various mathematical methods, leading to high subjectivity in some models and lower overall precision, failing to meet the practical needs of production. Moreover, previous studies have primarily focused on evaluating single factors, lacking the integration of macro and micro perspectives. Based on these considerations, this study combined multiple machine learning methods to directly link actual oilfield production data with micro-scale pore shape and structure parameters, effectively integrating macro and micro parameters.
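As a concrete illustration of one of the weighting techniques named above, the following is a minimal entropy weight method (EWM) sketch in Python; the sample matrix is synthetic and the parameter names are only examples, so this is not the weighting actually computed in the study.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method: rows are samples, columns are evaluation
    parameters (assumed benefit-type, i.e. larger is better)."""
    # Min-max normalize each column, then convert to column-wise proportions.
    Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-12)
    P = Xn / (Xn.sum(axis=0) + 1e-12)
    n = X.shape[0]
    # Shannon entropy per column, treating 0 * log(0) as 0.
    E = -(P * np.log(np.where(P > 0, P, 1.0))).sum(axis=0) / np.log(n)
    d = 1.0 - E                    # degree of divergence of each parameter
    return d / d.sum()             # objective weights summing to 1

# Synthetic example: 5 samples x 3 parameters (e.g. porosity, permeability, median radius).
X = np.array([[ 8.1, 0.12, 0.9],
              [ 9.5, 0.30, 1.4],
              [ 7.2, 0.08, 0.7],
              [10.3, 0.45, 1.8],
              [ 8.8, 0.20, 1.1]])
print(entropy_weights(X).round(3))
```

Parameters whose values vary more across samples carry more information and therefore receive larger objective weights.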

Given the significant influence of subjective factors on the classification criteria used in the quantitative evaluation of conventional reservoirs, a new method for reservoir evaluation is needed. This study focuses on the Chang 8 tight sandstone reservoir in the Jiyuan area of the Ordos Basin. Pore shape parameters were extracted from 52 rock samples and combined with high-pressure mercury injection data and the actual initial production capacity of the oilfield. Using GRA, AHP, EWM, and a comprehensive weighting method, a relationship index Q1 between initial productivity and high-pressure mercury injection parameters and a relationship index Q2 between initial productivity and pore shape parameters were obtained by fitting. A dual-coupled comprehensive quantitative classification and prediction model for tight sandstone reservoirs was then developed based on pore structure and shape parameters. A quantitative classification study was conducted on the target reservoir, the correlation between reservoir quality and pore structure and shape parameters was analyzed, and favourable exploration areas were proposed. The method combines the subjective AHP with the objective EWM to calculate comprehensive weight coefficients, mitigating the impact of subjective factors and enhancing the model's accuracy. Validation results indicate that the model has an overall accuracy of 93.3%, making it an effective tool for predicting and classifying tight sandstone reservoirs and providing guidance for further exploration in the study area and other similar reservoirs.

Geological setting

The Ordos Basin is a large, multi-cycle, cratonic basin that formed on Paleoproterozoic–Mesoproterozoic crystalline basement. As the second-largest sedimentary basin in China, it has experienced five significant stages of sedimentary evolution: the middle to late Proterozoic rift valley, the early Paleozoic shallow marine platform, the late Paleozoic nearshore plain, the Mesozoic inland lake basin, and Cenozoic peripheral subsidence. The basin is known for its substantial reserves of oil and gas. It extends across five provinces and regions, namely, Shaanxi, Gansu, Shanxi, Ningxia, and Inner Mongolia, stretching from the Yin Mountains in the north to the Qinling Mountains in the south and from the Liupan Mountains in the west to the Lvliang Mountains in the east. The basin's total area is 25 × 10⁴ km², with favorable areas covering 9.9 × 10⁴ km². The estimated resource volume is 6.2 × 10¹² m³, indicating significant exploration and development potential. Based on the basin's geological nature, tectonic evolution, and structural pattern, the Ordos Basin can be divided into six primary tectonic units: the northern Shaanxi slope, the Tianhuan Depression, the western thrust fault zone, the Yimeng Uplift, the Weibei Uplift, and the western Shanxi fold belt. The Jiyuan area, located in the central-western part of the Ordos Basin, covers a total area of 1302 km² (Fig. 1 a, c). This area spans the two primary tectonic units of the northern Shaanxi slope and the Tianhuan Depression, exhibiting a gently west-dipping monocline structure. Since the Mesozoic, the basin has developed thick fluvio-lacustrine deposits, and in the Cenozoic, rift valleys formed around the basin due to fault subsidence. The overall geological conditions are relatively complex, posing challenges for exploration; however, the area is rich in oil and gas resources, indicating favourable exploration prospects 26 , 27 , 28 , 29 . The proven petroleum geological reserves in this area amount to 800 × 10⁶ t, with an annual crude oil production of 700 × 10⁴ t, making it the Mesozoic oilfield with the largest reserves and production in the Ordos Basin. Existing exploration results indicate that the Chang 8 oil-bearing formation is one of the most favourable hydrocarbon accumulation zones in the Jiyuan area, with a proven favourable oil-bearing area of 1500 km².

Figure 1

( a ) Location of the study area (modified from Tong 29 ), ( b ) columnar diagram of the Chang 8 formation, ( c ) well location distribution map of the study area.

The Chang 8 reservoir is located in the lower part of the Upper Triassic Yanchang Formation. It is primarily composed of interbedded grey sandstone and dark black mudstone. The sedimentary microfacies are dominated by subaqueous distributary channels and interdistributary bays, indicating a depositional pattern typical of a shallow-water deltaic environment (Fig.  1 b). Based on the thin-section identification of the study area (Fig.  2 ), the lithology of the Chang 8 reservoir is predominantly fine-grained lithic feldspar sandstone and feldspar lithic sandstone, with a small amount of feldspar sandstone. The detrital components in the study area mainly consist of quartz, feldspar, and detritus. The content ranges are as follows: the quartz content is 20.1% to 58.6%, with an average of 31.21%; the feldspar content is 23.56% to 57.62%, with an average of 34.43%; and the detritus content is 6.25% to 29.45%, with an average of 21.38%.

Figure 2

Triangular diagram and detrital composition diagram of the study area. ( a ) Triangular classification diagram of the sandstone in the Chang 8 reservoir, ( b ) histogram of the relative content of detrital components in the Chang 8 reservoir.

Materials and statistical methods

Materials and experiments.

In this study, 52 drilling core samples were obtained from the Chang 8 reservoir in the Jiyuan area of the Ordos Basin, with all samples exhibiting a fine sandstone lithology. The samples underwent oil washing, gas permeability measurements, and porosity determination by the weighing method, allowing the reservoir's petrophysical parameters to be determined (Table 1 ). The samples' average porosity was 8.23%, ranging between 2.41% and 13.6%. The average permeability was 0.18 × 10⁻³ µm², ranging between 0.01 × 10⁻³ µm² and 1.10 × 10⁻³ µm². Subsequently, thin-section casting and scanning electron microscopy experiments were conducted, resulting in 300 photographs. Additionally, high-pressure mercury intrusion was performed on the 52 samples to obtain the micropore throat characteristic parameters.

High pressure mercury intrusion and scanning electron microscopy

The high-pressure mercury intrusion experiment was used to quantitatively evaluate the micropore throat characteristics of the reservoir. This is achieved by recording the pressure changes as mercury is injected into the pores, analyzing the characteristics of the capillary pressure curves, and studying the relationship between the intruded mercury volume and these characteristics 30 , 31 . In this experiment, an AutoPore IV 9530 fully automated mercury porosimeter was used, with a pore diameter measurement range of 3 nm to 1100 μm. Continuous mercury injection was employed, with a volume accuracy of better than 0.1 μl for both injection and withdrawal. The experimental procedure followed the Chinese national standard GB/T 29171-2012, and the maximum mercury injection pressure reached 95.39 MPa.
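The text does not reproduce the pressure–radius relationship used to convert injection pressure into pore throat size; for orientation, mercury intrusion data are conventionally interpreted with the standard Washburn relation \( r = \frac{2\sigma \cos \theta }{P_{c} } \), where \(P_{c}\) is the mercury injection (capillary) pressure, \(\sigma\) the surface tension of mercury, \(\theta\) the wetting contact angle, and \(r\) the corresponding pore throat radius.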

Scanning electron microscopy (SEM) allows high-resolution morphological observation and analysis of samples, as well as structural and compositional characterization. It also enables direct observation of the development characteristics of the micro-pore throats in the reservoir 32 , 33 , 34 . The experiments employed a JEOL JSM-7500F field emission scanning electron microscope, which achieves a secondary-electron image resolution of 1 nm and magnifications ranging from 20 to 300,000 times.

Pore parameter extraction technology

ImageJ, initially developed by Wayne Rasband at the National Institutes of Health in the United States, is a powerful open-source image processing system written in Java. It was first applied in the biomedical and agricultural sciences 35 . Recently, an increasing number of scholars have used it to identify and extract reservoir pore and fracture features 36 , 37 , 38 , 39 . In this study, ImageJ was used to process 210 scanning electron microscope images and to extract pore parameters including the perimeter, circularity, major axis length, aspect ratio, and solidity.
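As an illustration of how such shape parameters can be derived from a segmented SEM image, the sketch below uses the standard ImageJ-style definitions (circularity = 4πA/P², aspect ratio = major/minor axis of the fitted ellipse, solidity = area/convex area). The file name and threshold are hypothetical placeholders, and this is a minimal sketch rather than the authors' exact ImageJ workflow.

```python
import numpy as np
from skimage import io, filters, measure

# Hypothetical input: a grayscale SEM image in which pores appear dark.
image = io.imread("sem_sample_01.tif", as_gray=True)

# Segment pores with a global Otsu threshold (a simplifying assumption).
pores = image < filters.threshold_otsu(image)

# Measure each connected pore region and report the shape parameters.
labels = measure.label(pores)
for region in measure.regionprops(labels):
    if region.area < 50:                                      # ignore tiny artifacts
        continue
    perimeter = region.perimeter
    circularity = 4 * np.pi * region.area / perimeter ** 2    # 1.0 for a perfect circle
    aspect_ratio = region.major_axis_length / region.minor_axis_length
    solidity = region.solidity                                 # area / convex hull area
    print(region.label, perimeter, circularity,
          region.major_axis_length, aspect_ratio, solidity)
```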

Statistical methodology

GRA addresses infinite-space problems using finite sequences. It aims to evaluate the correlations between various factors within a system and to determine the significance of each factor for the target function, avoiding the subjective process of manually assigning weights to factor indicators 40 . In recent years, GRA has been applied to production forecasting and development plan optimization for tight sandstone reservoirs 41 , 42 , 43 , 44 . The specific steps are as follows.

Determine the initial sequence:

where X 0 is the reference sequence, X i is the comparative sequence, i is the number of comparative sequences, m is the number of independent variables, and n is the number of samples.

Normalize the data using the extreme value method:

Calculate the gray correlation coefficient:

Obtain the gray correlation coefficient matrix:

where ρ is the resolution coefficient, which takes values between 0 and 1. A smaller resolution coefficient indicates greater differences between the correlation coefficients and stronger discriminatory power. Usually, ρ is set to 0.5.

Determine the correlation degree. Represent the correlation strength between the series using the average of the n correlation coefficients:

where \(\varepsilon_{0i}\) represents the correlation degree between the i -th comparative sequence and the reference sequence.

Determine the weights and rank the correlation degrees. Normalize the correlation degrees to obtain the weight W i of each comparative sequence:
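As the equations for these steps are not reproduced above, the following minimal numerical sketch of the procedure uses the standard Deng grey relational formulation with the resolution coefficient ρ = 0.5 mentioned in the text; the array contents are placeholders, not the study's data.

```python
import numpy as np

def gra_weights(x0, X, rho=0.5):
    """Grey relational analysis: weights of the comparative sequences X w.r.t. reference x0.
    x0: (n,) reference sequence (e.g. initial production of n samples)
    X : (m, n) comparative sequences (m evaluation parameters, n samples)
    """
    # Min-max (extreme value) normalization of every sequence.
    def norm(a):
        return (a - a.min()) / (a.max() - a.min())
    x0n = norm(x0)
    Xn = np.array([norm(x) for x in X])

    # Grey correlation coefficients for every parameter/sample pair.
    diff = np.abs(Xn - x0n)
    xi = (diff.min() + rho * diff.max()) / (diff + rho * diff.max())

    # Correlation degree: mean coefficient of each comparative sequence.
    eps = xi.mean(axis=1)

    # Normalize the correlation degrees to obtain the weights W_i.
    return eps / eps.sum()

# Placeholder example with 3 parameters and 5 samples.
x0 = np.array([2.1, 4.5, 1.3, 3.8, 5.0])
X = np.random.rand(3, 5)
print(gra_weights(x0, X))
```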

AHP is a methodology that categorizes the factors within a complex problem into interconnected and prioritized levels. This approach facilitates the process of making decisions based on multiple criteria. It is primarily used to determine the weighting coefficients for comprehensive evaluations 45 , 46 , 47 . The process is as follows.

Construction of a judgment matrix: a judgment matrix is constructed to compare the importance of different factors:

where A is the matrix of pairwise comparisons, W is the weight vector, and λ max is the maximum eigenvalue.

Calculation of weights: the weight vector W is determined using the sum-product method.

Consistency check:

where n is the number of elements, I c is the consistency index, I R is the random consistency index, I cR is the consistency ratio, and \(\lambda^{\prime}_{\max }\) is the average of the maximum eigenvalues.

If I cR  < 0.10, the consistency of the judgment matrix is considered acceptable.
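For reference, a compact sketch of the AHP weighting via the sum-product (normalized column) method and the consistency check described above is given below; the 3×3 judgment matrix is a made-up placeholder rather than the matrix in Table 4, and the random consistency indices follow the commonly tabulated Saaty values.

```python
import numpy as np

# Commonly used random consistency indices I_R (Saaty) by matrix order.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights(A):
    """Sum-product AHP weights plus the consistency ratio I_cR."""
    n = A.shape[0]
    # Normalize each column and average the rows to get the weight vector W.
    W = (A / A.sum(axis=0)).mean(axis=1)
    # Approximate the maximum eigenvalue from A @ W.
    lam_max = ((A @ W) / W).mean()
    CI = (lam_max - n) / (n - 1)     # consistency index I_c
    CR = CI / RI[n]                  # consistency ratio I_cR
    return W, CR

# Hypothetical judgment matrix from 9-point-scale pairwise comparisons of 3 parameters.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
W, CR = ahp_weights(A)
print(W, CR)   # consistency is acceptable if CR < 0.10
```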

EWM is an objective weighting approach that comprehensively examines the underlying patterns and informational value of unprocessed data. It can determine the uncertainty in variables through entropy values, where larger information content corresponds to smaller uncertainty and smaller entropy, and vice versa. The entropy weighting method is characterized by high accuracy and strong objectivity, and many scholars have applied it to oilfield production with good results 48 , 49 . The basic steps are as follows.

Normalize the data and calculate the information entropy:

where E i is the information entropy of the i th indicator, X ij is the value of the i th indicator on the j th sample, and N is the number of samples.

Calculate the weights:

where W i is the weight of the i th indicator, E i is the information entropy of the i th indicator, and M is the number of indicators.
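A minimal sketch of the entropy weight calculation, consistent with the definitions above, is shown below; the data matrix is a random placeholder, not the study's indicator table.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method.
    X: (N, M) data matrix with N samples and M indicators.
    Returns the weight of each of the M indicators.
    """
    # Min-max normalization per indicator, shifted slightly to avoid log(0).
    Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0)) + 1e-12
    P = Xn / Xn.sum(axis=0)                        # proportion of each sample per indicator
    N = X.shape[0]
    E = -(P * np.log(P)).sum(axis=0) / np.log(N)   # information entropy of each indicator
    d = 1.0 - E                                    # divergence: lower entropy -> higher weight
    return d / d.sum()

X = np.random.rand(52, 7)                          # placeholder: 52 samples, 7 indicators
print(entropy_weights(X))
```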

Comprehensive weight coefficient

Weight coefficients can be used to classify and evaluate the reservoir quality effectively, and several methods are currently available to determine the weight coefficients. These include GRA, the expert evaluation method, Q clustering analysis and discriminant analysis, and factor analysis 50 , 51 , 52 . In this research, a comprehensive weight analysis methodology that integrated AHP and EWM was employed. The key advantage of this approach lies in its amalgamation of the subjective AHP analysis and the objective numerical analysis of EWM. This combination helps to mitigate the influence of subjective factors to a certain extent, thereby enhancing the reliability of the data.
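The combined weighting equation itself (Eq. ( 17 ) referenced later in the text) is not reproduced here. A commonly used multiplicative form consistent with the variables defined below would be \( W_{i} = \frac{W_{i\mathrm{AHP}} \, W_{i\mathrm{EWM}} }{\sum_{j=1}^{M} W_{j\mathrm{AHP}} \, W_{j\mathrm{EWM}} } \); this form is an assumption shown for orientation only, not a reproduction of the original equation.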

where W iAHP is the weight coefficient obtained from the AHP method, and W iEWM is the weight coefficient obtained from the EWM method.

Results and discussion

Evaluation parameter selection.

Tight sandstone reservoirs are influenced by deposition, tectonics, and diagenesis. These reservoirs show significant heterogeneity and an intricate distribution of micropore throats. The pore structure plays a crucial role in governing the storage and flow behaviour of the reservoir, and the shape parameters of the pores in turn govern the micropore structure of the rock 53 , 54 , 55 , 56 , 57 . Considering these characteristics, this study aimed to provide a quantitative characterization of the reservoir by assessing three key aspects: the pore structure, the physical properties, and the pore shape parameters. Twelve parameters were selected to establish the relationship between the initial production capacity index and the pore structure and shape parameters, with the actual initial production capacity of the oilfield used as the indicator.

Sensitivity parameter selection for pore structure characteristics

The 52 selected samples were subjected to high-pressure mercury intrusion experiments using the AutoPore IV 9530 automatic mercury porosimeter. The sorting coefficient varied between 1.5 and 2.74, with an average of 2.10. The coefficient of variation ranged between 13.94 and 17.32, with a mean value of 15.54. The median pressure varied between 10.5 and 18.79 MPa, with an average value of 13.86 MPa. The average displacement pressure was 1.23 MPa, ranging between 0.09 and 2.57 MPa. The median radius had a mean value of 0.09 μm and varied from 0.05 to 0.15 μm. The maximum mercury saturation varied from 62.77 to 93.76%, with a mean value of 84.52%. The mercury withdrawal efficiency varied between 16.7 and 46.6%, with an average of 34.90%. Overall, the pore structure of the reservoir in the study area was poor, with uneven sorting and poor connectivity among the pore throats, indicating strong heterogeneity. Correlation analysis was conducted between the initial production and the mercury intrusion parameters (Fig.  3 ); the correlations of production capacity with permeability and porosity were the strongest, with correlation coefficients (R 2 ) of 0.91 and 0.75, respectively. This is mainly because porosity plays a crucial role in determining the size of the pore space within a reservoir, while permeability governs its flow capacity. In tight sandstone reservoirs, the reservoir quality often depends on favourable porosity and permeability. The sorting coefficient and coefficient of variation provide insights into the uniformity of the pore throat size distribution; higher values of these parameters indicate an improved pore structure and increased reservoir productivity. The median radius and median pressure indicate the pore permeability of the reservoir. A larger median radius and a smaller median pressure indicate a larger pore space and stronger flow capacity, resulting in a larger oil production capacity. Therefore, the median radius positively correlates with production, while the median pressure is inversely correlated. The displacement pressure is also inversely correlated with production (R 2  = 0.65), because the displacement pressure is the capillary pressure corresponding to the largest connected pore throat; a higher displacement pressure means a higher capillary pressure and more difficult fluid flow, indicating that tight oil has poor flow capacity in the reservoir and is more difficult to accumulate and extract. In conclusion, the initial production capacity is sensitive to the porosity, permeability, sorting coefficient, coefficient of variation, median pressure, median radius, and displacement pressure.

Figure 3

Relationships between initial production and porosity, permeability, sorting coefficient, coefficient of variation, median pressure, median radius, and displacement pressure.

Selection of pore-shape-sensitive parameters

A total of 210 high-resolution SEM images were captured for the 52 samples. The rock core pores were identified and extracted using ImageJ, yielding pore shape parameters such as the perimeter, circularity, major axis length, aspect ratio, and solidity (Fig.  4 , Table 2 ). The average values of the identified pore shape parameters were then calculated for each sample. The pore perimeters of the 52 samples varied between 40.3 and 486.2 μm, with a mean value of 250.5 μm. The circularity ranged between 0.11 and 0.96, with a mean value of 0.31. The major axis lengths of the circumscribed ellipses spanned from 42.52 to 221.19 μm, with an average of 111.67 μm. The aspect ratios ranged from 1.14 to 2.92, with an average value of 2.32. The solidity values ranged between 0.09 and 0.89, with an average of 0.67. In general, the pore shape parameters of the tight sandstone reservoirs exhibited a wide range of variation, with relatively large average perimeters, major axis lengths of the circumscribed ellipses, aspect ratios, and solidity, and a small average circularity (Fig.  5 ). This indicates that the pore shapes in tight sandstone are diverse, predominantly irregular and elongated, with few circular pores. Pearson correlation analysis was conducted between the pore structure parameters identified as most sensitive and the extracted pore shape parameters (Fig.  6 ). The Pearson correlation coefficient (r) always lies between −1 and 1: a value closer to 1 indicates a stronger positive relationship between two variables, a value closer to −1 indicates a stronger negative relationship, and a value closer to 0 indicates a weak relationship. A significant and strong correlation (|r| > 0.5) was observed between the different pore shape parameters and the mercury injection parameters, suggesting that the shape parameters of the pores play a crucial role in determining the pore structures of tight sandstone reservoirs. In general, the perimeter, circularity, and major axis length of the pores displayed a positive correlation with the porosity (Φ), permeability (K), sorting coefficient (S p ), coefficient of variation (D r ), and median radius (R 50 ). Conversely, they exhibited a negative correlation with the median pressure (P 50 ) and displacement pressure (P d ). The aspect ratio and solidity of the pores were inversely proportional to the porosity, permeability, sorting coefficient, coefficient of variation, and median radius, but positively correlated with the median pressure and displacement pressure. Among them, there was a strong positive correlation (r = 0.914) between the perimeter and porosity and a relatively strong negative correlation (r = –0.766) between the perimeter and the displacement pressure. A larger pore perimeter results in a greater contact area between the reservoir fluid and the solid, facilitating fluid infiltration and storage. Circularity was strongly positively correlated with permeability (r = 0.927) and negatively correlated with the displacement pressure (r = –0.604). This is because a larger circularity indicates a closer approximation to circular pores, which typically exhibit a uniform distribution, resulting in improved connectivity and fluid flow.
The major axis length was strongly positively correlated with the permeability and porosity because the major axis length of the circumscribed ellipse affects the connectivity and the fluid flow path within the pores. A larger major axis length indicates better connectivity between pores, resulting in a more direct fluid flow path and higher permeability; moreover, a longer major axis corresponds to a larger pore size and higher porosity. The aspect ratio exhibited strong negative correlations with the permeability and sorting coefficient (r = –0.866 and r = –0.754, respectively) and a strong positive correlation with the displacement pressure (r = 0.652). As the aspect ratio increases, the pores become narrower and more uneven, resulting in longer and narrower flow channels and making fluid flow more difficult; as a result, the displacement pressure increases while the sorting coefficient and permeability decrease. Solidity exhibited a strong negative correlation with permeability (r = –0.862) and a positive correlation with the displacement pressure (r = 0.574). As the solidity increases, the pore shape becomes more concave, the roundness deteriorates, and fluid flow between the pores becomes more difficult. In conclusion, the perimeter, circularity, major axis of the circumscribed ellipse, aspect ratio, and solidity of the pores are sensitive to the mercury intrusion parameters.
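As a brief illustration of how such a Pearson correlation matrix can be computed, the snippet below uses a random placeholder table in place of the study's 52-sample parameter set.

```python
import numpy as np
import pandas as pd

# Placeholder table: rows are samples, columns are pore structure and shape parameters.
df = pd.DataFrame(np.random.rand(52, 5),
                  columns=["porosity", "permeability", "perimeter", "circularity", "solidity"])

# Pearson correlation matrix; values close to +1/-1 indicate strong positive/negative relationships.
print(df.corr(method="pearson").round(3))
```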

Figure 4

Visualization of pore extraction results for rock samples. ( A ) Pore identification (sample no. 1), ( B ) pore extraction (sample no. 1), ( C ) pore identification (sample no. 10), ( D ) pore extraction (sample no. 10), ( E ) pore identification (sample no. 25), ( F ) pore extraction (sample no. 25).

Figure 5

Distribution of pore shape parameters. ( a ) Distribution range of pore perimeter and major axis, ( b ) distribution range of pore circularity, solidity, and aspect ratio.

Figure 6

Correlations between pore structural parameters and pore shape parameters.

Reservoir classification evaluation

Quantitative classification prediction formula.

Based on the results of the GRA, AHP, and EWM, a comprehensive quantitative classification prediction formula was constructed using the superposition principle. This formula was then used to classify and evaluate tight sandstone reservoirs.
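The prediction equation itself is not reproduced above; given the superposition principle and the variables defined below, it presumably takes the weighted-sum form \( Q = \sum_{i = 1}^{n} a_{i} \, b_{i,N} \).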

where Q is the productivity index, a i are the dimensionless weight coefficients of the parameters, b i,N are the dimensionless normalized parameters, and n is the number of parameters.

Determination of weight coefficients

In this study, the initial production rate, which directly reflects the reservoir quality, was taken as the reference sequence, and seven sensitive parameters, namely, the porosity, permeability, sorting coefficient, coefficient of variation, median pressure, median radius, and displacement pressure, were taken as comparative sequences. The principles and steps of GRA were used to determine the weights of the parameters and thereby assess the sensitivity of each factor to the initial production rate (Table 3 ). The correlation degrees between the sensitive parameters and the initial productivity determined by the grey correlation method were then combined with pairwise comparisons of the parameters, with values assigned using the 9-point scale method; the judgment matrix was obtained from the pairwise comparisons of the seven sensitive parameters (Table 4 ). Subsequently, the weight coefficients were determined using the sum-product method within the AHP (Table 5 ). Formula ( 14 ) shows that the consistency ratio of the judgment matrix, I cR  = 0.093, is less than 0.1, meeting the consistency requirement. The EWM was then employed to conduct an objective analysis of each sensitive parameter, yielding objective weight indices, and the comprehensive weight coefficients were calculated using Eq. ( 17 ) (Table 5 ). The formula relating the initial productivity to the mercury intrusion sensitivity parameters can then be obtained as follows:

where Φ N is the normalized porosity, K N is the normalized permeability, S P,N is the normalized sorting coefficient, Dr, N is the normalized coefficient of variation, P 50,N is the normalized median pressure, R 50,N is the normalized median radius, and P d,N is the normalized displacement pressure.

Then, using the mercury intrusion parameter as the fundamental sequence, five sensitive parameters related to the pore shape, namely, the perimeter, circularity, major axis length, aspect ratio, and solidity, were considered sub-sequences. The correlation between the mercury intrusion parameters and the pore-shape-sensitive parameters was determined using GRA. The comprehensive weight coefficients for each mercury intrusion parameter were calculated using a combination of the AHP and the EWM (Table 6 ). Based on these weight coefficients, the correlation formulas between each mercury intrusion parameter and the pore shape parameters were obtained as follows:

Combined with Formula ( 19 ), the relationship between the initial productivity and pore shape parameters can be obtained:

where P N is the normalized perimeter, C N is the normalized circularity, M N is the normalized major axis, A N is the normalized aspect ratio, and S N is the normalized solidity.

Classification scheme and feature evaluation

Based on the index Q1, which relates the initial productivity to the high-pressure mercury intrusion sensitivity parameters, and the index Q2, which relates the initial productivity to the pore shape parameters, a classification and evaluation scheme for the Chang 8 tight sandstone reservoir has been determined. As depicted in Fig.  7 , Q1 for type III reservoirs ranges from 0.1 to 0.31, and Q2 ranges between 0.3 and 0.44. For type II reservoirs, Q1 ranges from 0.32 to 0.47, and Q2 ranges from 0.44 to 0.56. For type I reservoirs, Q1 ≥ 0.5 and Q2 ≥ 0.5. Moreover, type I reservoirs can be further divided into type I 1 , comprising high-yield reservoirs, and type I 2 , comprising high-quality reservoirs, with Q1 > 0.7 and Q2 > 0.57 indicating type I 1 high-yield reservoirs. Type I reservoirs are considered optimal for the Chang 8 formation; 15 of the 52 samples belong to this type, accounting for 28.8%. This reservoir type is characterized by favourable porosity and permeability, with an average porosity of 11.1% and an average permeability of 0.4 × 10⁻³ µm². Additionally, these reservoirs have a low displacement pressure of 0.62 MPa, a low median pressure of 11.79 MPa, and a relatively high median radius of 0.12 µm. The reservoirs exhibit good pore throat sorting, characterized by a large sorting coefficient (2.5) and coefficient of variation (16.43). The average pore perimeter is relatively long (360.30 µm), with good circularity (0.50) and a small aspect ratio (1.92), indicating that the pore shapes are relatively regular and nearly circular. Type II displays moderate petrophysical characteristics, with an average porosity of 8.43% and an average permeability of 0.1 × 10⁻³ µm²; 19 samples, accounting for 36.54% of the dataset, belong to this type. Compared with type I, this reservoir type has somewhat higher average displacement and median pressures (1.11 MPa and 13.48 MPa, respectively). The median radius is lower (0.10 µm), and the average sorting coefficient and coefficient of variation are 2.41 and 16.18, respectively, indicating moderate sorting. The average pore perimeter of this reservoir type is smaller than that of type I (261.61 µm), with smaller circularity (0.26) and a larger aspect ratio (2.41); compared with type I, the pores of type II reservoirs are more irregular and elongated. Type III exhibits the poorest petrophysical properties, with an average permeability of 0.06 × 10⁻³ μm² and an average porosity of 5.7%, significantly lower than those of types I and II; 18 samples belong to this type, accounting for 34.62%. This reservoir type has an average displacement pressure of 1.89 MPa and a median pressure of 16.1 MPa, both greater than those of type II. The median radius is the smallest (0.07 µm), and the average sorting coefficient and coefficient of variation are 1.81 and 14.7, respectively, indicating poor pore throat sorting. The average pore perimeter is the smallest (147.37 µm), with the poorest circularity (0.19) and the largest aspect ratio (2.56), indicating that the pores of type III reservoirs are more elongated and slender, which is unfavourable for fluid flow and leads to poor reservoir permeability. In summary, as the reservoir quality deteriorates, the pore structure becomes increasingly worse, and the pore shapes become more complex and variable.
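The classification boundaries above can be summarized in a small helper function; the sketch below simply encodes the stated Q1/Q2 cut-offs, and the handling of values falling in the gaps between the published ranges is an assumption.

```python
def classify_reservoir(q1, q2):
    """Map the comprehensive indices Q1 and Q2 to a reservoir type, encoding the
    Q1/Q2 cut-offs reported for the Chang 8 scheme (checked from best to worst)."""
    if q1 > 0.7 and q2 > 0.57:
        return "Type I1 (high-yield)"
    if q1 >= 0.5 and q2 >= 0.5:
        return "Type I"
    if 0.32 <= q1 <= 0.47 and 0.44 <= q2 <= 0.56:
        return "Type II"
    if 0.1 <= q1 <= 0.31 and 0.3 <= q2 <= 0.44:
        return "Type III"
    return "Unclassified"   # values in the gaps between the published ranges

print(classify_reservoir(0.75, 0.60))   # Type I1 (high-yield)
print(classify_reservoir(0.20, 0.35))   # Type III
```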

Figure 7

Comprehensive quantitative classification prediction model for the research area of the Chang 8 reservoir.

According to the distribution maps of the well locations and sedimentary microfacies (Figs.  1 c, 8 ), it is observed that type I reservoir wells are mostly found in the northwest of the research region, within the subaqueous distributary channels, exhibiting a zigzag pattern. Most type II reservoir wells are located in the study area's centre, mainly within the middle portions of the subaqueous distributary channel's lateral sand bodies. On the other hand, the relatively poor type III reservoir wells are scattered around the type II reservoirs, with most of them located in the marginal areas adjacent to the interdistributary bay and the edge of the channel’s lateral sand bodies. Therefore, in practical exploration and development, the high-quality reservoirs (type I) in the study area's northwest part should be prioritised.

Figure 8

Planar distribution map of comprehensive quantitative classification for the research area of the Chang 8 reservoir.

Additionally, the main reason for the high productivity of type I 1 reservoirs is the higher content of dissolution pores in type I reservoirs. According to Table 7 and Fig.  9 , samples 3, 15, 16, and 20 from the type I reservoirs exhibit well-developed feldspar dissolution pores and intergranular pores, together with a small number of rock fragment dissolution pores. The average absolute contents of feldspar dissolution pores and intergranular pores are 1.2% and 5.15%, respectively, and the average areal porosity (face rate) is 0.8%, higher than in the other samples. The greater the development of feldspar dissolution and intergranular pores, the larger the flow channels and storage space they provide, improving the reservoir's porosity and permeability and resulting in high-productivity reservoirs. The pore shape parameters of samples 3, 15, 16, and 20 were compared with those of the other samples (Table 2 ). These four samples have longer pore perimeters and major axes, larger shape factor (roundness) coefficients, and relatively smaller aspect ratios and concavity. This indicates that high-productivity reservoirs (type I 1 ) have larger pore perimeters, an increased contact area between the pores and reservoir fluids, higher pore circularity, and more circular pore shapes favourable for fluid flow and storage. Furthermore, as shown in Fig.  8 , the four high-productivity wells (JY-3, JY-15, JY-16, JY-20) are all located on the main channel of the subaqueous distributary channel system; from a macro perspective, thicker sand bodies may therefore be another reason for their high productivity.

Figure 9

Pore structure of type I 1 reservoirs. ( A ) Intergranular pores and developed dissolution pores (sample no. 3), ( B ) feldspar dissolution pores (sample no. 20), ( C ) rock fragment dissolution pores (sample no. 15), ( D ) intergranular pores and locally developed dissolution pores (sample no. 16).

Model validation

To verify the model, 15 cored wells in the Jiyuan Chang 8 reservoir were selected. High-pressure mercury intrusion tests, scanning electron microscopy, and thin-section casting experiments were conducted on the corresponding samples to extract the pore shape parameters. The comprehensive classification indices Q1 and Q2 were then determined using GRA, AHP, and EWM, and the classification results were compared against the existing oil test data. As shown in Fig.  10 , three wells were classified as type I reservoirs, with an average initial yield of 5.73 t/d. Six wells were classified as type II reservoirs, with an average initial yield of 3.52 t/d, lower than that of type I; one of these wells was misclassified, deviating from the expected value. Five wells were classified as type III reservoirs, with the lowest average initial yield of 1.32 t/d. The quantitative evaluation based on the comprehensive parameters matched the actual production capacity results, with a matching rate of 93.3%. Compared with conventional models proposed by other scholars for tight sandstone reservoirs, this model establishes a direct connection between actual oilfield production data, microscale pore shape parameters, and pore structure parameters, leading to a quantitative reservoir classification evaluation 58 , 59 , 60 , and it demonstrates higher and more stable classification accuracy.

Figure 10

Comparative analysis of the integrated quantitative classification prediction for the Chang 8 reservoir.

Conclusions

Tight sandstone reservoirs display significant heterogeneity and intricate microscopic pore structures, which affect the accuracy of reservoir assessment. This study used scanning electron microscopy, thin-section, and high-pressure mercury intrusion data from core samples, together with image recognition technology and machine learning methods, to develop a novel classification and evaluation system for tight sandstone reservoirs based on microscopic pore structures. The method requires relatively little experimental data, is cost-effective, achieves relatively high model accuracy, and is particularly suitable for tight sandstone reservoirs. The research conclusions are as follows:

By analyzing high-pressure mercury intrusion parameters, scanning electron microscopy images, and thin sections from the Chang 8 reservoir in the study area, a comprehensive quantitative classification prediction model for tight sandstone reservoirs was established. The model was constructed using twelve sensitive parameters: porosity, permeability, sorting coefficient, coefficient of variation, median pressure, median radius, displacement pressure, pore perimeter, circularity, major axis length, aspect ratio, and solidity, with the pore shape parameters extracted using image recognition technology.

The case study based on the comprehensive quantitative classification prediction model showed that Q1 ≥ 0.5 and Q2 ≥ 0.5 corresponded to type I reservoirs, while Q1 > 0.7 and Q2 > 0.57 corresponded to type I 1 high-yield reservoirs. When 0.32 < Q1 < 0.47 and 0.44 < Q2 < 0.56, a type II reservoir was identified. When 0.1 < Q1 < 0.32 and 0.3 < Q2 < 0.44, a type III reservoir was identified. Additionally, the presence of high-content dissolution pores, intergranular pores, and larger pore perimeters, as well as higher pore circularity, were the main factors contributing to high-yield reservoirs (type I 1 ). The model was validated, achieving an overall accuracy of 93.3%, which indicates its effectiveness in predicting the classification and evaluation of tight reservoirs.

Reservoir quality is influenced by the pore structure characteristics and shape parameters. In tight sandstone reservoirs, the productivity is positively correlated with the porosity, permeability, sorting coefficient, coefficient of variation, and median radius, but negatively correlated with the median pressure and displacement pressure. The perimeter, circularity, and major axis length of the pores are positively correlated with the porosity, permeability, sorting coefficient, coefficient of variation, and median radius, but negatively correlated with the median pressure and displacement pressure.

Type I reservoir wells were primarily found in the northwest of the research region, within the subaqueous distributary channels, exhibiting a zigzag pattern. The majority of type II reservoir wells were located in the study area's center, mostly within the middle portions of the subaqueous distributary channel’s lateral sand bodies. In contrast, the relatively inferior type III reservoir wells were dispersed among the type II reservoirs, primarily situated in the marginal zones bordering the interdistributary bay and the periphery of the channel’s lateral sand bodies. Therefore, in terms of practical exploration and development, priority should be given to the superior reservoirs (type I) in the northwestern sector of the research region.

The evaluation results of the quantitative classification of tight sandstone reservoirs using machine learning are generally consistent with previous multiparameter conventional evaluation studies. However, this approach effectively integrates macroscopic and microscopic parameters, resulting in higher model accuracy, easier operation, and lower costs. It is particularly suitable for large-scale quality assessments of tight sandstone reservoirs, offering essential guidance for further exploration in the study area and other similar reservoirs.

Data availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Abbreviations

AHP: Analytic hierarchy process

GRA: Grey relational analysis

EWM: Entropy weight method

XRD: X-ray diffraction

SEM: Scanning electron microscopy

Fine-grained lithic feldspar sandstone

Fine-grained feldspar lithic sandstone

Fine-grained feldspar sandstone

GCA-CE-MGPK: Grey correlation analysis, clustering ensemble, and the Kriging model combined with macro geological parameters

Ledingham, G. W. Santiago pool, California: Geological note. AAPG Bull. 31 (11), 2063–2067 (1947).

Wang, Q. R., Tao, S. Z. & Guan, P. Progress in research and exploration & development of shale oil in continental basins in China. Nat. Gas Geosci. 31 (3), 417–427 (2020).

Zou, C. N. et al. Geological concepts, characteristics, resource potential and key techniques of unconventional hydro-carbon: On unconventional petroleum geology. Pet. Explor. Dev. 40 (4), 385–399 (2013).

Zou, C. N. et al. Progress in China’s unconventional oil & gas exploration and development and theoretical technologies. Acta Geol. Sin. 89 (6), 979–1007 (2015).

Zhou, N. et al. Limits and grading evaluation criteria of tight oil reservoirs in typical continental basins of China. Petrol. Explor. Dev. 48 (05), 1089–1100 (2021).

Zhao, W. et al. Types and resource potential of continental shale oil in China and its boundary with tight oil. Petrol. Explor. Dev. 47 (01), 1–11 (2020).

Sun, L. et al. Development characteristics and orientation of tight oil and gas in China. Petrol. Explor. Dev. 46 (06), 1073–1087 (2019).

Xiang, F. et al. Classification evaluation criteria and exploration potential of tight oil resources in key basins of China. J. Nat. Gas Geosci. 4 (6), 309–319 (2019).

Gao, X., Chen, J., Xu, R., Zhen, Z., Zeng, X., Chen, X. & Cui, L. Research progress and prospect of the materials of bipolar plates for proton exchange membrane fuel cells (PEMFCs)[J]. International Journal of Hydrogen Energy . 50 , 711–743 (2024).

Wang, J., Wu, S., Li, Q. & Guo, Q. An investigation into pore structure fractal characteristics in tight oil reservoirs: A case study of the Triassic tight sandstone with ultra-low permeability in the Ordos Basin, China. Arab. J. Geosci. 13 (18), 961 (2020).

Gao, H., Cao, J., Wang, C., He, M., Dou, L., Huang, X. & Li, T. Comprehensive characterization of pore and throat system for tight sandstone reservoirs and associated permeability determination method using SEM, rate controlled mercury and high pressure mercury. J. Petrol. Sci. Eng . 174 (2018).

Gao, H. et al. Effect of pressure pulse stimulation on imbibition displacement within a tight sandstone reservoir with local variations in porosity. Geoenergy Sci. Eng. 226 , 211811 (2023).

Wang, C., Gao, H., Gao, Y. & Fan, H. Influence of pressure on spontaneous imbibition in tight sandstone reservoirs. Energy Fuels 34 (8), 9275–9282 (2020).

Wang, C., Li, T., Gao, H., Zhao, J. & Gao, Y. Quantitative study on the blockage degree of pores due to asphaltene precipitation in low-permeability reservoirs with NMR technique. J. Petrol. Sci. Eng. 163 , 703–711 (2018).

Gao, H. et al. Effects of pore structure and salinity on the imbibition of shale samples using physical simulation and NMR technique: A case from Chang 7 shale, Ordos basin. Simulation. 97 (2), 167–173 (2021).

Wei, H. et al. Classification of tight sandstone reservoirs based on the nuclear magnetic resonance T 2 distribution: A case study on the Shaximiao Formation in Central Sichuan, China. Energy Fuels 36 , 10803–10812 (2022).

Xu, J. et al. Characteristics and controlling factors of tight gas sandstones from the Upper Shanxi and Lower Shihezi Formations in the Northern Sulige Area, Ordos Basin, China. Energy Fuels 37 (20), 15712–15729 (2023).

Wu, X. et al. A novel evaluation method of dolomite reservoir using electrical image logs: The Cambrian dolomites in Tarim Basin, China. Geoenergy Sci. Eng. 233 , 212509 (2024).

Zhang, Q. et al. Comprehensive evaluation and reservoir classification in the Quan 3 member of the Cretaceous Quantou Formation in the Fuxin Uplift, Songliao Basin. Front. Earth Sci. 10 , 1016924 (2022).

Talib, M., Durrani, M. Z. A., Palekar, A. H., Sarosh, B. & Rahman, S. A. Quantitative characterization of unconventional (tight) hydrocarbon reservoir by integrating rock physics analysis and seismic inversion: A case study from the Lower Indus Basin of Pakistan. Acta Geophys. 70 (6), 2715–2731 (2022).

Fang, X., Zhu, G., Yang, Y., Li, F. & Feng, H. Quantitative method of classification and discrimination of a porous carbonate reservoir integrating k-means clustering and Bayesian theory. Acta Geol. Sin. (Beijing) 97 (1), 176–189 (2023).

Li, Y. et al. Microscopic pore-throat grading evaluation in a tight oil reservoir using machine learning: A case study of the Fuyu oil layer in Bayanchagan area, Songliao Basin central depression. Earth Sci. Inform. 14 (2), 601–617 (2021).

Wang, Z. et al. Quantitative evaluation of unconsolidated sandstone heavy oil reservoirs based on machine learning. Geol. J. (Chichester, England). 58 (6), 2321–2341 (2023).

Fan, J., Shi, J., Wan, X., Xie, Q. & Wang, C. Classification evaluation method for Chang 7 oil group of Yanchang formation in Ordos Basin. J. Pet. Explor. Prod. Te. 12 , 825–834 (2021).

Niu, D. et al. Multi-scale classification and evaluation of shale reservoirs and “sweet spot” prediction of the second and third members of the Qingshankou Formation in the Songliao Basin based on machine learning. J. Petrol Sci. Eng. 216 , 110678 (2022).

Li, C. et al. Oil charging pore throat threshold and accumulation effectiveness of tight sandstone reservoir using the physical simulation experiments combined with NMR. J. Petrol. Sci. Eng. 208 , 109338 (2022).

Li, S. et al. The dissolution characteristics of the Chang 8 tight reservoir and its quantitative influence on porosity in the Jiyuan area, Ordos Basin, China. J. Nat. Gas Geosci. 3 (2), 95–108 (2018).

Song, X. et al. Analysis of the influence of micro-pore structure on oil occurrence using nano-CT scanning and nuclear magnetic resonance technology: An example from Chang 8 tight sandstone reservoir, Jiyuan, Ordos Basin. Processes 11 , 11274 (2023).

Tong, Q. et al. Research on sand body architecture at the intersection of a bidirectional sedimentary system in the Jiyuan area of Ordos Basin. Sci. Rep. 13 , 12261 (2023).

Fu, S. et al. Accurate characterization of full pore size distribution of tight sandstones by low-temperature nitrogen gas adsorption and high-pressure mercury intrusion combination method. Energy Sci. Eng. 9 (1), 80–100 (2021).

Li, P. et al. Occurrence characteristics and main controlling factors of movable fluids in Chang 81 reservoir, Maling Oilfield, Ordos Basin, China. J. Petrol. Explor. Prod. Technol. 9 (1), 17–29 (2018).

Li, C., Chen, G., Li, X., Zhou, Q. & Sun, Z. The occurrence of tight oil in the Chang 8 lacustrine sandstone of the Huaqing area, Ordos Basin, China: Insights into the content of adsorbed oil and its controlling factors. J. Nat. Gas Geosci. 7 (1), 27–37 (2022).

Gong, Y. & Liu, K. Pore throat size distribution and oiliness of tight sands-A case study of the Southern Songliao Basin, China. J. Petrol. Sci. Eng. 184 , 106508 (2020).

Liu, Y. et al. A novel experimental investigation on the occurrence state of fluids in microscale pores of tight reservoirs. J. Petrol. Sci. Eng. 196 , 107656 (2021).

Baviskar, S. N. A quick & automated method for measuring cell area using ImageJ. Am. Biol. Teach. 73 (9), 554–556 (2011).

Curtis, M. E., Cardott, B. J. & Sondergeld, C. H. Development of organic porosity in the Woodford shale with increasing thermal maturity. Int. J. Coal Geol. 103 , 26–31 (2012).

Keller, L. M., Schuetz, P. & Erni, R. Characterization of multi-scale micro-structural features in opalinus clay. Microporous Mesoporous Mater. 83 , 84–90 (2013).

Jin, L. et al. Evolution of porosity and geochemistry in Marcellus Formation black shale during weathering. Chem. Geol. 50 , 51–56 (2013).

Rine, J. M. et al. Comparison of porosity distribution within selected North American shale units by SEM examination of argon-ion-milled samples. Electron Microsc. Shale Hydrocarbon Reserv. AAPG Memoir 102 , 137–152 (2013).

Zhao, J. Y. et al. A quantitative evaluation for well pattern adaptability in ultra-low permeability oil reservoirs: A case study of Triassic Chang 6 and Chang 8 reservoirs in Ordos Basin. Pet. Explor. Dev. 45 (3), 482–488 (2018).

Dong, Q., Dai Yin, Y. & Ya Zhou, Z. Fine classification of ultra-low permeability reservoirs around the Placanticline of Da Qing oilfield (PR of China). J. Petrol. Sci. Eng. 174 , 1042–1052 (2019).

Gao, Y. et al. Application of an analytic hierarchy process to hydro-carbon accumulation coefficient estimation. Petrol. Sci. 7 (3), 337–346 (2010).

Liu, Y. et al. A reservoir quality evaluation approach for tight sandstone reservoirs based on the gray correlation algorithm: A case study of the Chang 6 layer in the W area of the as oilfield, Ordos Basin. Energy Explor. Exploit. 39 (4), 1027–1056 (2021).

Shi, B., Chang, X., Yin, W., Li, Y. & Mao, L. Quantitative evaluation model for tight sandstone reservoirs based on statistical methods—A case study of the Triassic Chang 8 tight sandstones, Zhenjing area, Ordos Basin, China. J. Petrol. Sci. Eng. 173 , 601–616 (2019).

Liu, B. The analytic hierarchy process for the reservoir evaluation in Chaoyanggou oilfield. Adv. Petrol. Explor. Dev. 6 , 46–50 (2014).

Shang, Y. Z. Application of analytical hierarchy process in the low-grade oil reservoirs evaluation. Daqing Petrol. Geol. Oilfield Dev. 33 , 55–59 (2014).

Xi, Y. et al. Application of analytic hierarchy process in mineral prospecting prediction based on an integrated geology—aerogeophysics—geochemistry model. Minerals 13 (7), 978 (2023).

Lai, F. et al. Crushability evaluation of shale gas reservoir based on analytic hierarchy process. Spec. Oil Gas Reserv. 25 (3), 154–159 (2018).

Elhaj, M.A., Imtiaz, S. A., Naterer, G. F. & Zendehboudi, S. Production optimization of hydrocarbon reservoirs by entropy generation minimization. J. Nat. Gas Sci. Eng . 83 , 103538 (2020).

Szabo, N. P. et al. Cluster analysis of core measurements using heterogeneous data sources: An application to complex Miocene reservoirs. J. Petrol. Sci. Eng. 178 , 575–585 (2019).

Oliveira, G. P., Santos, M. D. & Roque, W. L. Constrained clustering approaches to identify hydraulic flow units in petroleum reservoirs. J. Petrol. Sci. Eng. 186 , 106732 (2020).

Jia, A., Wei, Y. & Jin, Y. Progress in key technologies for evaluating marine shale gas development in China. Petrol. Explor. Dev. 43 (6), 1035–1042 (2016).

Xiao, L., Bi, L., Yi, T., Lei, Y. & Wei, Q. Pore structure characteristics and influencing factors of tight reservoirs controlled by different provenance systems: A case study of the Chang 7 members in Heshui and Xin’anbian of the Ordos Basin. Energies 16 , 34108 (2023).

Dong, J. et al. Pore structure and fractal characteristics of tight sandstone: A case study for Huagang Formation in the Xihu Sag, East China Sea Basin, China. Energies 16 , 20134 (2023).

Gao, J. et al. Study on the coupling law between pore-scale fluid flow capacity and pore-throat configuration in tight sandstone reservoirs. Geofluids 2023 (1), 1693773 (2023).

Zhang, R. et al. Microscopic pore structures and their controlling factors of the lower carboniferous Luzhai Shale in Guizhong depression, China. Geofluids 2023 , 8890709 (2023).

Du, M. et al. Study on the quantitative characterization and heterogeneity of pore structure in deep ultra-high pressure tight glutenite reservoirs. Minerals 13 , 6015 (2023).

Wu, B. H. et al. Integrated classification method of tight sandstone reservoir based on principal component analysis-simulated annealing genetic algorithm-fuzzy cluster means. Petrol. Sci. 20 (5), 2747–2758 (2023).

Lu, X., Xing, X., Hu, K. & Zhou, B. Classification and evaluation of tight sandstone reservoirs based on MK-SVM. Processes. 11 (9), 2678 (2023).

Qiu, X. et al. Quantitative evaluation of reservoir quality of tight oil sandstones in Chang 7 member of Ordos Basin. Front. Earth Sci. 10 , 1046489 (2023).

Acknowledgements

This research was sponsored by the Natural Science Basic Research Plan in Shaanxi Province of China (Grant Nos. 2017JM4013 and 2020JQ-798).

Author information

Authors and affiliations.

State Key Laboratory of Continental Dynamics, Northwest University, Xi’an, 710069, China

Xinglei Song, Congjun Feng, Xinhui Pan & Yanlong Ge

Department of Geology, Northwest University, No. 229, Taibai North Road, Xi’an, 710069, Shaanxi, China

School of Petroleum Engineering, Xi’an Shiyou University, Xi’an, 710065, China

Engineering Research Center of Development and Management for Low to Ultra-Low Permeability Oil & Gas Reservoirs in West China, Ministry of Education, Xi’an, 710065, China

Xi’an Key Laboratory of Tight Oil (Shale Oil) Development, Xi’an, 710065, China

PetroChina Research Institute of Petroleum Exploration & Development, Beijing, 100083, People’s Republic of China

School of Petroleum Engineering and Environmental Engineering, Yan’an University, Yan’an, 716000, China

Contributions

Xinglei Song: Investigation, Formal analysis, Conceptualization, Data Curation, Writing-Original Draft; Congjun Feng: Writing-Review & Editing, Supervision, Funding acquisition, Methodology; Teng Li: Investigation, Resources, Data Curation, Writing-Review & Editing; Qin Zhang: Investigation, Resources, Data Curation; Xinhui Pan: Supervision, Project administration; Mengsi Sun: Supervision, Writing-Review & Editing, Project administration; Yanlong Ge: Investigation, Resources, Data Curation. All authors have read and agreed to the published version of the manuscript.

Corresponding author

Correspondence to Congjun Feng .

Ethics declarations

Competing interests.

The authors declare no competing interests.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .

About this article

Cite this article.

Song, X., Feng, C., Li, T. et al. Quantitative classification evaluation model for tight sandstone reservoirs based on machine learning. Sci Rep 14 , 20712 (2024). https://doi.org/10.1038/s41598-024-71351-0

Received : 22 April 2024

Accepted : 27 August 2024

Published : 05 September 2024

DOI : https://doi.org/10.1038/s41598-024-71351-0

Keywords

  • Tight sandstone
  • Pore structure
  • Quantitative evaluation
  • High-pressure mercury injection
  • Image recognition
  • Machine learning


criteria for evaluating a research proposal


Active funding opportunity

NSF 23-601: Research Experiences for Undergraduates (REU)

Document History

  • Posted: June 29, 2023
  • Replaces: NSF 22-601

Program Solicitation NSF 23-601



Directorate for Biological Sciences

Directorate for Computer and Information Science and Engineering

Directorate for STEM Education

Directorate for Engineering

Directorate for Geosciences

Directorate for Mathematical and Physical Sciences

Directorate for Social, Behavioral and Economic Sciences

Directorate for Technology, Innovation and Partnerships

Office of Integrative Activities

Office of International Science and Engineering

Full Proposal Deadline(s) (due by 5 p.m. submitter's local time):

     September 27, 2023

     August 21, 2024

     Third Wednesday in August, Annually Thereafter

Important Information And Revision Notes

The student stipend amount and the generally expected maximum for total project costs (including other student costs) have been increased.

The non-PI faculty/professionals who will serve as research mentors for students are no longer required to be listed as Senior Personnel in REU Site proposals. However, Collaborators & Other Affiliations (COA) documents for anticipated non-PI research mentors must be uploaded into the Additional Single Copy Documents section of the proposal.

Students' names (as coauthors) are no longer required to be labeled with asterisks (*) in bibliographic citations in the Biographical Sketches of the PI and other Senior Personnel.

NSF's Education & Training Application (ETAP) is described and encouraged as a means of managing student applications and collecting student demographic information. Some NSF units may require their REU Sites to use ETAP.

Proposers are reminded of Federal and NSF non-discrimination statutes and regulations (PAPPG Chapter XI.A), which apply to the selection of students for REU opportunities.

A description of a new partnership with the Department of Energy (DOE), which offers the possibility of DOE co-funding for relevant REU Site proposals, has been added to the "Special Opportunities (Partnerships)" section.

Minor edits and reorganizations of text have been made to improve clarity. Links and references have been updated.

Any proposal submitted in response to this solicitation should be submitted in accordance with the NSF Proposal & Award Policies & Procedures Guide (PAPPG) that is in effect for the relevant due date to which the proposal is being submitted. The NSF PAPPG is regularly revised and it is the responsibility of the proposer to ensure that the proposal meets the requirements specified in this solicitation and the applicable version of the PAPPG. Submitting a proposal prior to a specified deadline does not negate this requirement.

Summary Of Program Requirements

General Information

Program Title:

Research Experiences for Undergraduates (REU) Sites and Supplements

The Research Experiences for Undergraduates (REU) program supports active research participation by undergraduate students in any of the areas of research funded by the National Science Foundation. REU projects involve students in meaningful ways in ongoing research programs or in research projects specifically designed for the REU program. This solicitation features two mechanisms for supporting student research: REU Sites are based on independent proposals to initiate and conduct projects that engage a number of students in research. REU Sites may be based in a single discipline or academic department or may offer interdisciplinary or multi-department research opportunities with a coherent intellectual theme. REU Supplements may be included as a component of proposals for new or renewal NSF grants or cooperative agreements or may be requested for ongoing NSF-funded research projects. REU projects with an international dimension are welcome.

Undergraduate student participants in either REU Sites or REU Supplements must be U.S. citizens, U.S. nationals, or U.S. permanent residents. Students do not apply to NSF to participate in REU activities, and NSF does not select students for the opportunities. Investigators who receive REU awards establish their own process for receiving and reviewing applications and selecting students, and students follow the instructions provided by each REU Site or REU Supplement to apply. (In some cases, investigators pre-select students for REU Supplements.) To identify appropriate REU Sites, students should consult the directory of active REU Sites on the Web at https://www.nsf.gov/crssprgm/reu/reu_search.cfm.

Cognizant Program Officer(s):

Please note that the following information is current at the time of publishing. See program website for any updates to the points of contact.

  • NSF REU Site Contacts: https://www.nsf.gov/crssprgm/reu/reu_contacts.jsp

Applicable Catalog of Federal Domestic Assistance (CFDA) Number(s):

  • 47.041 --- Engineering
  • 47.049 --- Mathematical and Physical Sciences
  • 47.050 --- Geosciences
  • 47.070 --- Computer and Information Science and Engineering
  • 47.074 --- Biological Sciences
  • 47.075 --- Social Behavioral and Economic Sciences
  • 47.076 --- STEM Education
  • 47.079 --- Office of International Science and Engineering
  • 47.083 --- Office of Integrative Activities (OIA)
  • 47.084 --- NSF Technology, Innovation and Partnerships

Award Information

Anticipated Type of Award: Standard Grant or Continuing Grant or Cooperative Agreement

Estimated Number of Awards: 1,300 to 1,350

This estimate includes approximately 175 new Site awards and 1,150 new Supplement awards each year.

Anticipated Funding Amount: $84,800,000 in FY 2024. This estimate includes both Sites and Supplements, pending availability of funds.

Eligibility Information

Who May Submit Proposals:

The categories of proposers eligible to submit proposals to the National Science Foundation are identified in the NSF Proposal & Award Policies & Procedures Guide (PAPPG), Chapter I.E. Unaffiliated individuals are not eligible to submit proposals in response to this solicitation.

Who May Serve as PI:

For REU Site proposals, a single individual may be designated as the Principal Investigator. This individual will be responsible for overseeing all aspects of the award. However, one additional person may be designated as Co-Principal Investigator if developing and operating the REU Site would involve such shared responsibility. After a proposal is awarded, some NSF units may allow the addition of more Co-PIs if an exceptional case can be made for why the management of the REU Site must be distributed.

Limit on Number of Proposals per Organization:

There are no restrictions or limits.

Limit on Number of Proposals per PI or co-PI:

Proposal Preparation and Submission Instructions

A. Proposal Preparation Instructions

  • Letters of Intent: Not required
  • Preliminary Proposal Submission: Not required
  • Full Proposals submitted via Research.gov: NSF Proposal and Award Policies and Procedures Guide (PAPPG) guidelines apply. The complete text of the PAPPG is available electronically on the NSF website at: https://www.nsf.gov/publications/pub_summ.jsp?ods_key=pappg .
  • Full Proposals submitted via Grants.gov: NSF Grants.gov Application Guide: A Guide for the Preparation and Submission of NSF Applications via Grants.gov guidelines apply (Note: The NSF Grants.gov Application Guide is available on the Grants.gov website and on the NSF website at: https://www.nsf.gov/publications/pub_summ.jsp?ods_key=grantsgovguide ).

B. Budgetary Information

C. Due Dates

Proposal Review Information Criteria

Merit Review Criteria:

National Science Board approved criteria. Additional merit review criteria apply. Please see the full text of this solicitation for further information.

Award Administration Information

Award Conditions:

Standard NSF award conditions apply.

Reporting Requirements:

Additional reporting requirements apply. Please see the full text of this solicitation for further information.

I. Introduction

Research Experiences for Undergraduates (REU) is a Foundation-wide program that supports active participation in science, engineering, and education research by undergraduate students. REU proposals are welcome in any of the research areas supported by NSF (see https://new.nsf.gov/funding ), including the priority areas and cross-cutting areas that NSF identifies on its website and in its annual Budget Request to Congress ( https://new.nsf.gov/budget ).

The REU program seeks to expand student participation in all kinds of research — both disciplinary and interdisciplinary — encompassing efforts by individual investigators, groups, centers, national facilities, and others. It draws on the integration of research and education to attract a diverse pool of talented students into careers in science and engineering (including teaching and education research related to science and engineering) and to help ensure that these students receive the best education possible.

This solicitation features two mechanisms for support of student research: REU Sites and REU Supplements.

II. Program Description

Research experience is one of the most effective avenues for attracting students to and retaining them in science and engineering and for preparing them for careers in these fields. The REU program, through both Sites and Supplements, aims to provide appropriate and valuable educational experiences for undergraduate students through participation in research. REU projects involve students in meaningful ways in ongoing research programs or in research projects specifically designed for the REU program. REU projects feature high-quality interaction of students with faculty and/or other research mentors and access to appropriate facilities and professional development opportunities.

REU projects offer an opportunity to increase the participation of the full spectrum of the nation's diverse talent in STEM. Several million additional people — specifically, individuals from groups historically underrepresented in STEM fields — are needed for the U.S. science and engineering workforce to reflect the demographics of the U.S. population. (See the reports Vision 2030 [ https://nsf.gov/nsb/publications/vision2030.pdf ], The STEM Labor Force of Today [ https://ncses.nsf.gov/pubs/nsb20212/ ], and Diversity and STEM: Women, Minorities, and Persons with Disabilities [ https://ncses.nsf.gov/pubs/nsf23315/ ].) Reaching these "missing millions" is central to the nation's economic competitiveness and is a priority for NSF.

Historically, the vast majority of REU participants have been junior- or senior-level undergraduates — students who have typically already committed to a major in science or engineering. So that the REU program can succeed in attracting students into science and engineering who might not otherwise consider those majors and careers, projects are encouraged to involve students at earlier stages in their college experience. Some REU projects effectively engage first-year and second-year undergraduates by developing partnerships with community colleges.

NSF welcomes proposals that include efforts to broaden geographic and demographic participation in REU projects. Proposals involving experienced researchers at institutions in EPSCoR-eligible jurisdictions, minority-serving institutions, and emerging research institutions are encouraged.

REU projects may be carried out during the summer months, during the academic year, or both.

International REU Projects

The REU program welcomes projects with an international dimension. International REU Sites (iREUs) or Supplements usually involve a partnership between U.S. researchers and collaborators at a foreign institution or organization. These projects are expected to entail (1) true intellectual collaboration with a foreign partner and (2) benefits to the students from the unique expertise, skills, facilities, phenomena, or other resources that the foreign collaborator or research environment provides. International REU projects generally have higher travel costs and a higher per-student cost than domestic projects. They also often have more complex logistics and require a more complex mentoring arrangement.

Proposals for international REU projects should include a description of the foreign collaborator's role in the project; a Biographical Sketch of up to two pages (in any format) for the foreign collaborator, uploaded in the Other Supplementary Documents section of the proposal; and a letter of collaboration from the foreign institution or organization, which assures that the foreign institution or organization is committed to the collaboration and will give students appropriate access to facilities.

Investigators planning an international REU project should discuss their idea with the relevant program officer — either the REU Site contact for the relevant discipline ( https://www.nsf.gov/crssprgm/reu/reu_contacts.jsp ) in the case of an international REU Site proposal, or the cognizant program officer for the underlying award in the case of an REU Supplement request.

NSF's International Research Experiences for Students (IRES) program, which is managed by NSF's Office of International Science and Engineering (OISE), also supports proposals for cohorts of U.S. students to engage in international research.

Research Experiences for Teachers

NSF encourages research experiences for K-12 teachers of science, technology, engineering, and mathematics and the coordination of these experiences with REU projects. Most directorates support Research Experiences for Teachers (RET) as a formal activity and announce their specific interests (e.g., RET Sites, RET Supplements) either in solicitations, in Dear Colleague Letters, or on directorate/division websites. Other NSF units have no formal announcement but respond to requests for RET support on a case-by-case basis or permit the inclusion of an RET component (with a distinct description and cost breakdown) as part of an REU proposal. Teachers may also be included in an international REU project. Proposers who wish to include an RET component in an REU proposal may wish to contact the appropriate REU program officer for guidance. REU Site proposals that include a significant RET component should begin the project title with the label "REU/RET Site:" to ensure appropriate tracking at NSF.

A. REU SITES

REU Sites are based on independent proposals, submitted for an annual deadline date, to initiate and conduct projects that engage a number of undergraduate students in research.

REU Sites must have a well-defined common focus that enables a cohort experience for students. Sites may be based in a single discipline or academic department or may offer interdisciplinary or multi-department research opportunities with a coherent intellectual theme. (Although interdisciplinary or multi-department proposals must be submitted to a single NSF disciplinary unit, these proposals are often reviewed by two or more NSF units, at the discretion of the NSF program officer who manages the proposal.) A proposal should reflect the unique combination of the proposing organization's interests and capabilities and those of any partnering organizations. Cooperative arrangements among organizations and research settings may be considered so that a project can increase the quality or availability of undergraduate research experiences. To extend research opportunities to a larger number of undergraduates, proposers may incorporate approaches that make use of cyberinfrastructure or other technologies that facilitate research, learning, and collaboration over distances ("virtual projects").

REU Sites are an important means for extending high-quality research environments and mentoring to diverse groups of students. In addition to increasing the participation of students from underrepresented groups in research, the program aims to involve students who might not otherwise have research opportunities, particularly those from academic institutions where research programs in STEM are limited. Thus, a significant fraction of the student participants at an REU Site must come from outside the host institution or organization, and at least half of the student participants must be recruited from academic institutions where research opportunities in STEM are limited (including two-year colleges).

High-quality mentoring for the student participants is very important in REU Sites. Grantees must ensure that research mentors receive appropriate training or instruction, both to promote the quality and success of the students' research and to reinforce expectations for positive, professional interactions between mentors and students. REU Sites should also encourage continued interaction of mentors with students during the academic year, to the extent practicable, to help connect students' research experiences to their overall course of study and to help the students achieve success in courses of study leading to a baccalaureate degree in a STEM field.

Three years is the typical duration for REU Site awards in most NSF directorates; however, a duration of up to five years may be allowed in some cases. New REU Sites are encouraged to apply for no more than three years of funding. Proposals for renewal REU Sites are welcome, but the PI should discuss the project duration with the cognizant program officer prior to requesting support for more than three years. Investigators are reminded that renewal proposals will be reviewed through the normal merit review process and there is no guarantee that a renewal grant will be awarded.

The REU Site Contacts web page ( https://www.nsf.gov/crssprgm/reu/reu_contacts.jsp ) provides contact information for the REU program officers in each NSF disciplinary unit that manages REU Sites, and that page also lists discipline-specific REU web pages for units that have them. Prospective PIs should consult those web pages or the points of contact for more specific information about characteristics of REU Sites that vary by discipline.

Special Opportunities (Partnerships)

Some proposers for REU Sites might be interested in the following opportunities. These are optional; proposals are not required to respond to them.

Partnership with the Department of Defense

For over two decades, NSF has engaged in a partnership with the Department of Defense (DoD) to expand undergraduate research opportunities in DoD-relevant research areas through the REU Sites program. The DoD activity is called Awards to Stimulate and Support Undergraduate Research Experiences (ASSURE). Any proposal submitted to NSF for the REU Sites program that is recommended for funding through the NSF merit review process may be considered by DoD representatives for possible support through ASSURE. Proposals that are selected for the DoD funding will involve DoD-relevant research and may come from any of the NSF directorates or offices that handle REU Site proposals.

A proposer to the NSF REU Sites program does not need to take any additional steps to be considered for funding through ASSURE. Investigators who are interested in the opportunity may e-mail [email protected] with any questions.

Partnership with the Department of Energy

NSF's Engineering Directorate (ENG) engages in a partnership with the Department of Energy (DOE) to expand undergraduate research opportunities in DOE mission-relevant areas through the REU Sites program. REU Site proposals that are managed by ENG will be considered for DOE funding. Such proposals will involve DOE mission-relevant topics, which include, but are not limited to, electric power sector research; clean energy technology research; and risk science, decision science, social science, and data science using power sector data sets.

Proposals that are considered for co-funding by DOE will be shared with DOE staff to assess alignment with DOE's research interests, and the unattributed reviews and panel summaries for those proposals will also be shared with DOE.

A proposer to the REU Sites program in ENG does not need to take any additional steps to be considered for co-funding through this partnership. Investigators who are interested in the opportunity may e-mail [email protected] with any questions.

Partnership with the Semiconductor Research Corporation (SRC)

In early 2022, the Semiconductor Research Corporation (SRC) and NSF's REU Sites program launched a partnership to expand undergraduate research opportunities related to advancements in semiconductors. This partnership fosters the development of a diverse science and engineering workforce skilled in an area of high national priority. Proposals for REU Sites that involve research that advances semiconductors may be supported as part of this partnership and may come from NSF's Directorate for Engineering, Division of Materials Research, Division of Physics, or Division of Chemistry. Research involving the monolithic and heterogeneous integration of 3D integrated devices and circuits is of special interest. Areas of technical interest include, but are not limited to, materials, devices, circuits, wafer fabrication processes and techniques, packaging materials and processes, thermal management and modeling, and integrated photonics, design, and testing. Also relevant are the critical Systems & Technology (S&T) themes described in SRC's JUMP 2.0 research announcement and resulting JUMP 2.0 research center selections .

Proposals that are considered for co-funding by SRC will be shared with SRC staff to assess alignment with SRC's research interests, and the unattributed reviews and panel summaries for those proposals will also be shared with SRC.

A proposer to the NSF REU Sites program does not need to take any additional steps to be considered for co-funding through this partnership. Investigators who are interested in the opportunity may e-mail [email protected] with any questions.

B. REU SUPPLEMENTS

An REU Supplement typically provides support for one or two undergraduate students to participate in research as part of a new or ongoing NSF-funded research project. However, centers or large research efforts may request support for a number of students commensurate with the size and nature of the project. REU Supplements are supported by the various research programs throughout the Foundation, including programs such as Small Business Innovation Research (SBIR).

High-quality mentoring is important in REU Supplements, just as it is in REU Sites, and investigators should give serious attention not only to developing students' research skills but also to involving them in the culture of research in the discipline and connecting their research experience with their overall course of study.

Investigators are reminded that support for undergraduate students involved in carrying out research under NSF awards should be included as part of the research proposal itself instead of as a post-award supplement to the research proposal, unless such undergraduate participation was not foreseeable at the time of the original proposal.

A request for an REU Supplement may be submitted in either of two ways: (1) Proposers may include an REU Supplement activity as a component of a new (or renewal) research proposal to NSF. For guidance, contact the program officer who manages the research program to which the proposal would be submitted. (2) Investigators holding an existing NSF research award may submit a post-award request for supplemental funding. For guidance, contact the cognizant program officer for the NSF grant or cooperative agreement that would be supplemented.

For a post-award REU Supplement request, the duration may not exceed the term of the underlying research project.

III. Award Information

An REU activity may be funded as a standard or continuing grant (for REU Sites), as a supplement to an existing award, or as a component of a new or renewal grant or cooperative agreement. REU Sites and Supplements are funded by various disciplinary and education research programs throughout NSF, and the number of awards made varies across the Foundation from year to year, as does the amount of funds invested.

Three years is the typical duration for REU Site awards in most NSF units; however, a duration of up to five years may be allowed in some cases. The typical REU Site hosts 8-10 students per year. The typical funding amount is $100,000-$155,000 per year, although NSF does not dictate a firm upper (or lower) limit for the amount, which depends on the number of students hosted and the number of weeks.

The REU experience is a research training experience paid via a stipend, not employment (work) paid with a salary or wage. In this case, the student's training consists of closely mentored independent research. For administrative convenience, organizations may choose to issue payments to REU students using their normal payroll system. (This is an option, not a recommendation. The mechanism used to pay the stipend does not affect the nature of the student activity.) The funds received by students may be taxable income under the Internal Revenue Code of 1986 and may also be subject to state or local taxes. Please consult the Internal Revenue Service (IRS) for additional information. Students might find the IRS's "Tax Benefits for Education" website to be particularly helpful.

The estimated program budget, number of awards, and average award size/duration are subject to the availability of funds.

IV. Eligibility Information

Additional Eligibility Info:

Eligible Student Participants: Undergraduate student participants supported with NSF funds in either REU Supplements or REU Sites must be U.S. citizens, U.S. nationals, or U.S. permanent residents. An undergraduate student is a student who is enrolled in a degree program (part-time or full-time) leading to a baccalaureate or associate degree. Students who are transferring from one college or university to another and are enrolled at neither institution during the intervening summer may participate. High school graduates who have been accepted at an undergraduate institution but who have not yet started their undergraduate study are also eligible to participate. Students who have received their bachelor's degrees and are no longer enrolled as undergraduates are generally not eligible to participate. Some NSF directorates/divisions encourage inclusion in the REU program of K-12 teachers of science, technology, engineering, and mathematics. Please contact the appropriate disciplinary program officer for guidance.

For REU Sites, a significant fraction of the student participants should come from outside the host institution or organization. Within the framework of the basic eligibility guidelines outlined above, most REU Sites and Supplements further define recruitment and selection criteria, based on the nature of the particular research and other factors. Investigators are reminded that they may not use race, ethnicity, sex, age, or disability status as an eligibility criterion. Selection of REU participants must be done in compliance with non-discrimination statutes and regulations; see PAPPG Chapter XI.A.

Eligibility Restrictions Associated with the SRC-NSF Partnership: Because of the partnership between the Semiconductor Research Corporation (SRC) and the REU Sites program, SRC and its employees and assignees are ineligible to be involved in any proposals submitted to this solicitation, including as unfunded collaborators, via letters of collaboration or support, or through other means. Employees of SRC member companies (see below) are eligible to be involved in proposals submitted to this solicitation, including as unfunded collaborators, via letters of collaboration, or through other means. REU Site proposals involving employees of SRC member companies participating in the SRC-REU partnership activity are not eligible to receive SRC co-funding but may be funded using NSF REU funds. Participating SRC member companies include Analog Devices, Arm, Boeing, EMD Electronics, GlobalFoundries, HRL Laboratories, IBM, Intel, MediaTek, Micron, Qorvo, Raytheon Technologies, Samsung, SK hynix, and TSMC.

V. Proposal Preparation And Submission Instructions

Full Proposal Preparation Instructions: Proposers may opt to submit proposals in response to this Program Solicitation via Research.gov or Grants.gov.

  • Full Proposals submitted via Research.gov: Proposals submitted in response to this program solicitation should be prepared and submitted in accordance with the general guidelines contained in the NSF Proposal and Award Policies and Procedures Guide (PAPPG). The complete text of the PAPPG is available electronically on the NSF website at: https://www.nsf.gov/publications/pub_summ.jsp?ods_key=pappg . Paper copies of the PAPPG may be obtained from the NSF Publications Clearinghouse, telephone (703) 292-8134 or by e-mail from [email protected] . The Prepare New Proposal setup will prompt you for the program solicitation number.
  • Full proposals submitted via Grants.gov: Proposals submitted in response to this program solicitation via Grants.gov should be prepared and submitted in accordance with the NSF Grants.gov Application Guide: A Guide for the Preparation and Submission of NSF Applications via Grants.gov . The complete text of the NSF Grants.gov Application Guide is available on the Grants.gov website and on the NSF website at: ( https://www.nsf.gov/publications/pub_summ.jsp?ods_key=grantsgovguide ). To obtain copies of the Application Guide and Application Forms Package, click on the Apply tab on the Grants.gov site, then click on the Apply Step 1: Download a Grant Application Package and Application Instructions link and enter the funding opportunity number, (the program solicitation number without the NSF prefix) and press the Download Package button. Paper copies of the Grants.gov Application Guide also may be obtained from the NSF Publications Clearinghouse, telephone (703) 292-8134 or by e-mail from [email protected] .

In determining which method to utilize in the electronic preparation and submission of the proposal, please note the following:

Collaborative Proposals. All collaborative proposals submitted as separate submissions from multiple organizations must be submitted via Research.gov. PAPPG Chapter II.E.3 provides additional information on collaborative proposals.

See PAPPG Chapter II.D.2 for guidance on the required sections of a full research proposal submitted to NSF. Please note that the proposal preparation instructions provided in this program solicitation may deviate from the PAPPG instructions.

Note that the REU Site Contacts web page ( https://www.nsf.gov/crssprgm/reu/reu_contacts.jsp ) provides contact information for the REU program officers in each NSF disciplinary unit that manages REU Sites, and that page also lists discipline-specific REU web pages for units that have them. Prospective PIs should consult those web pages or the points of contact for more specific information about characteristics of REU Sites that vary by discipline.

A. PROPOSAL FOR REU SITE

The following instructions supplement those found in the PAPPG or NSF Grants.gov Application Guide.

Proposal Setup: In Research.gov, select "Prepare New Full Proposal" or "Prepare New Renewal Proposal" (* see Note below), as appropriate. Search for and select this Funding Opportunity in Step 1 of the proposal preparation wizard. (Grants.gov users: The program solicitation will be pre-populated by Grants.gov on the NSF Grant Application Cover Page.) Select the Directorate/Office to which the proposal is directed, and if applicable, select the appropriate Division(s).

If the proposal has an interdisciplinary/cross-disciplinary research focus, choose the Directorate/Office/Division that seems most relevant (often this is the unit corresponding to the departmental affiliation of the Principal Investigator), and NSF staff will ensure that the proposal is reviewed by individuals who have expertise that is appropriate to the proposal's content. (Often such proposals are co-reviewed by two or more NSF disciplinary units.)

The REU-associated program within the Division(s) that you selected will appear automatically in the Program field in Research.gov. (Grants.gov users should refer to Section VI.1.2. of the NSF Grants.gov Application Guide for specific instructions on how to designate the NSF Unit of Consideration.)

* Note: If the proposal is requesting continued funding for a previously funded REU Site but you were not the PI or Co-PI on the previous award, Research.gov will not allow preparation of the proposal as a "Renewal Proposal"; you will need to use the "Full Proposal" option. However, the relevant "Project Element" in the Project Summary (see below) should indicate that the proposal is a "renewal," and the outcomes of the previous Site should be described in the "Results from Prior NSF Support" section of the Project Description.

Proposal Title. Begin the Proposal Title with the label "REU Site:" and carefully choose a title that will permit prospective student applicants to easily identify the focus of the site.

Personnel (Cover Sheet). A single individual should be designated as the Principal Investigator (PI); this individual will be responsible for overseeing all aspects of the award. One additional person may be designated as Co-PI if developing and operating the REU Site would involve such shared responsibility.

Project Summary (limited to one page). The "Overview" section of the Project Summary must begin with the following list of "Project Elements":

PROJECT ELEMENTS:

  • New REU Site, or renewal of previously funded REU Site (provide previous NSF Award Number)? (* see Note at the end of "Proposal Setup" above)
  • Project title (as shown on Cover Sheet): "REU Site: ..."
  • Principal Investigator:
  • Submitting organization:
  • Other organizations involved in the project's operation:
  • Location(s) (universities, national labs, field stations, etc.) at which the proposed undergraduate research will occur:
  • Main field(s), sub-field(s), and keywords describing the research topic(s):
  • No. of undergraduate participants per year:
  • Summer REU Site, or academic year REU Site?:
  • No. of weeks per year that the students will participate:
  • Does the project include an international component or an RET component?:
  • Name, phone number, and e-mail address of point of contact for student applicants:
  • Web address (URL) for information about the REU Site (if known):

In the remainder of the Project Summary, briefly describe the project's objectives, activities, students to be recruited, and intended impact. Provide separate statements on the intellectual merit and broader impacts of the proposed activity, as required by the PAPPG.

Project Description. Address items "(a)" through "(g)" below. The Project Description must not exceed 15 pages and must contain a separate section labeled "Broader Impacts" within the narrative.

(a) Overview. Provide a brief description of the objectives of the proposed REU Site, targeted student participants, intellectual focus, organizational structure, timetable, and participating organizations' commitment to the REU activity.

(b) Nature of Student Activities. Proposals should address the approach to undergraduate research training being taken and should provide detailed descriptions of examples of research projects that students will pursue. So that reviewers can evaluate intellectual merit, this discussion should indicate the significance of the research area and, when appropriate, the underlying theoretical framework, hypotheses, research questions, etc. Undergraduate research experiences have their greatest impact in situations that lead the students from a relatively dependent status to as independent a status as their competence warrants. Proposals must present plans that will ensure the development of student-faculty interaction and student-student communication. Development of collegial relationships and interactions is an important part of the project.

(c) The Research Environment. This subsection should describe the history and characteristics of the host organization(s) or research setting(s) with respect to supporting undergraduate research. This subsection should also outline the expertise, experience, and history of involvement with undergraduate research of the PI and the faculty who are anticipated to serve as research mentors. The description should include information on the record of the research mentors in publishing work involving undergraduate authors and in providing professional development opportunities for student researchers. This subsection should also discuss the diversity of the mentor pool and any plans by which mentoring relationships will be sustained after students leave the REU Site.

(d) Student Recruitment and Selection. The overall quality of the student recruitment and selection processes and criteria will be an important element in the evaluation of the proposal. The recruitment plan should be described with as much specificity as possible, including the types and/or names of academic institutions where students will be recruited and the efforts that will be made to attract members of underrepresented groups (women, minorities, and persons with disabilities). Investigators are encouraged to conduct comprehensive outreach, awareness, and recruitment efforts to encourage students representing the full spectrum of diverse talent in STEM to apply for REU opportunities. In general, the goal should be to achieve a diverse pool of applicants and then to consider all eligible applicants in that diverse pool when selecting students for the opportunities.

Mention how the Site will receive applications. Be aware that NSF offers the NSF Education & Training Application (ETAP) as one approach, as described in Section VII.C. (Reporting Requirements) below. (Use of ETAP may be required by some NSF units.)

A significant fraction of the student participants at an REU Site must come from outside the host institution or organization, and at least half of the student participants must be recruited from academic institutions where research opportunities in STEM are limited (including two-year colleges). The number of students per project should be appropriate to the institutional or organizational setting and to the manner in which research is conducted in the discipline. The typical REU Site hosts eight to ten students per year. Proposals involving fewer than six students per year are discouraged.

Undergraduate student participants supported with NSF funds in either REU Sites or REU Supplements must be U.S. citizens, U.S. nationals, or U.S. permanent residents.

Investigators are reminded that they may not use race, ethnicity, sex, age, or disability status as an eligibility criterion for applicants. Selection of REU participants must be done in compliance with non-discrimination statutes and regulations; see PAPPG Chapter XI.A.

(e) Student and Mentor Professional Development and Expectations of Behavior. This subsection should describe (1) plans for student professional development, including training in the responsible and ethical conduct of research; (2) how research mentors have been or will be selected; (3) the training, mentoring, or monitoring that research mentors have received or will receive to help them mentor students effectively during the research experience; and (4) the REU Site's plans for communicating information on expectations of behavior to ensure a safe, respectful, inclusive, harassment-free environment for all participants.

NSF does not tolerate sexual harassment, or any other form of harassment, where NSF-funded activities take place. Proposers are required to have a policy or code of conduct that addresses sexual harassment, other forms of harassment, and sexual assault. Proposers must provide an orientation for all participants in the REU Site (REU students, faculty, postdocs, graduate students, other research mentors, etc.) to cover expectations of behavior to ensure a safe and respectful environment for all participants, and to review the organization's policy or code of conduct addressing sexual harassment, other forms of harassment, and sexual assault, including reporting and complaint procedures. For additional information, see the NSF policies at https://www.nsf.gov/od/oecr/harassment.jsp and the "Promising Practices" at https://www.nsf.gov/od/oecr/promising_practices/index.jsp .

For REU Sites that will involve research off-campus or off-site, proposers are reminded that when submitting the proposal, the AOR must complete a certification that the organization has a plan in place to ensure a safe and inclusive working environment for the REU project, as described in PAPPG Chapter II.E.9.

(f) Project Evaluation and Reporting. Describe the plan to measure qualitatively and quantitatively the success of the project in achieving its goals, particularly the degree to which students have learned and their perspectives on science, engineering, or education research related to these disciplines have been expanded. Evaluation may involve periodic measures throughout the project to ensure that it is progressing satisfactorily according to the project plan, and may involve pre-project and post-project measures aimed at determining the degree of student learning that has been achieved. In addition, it is highly desirable to have a structured means of tracking participating students beyond graduation, with the aim of gauging the degree to which the REU Site experience has been a lasting influence in the students' career paths. Proposers may wish to consult The 2010 User-Friendly Handbook for Project Evaluation for guidance on the elements in a good evaluation plan. Although not required, REU Site PIs may wish to engage specialists in education research (from their organization or another one) in planning and implementing the project evaluation.

(g) Results from Prior NSF Support (if applicable). If the PI has received NSF support within the past five years, or if the proposal is requesting renewal of an existing REU Site, or if the department or center (or similar organizational subunit) that will host the proposed Site has hosted another REU Site during the past five years, provide information about the prior support as described in PAPPG Chapter II.D.2.d.(iii).

The REU program is particularly interested in the outcomes of the related prior REU Site award (if any). Those outcomes should be described in sufficient detail to permit reviewers to reach an informed conclusion regarding the value of the results achieved. Valuable information typically includes results from the project evaluation; summary information about recruiting efforts and the number of applicants, the demographic make-up of participants and their home institutions, and career choices of participants; and a list of publications or reports (already published or to be submitted) resulting from the NSF award.

References Cited. A list of bibliographic citations relevant to the proposal must be included.

Budget and Budget Justification. The focus of REU Sites is the student experience, and the budget must reflect this principle. Project costs must be predominantly for student support, which usually includes such items as participant stipends, housing, meals, travel, and laboratory use fees. Costs in budget categories outside Participant Support must be modest and reasonable. For example, for summer REU Sites, many NSF units consider up to one month of salary for the PI, or distributed among the PI and other research mentors, to be appropriate for time spent administering and coordinating the REU Site, training mentors, and similar operational activities. Other NSF units consider slightly larger salary requests to be appropriate. (NSF expects that research mentors will be supported with appropriate salary for their research activities, though not necessarily through the REU grant.) Some budgets include costs for limited travel by project personnel and for various activities that enhance students' professional development.

An REU Site may not charge students an application fee. An REU Site may not charge students tuition, or include tuition in the proposal budget, as a requirement for participation (although it is permissible to offer students the option of earning academic credit for participation). An REU Site may not charge students for access to common campus facilities such as libraries or athletic facilities.

Student stipends for summer REU Sites are expected to be approximately $700 per student per week. Other student costs include housing, meals, travel, and laboratory use fees and usually vary depending on the location of the site. Amounts for academic-year REU Sites should be comparable on a pro rata basis. All student costs should be entered as Participant Support Costs. Indirect costs (F&A) are not allowed on Participant Support Costs.

Total project costs — including all direct costs and indirect costs — are generally expected not to exceed $1,550 per student per week. However, projects that involve exceptional circumstances, such as international activities, field work in remote locations, a Research Experiences for Teachers (RET) component, etc., may exceed this limit.
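For illustration only, the following minimal sketch (in Python, not part of the solicitation) shows how a proposer might sanity-check a draft summer budget against the stipend and per-student-per-week figures above. The eight-student, ten-week scenario and all non-stipend dollar amounts in the example are assumptions, not program requirements.

    # Illustrative sketch only. The $700 stipend and $1,550 per-student-per-week
    # ceiling come from this solicitation; the scenario (8 students, 10 weeks) and
    # the other cost figures are assumptions invented for the example.

    STIPEND_PER_STUDENT_WEEK = 700     # expected summer stipend, USD
    CEILING_PER_STUDENT_WEEK = 1550    # generally expected maximum total cost, USD

    def check_site_budget(students, weeks, other_participant_costs, other_direct, indirect):
        """Return (participant support, total cost, ceiling, fits within ceiling).

        other_participant_costs: housing, meals, travel, lab use fees, etc.
        other_direct: salary, materials, and other non-participant direct costs
        indirect: F&A on non-participant costs (no F&A is allowed on participant support)
        """
        stipends = STIPEND_PER_STUDENT_WEEK * students * weeks
        participant_support = stipends + other_participant_costs
        total = participant_support + other_direct + indirect
        ceiling = CEILING_PER_STUDENT_WEEK * students * weeks
        return participant_support, total, ceiling, total <= ceiling

    # Assumed example: 8 students for 10 weeks, with invented non-stipend costs.
    support, total, ceiling, fits = check_site_budget(
        students=8, weeks=10,
        other_participant_costs=40_000,  # assumed housing, meals, travel, fees
        other_direct=15_000,             # assumed salary, supplies, student activities
        indirect=7_500,                  # assumed F&A on the non-participant costs
    )
    print(support, total, ceiling, fits)  # 96000 118500 124000 True

Under these assumptions, the roughly $118,500 total stays below the $124,000 guideline for an eight-student, ten-week site and is consistent with the typical $100,000-$155,000 annual award size noted in Section III.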

The Budget Justification should explain and justify all major cost items, including any unusual costs or exceptional circumstances, and should address the cost-effectiveness of the project. As noted above, projects that involve an international component or field work in remote locations often have larger budgets than other projects. This feature is understandable, but the extra costs, with detailed breakdown, should be described in the Budget Justification.

So as not to create a financial hardship for students, REU Sites are encouraged to pay students their stipend and living expenses on a regular basis or at least on an incremental basis — not, for example, in a lump sum at the end of the summer.

Although the informal seminars, field trips, and similar gatherings through which students interact and become attuned to the culture of research and their discipline are often vital to the success of undergraduate research experiences, proposers are reminded that costs of entertainment, amusement, diversion, and social activities, and any expenses directly associated with such activities (such as meals, lodging, rentals, transportation, and gratuities), are unallowable in the proposal budget. Federal/NSF funds may not be used to support these expenses. However, costs of "working meals" at seminars and other events at which student participation is required and for which there is a formal agenda are generally allowable.

When preparing proposals, PIs are encouraged to consult the discipline-specific web pages (for units that have them) or to contact the appropriate disciplinary REU program officer (see https://www.nsf.gov/crssprgm/reu/reu_contacts.jsp ) with any questions about the budget or the appropriateness of charges in it.

Facilities, Equipment, and Other Resources. Complete this section in accordance with the instructions in the PAPPG.

Senior Personnel Documents. Provide Biographical Sketches, Current & Pending Support information, and Collaborators & Other Affiliations information for Senior Personnel.

The REU program no longer requires that non-PI faculty/professionals who are anticipated to serve as research mentors be designated as Senior Personnel. Therefore, Biographical Sketches and Current & Pending Support information for those faculty/professionals are not required. The program also no longer requires that students' names (as coauthors) be labeled with an asterisk (*) in Biographical Sketches. As indicated above, the Project Description should list the anticipated research mentors and outline their expertise, experience, and history of mentoring undergraduates in research.

However, to assist NSF in managing reviewer selection, Collaborators & Other Affiliations information is required for each anticipated non-PI research mentor. Use the COA Excel template to collect this information for each mentor, convert each .xlsx file to PDF, and upload the PDF files in the Additional Single Copy Documents section of the proposal (instead of the Senior Personnel Documents section).

Data Management Plan. Complete this section in accordance with the instructions in the PAPPG.

Postdoctoral Mentoring Plan. If applicable, complete this section in accordance with the instructions in the PAPPG.

Other Supplementary Documents. The proposal may include up to ten signed letters of collaboration documenting collaborative arrangements of significance to the proposal (see PAPPG Chapter II.D.2.i(iv)). For an international REU Site, a letter of collaboration from the foreign institution or organization should be included. The letters may be scanned and uploaded into the Other Supplementary Documents section.

For an international REU Site proposal, a Biographical Sketch of up to two pages (in any format) for the foreign collaborator should be included in the Other Supplementary Documents section.

If the project will employ an external evaluator, a Biographical Sketch of up to two pages (in any format) for that professional may be included in the Other Supplementary Documents section.

Additional Single Copy Documents. As indicated above, a Collaborators & Other Affiliations document for each anticipated non-PI research mentor must be uploaded (as a PDF file) into the Additional Single Copy Documents section.

B. REQUEST FOR REU SUPPLEMENT

Many of the research programs throughout the Foundation support REU activities that are requested either (1) as a component of a new (or renewal) research proposal or (2) as a post-award supplement to an existing grant or cooperative agreement. Specific guidance for the use of either mechanism is given in the last two paragraphs of this section (below).

Contacts: For guidance about preparing an REU Supplement request as a component of a new (or renewal) research proposal, contact the program officer who manages the relevant research program. For guidance about preparing an REU Supplement request for an existing NSF award, contact the program officer assigned to the NSF award that would be supplemented. Do not contact the list of disciplinary REU program officers at https://www.nsf.gov/crssprgm/reu/reu_contacts.jsp about REU Supplements.

Regardless of which mechanism is used to request an REU Supplement, the description of the REU activity should discuss the following: (1) the nature of each prospective student's involvement in the research project; (2) the experience of the PI (or other prospective research mentors) in involving undergraduates in research, including any previous REU Supplement support and the outcomes from that support; (3) the nature of the mentoring that the student(s) will receive; and (4) the process and criteria for selecting the student(s). If a student has been pre-selected (as might be true in the case of a supplement for an ongoing award), then the grounds for selection and a brief Biographical Sketch of the student should be included. (PIs are reminded that the student[s] must be a U.S. citizen, U.S. national, or U.S. permanent resident.)

Normally, funds may be requested for up to two students, but exceptions will be considered for training additional qualified students who are members of underrepresented groups. Centers or large research efforts may request support for a number of students commensurate with the size and nature of the project.

Student stipends for summer projects are expected to be comparable to those of REU Site participants, approximately $700 per student per week. Other student costs include housing, meals, travel, and laboratory use fees and usually vary depending on location. Amounts for academic-year projects should be comparable on a pro rata basis.

Total costs for a summer — including all direct costs and indirect costs — are generally expected not to exceed $1,550 per student per week. However, projects that involve international activities, field work in remote locations, or other exceptional circumstances may exceed this limit.
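As a rough, illustrative calculation only (the ten-week scenario is an assumption, not a program requirement), a two-student, ten-week summer supplement would involve approximately 2 × 10 × $700 = $14,000 in stipends, while its total request, including all other student costs, other direct costs, and indirect costs, would generally be expected to stay within 2 × 10 × $1,550 = $31,000.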

Results from any REU Supplement activities must be included in the annual project report for the associated award. The term of an REU Supplement may not exceed that of the associated award.

A request for an REU Supplement as part of a proposal for a new or renewal grant or cooperative agreement should be embedded in the proposal as follows. Include a description of the REU activity (namely, the information described above in the fourth paragraph under the subheading "B. REQUEST FOR REU SUPPLEMENT") in the Other Supplementary Documents section. Limit this description to three pages. Include the budget for the REU activity in the yearly project budget. Enter all student costs under Participant Support Costs. (Indirect costs [F&A] are not allowed on Participant Support Costs.) As part of the Budget Justification, provide a separate explanation of the REU Supplement request, with the proposed student costs itemized and justified and a total given for the items plus associated indirect costs.

If the intent is to engage students as technicians, then an REU Supplement is not the appropriate support mechanism; instead, support should be entered on the Undergraduate Students line of the proposal budget.

A request for an REU Supplement to an existing NSF award may be submitted if the need for the undergraduate student support was not foreseen at the time of the original proposal submission. Before preparing a request for supplemental funding, the PI should discuss it with the cognizant program officer for the award unless the PI is responding to a Dear Colleague Letter or other announcement that specifically calls for REU Supplement requests.

The PI should prepare the request in Research.gov in accordance with the guidelines found in the PAPPG. The following instructions supplement those found in the PAPPG. After logging into Research.gov, choose "Supplemental Funding Requests" (under "Awards & Reporting") and then "Prepare New Supplement." Next, select the award to be supplemented. In the form entitled "Summary of Proposed Work," state that this is a request for an REU Supplement. In the form entitled "Justification for Supplemental Funding," include the information described above in the fourth paragraph under the subheading "B. REQUEST FOR REU SUPPLEMENT"; limit your response to three pages. If an REU student has been pre-selected, you may upload a Biographical Sketch for the student (up to two pages, in any format) in the Other Supplementary Documents section.

Prepare a budget, including a justification of the funds requested for student support and their proposed use. All student costs should be entered as Participant Support Costs (Line F) in the proposal budget. (Indirect costs [F&A] are not allowed on Participant Support Costs.)

Cost Sharing:

Inclusion of voluntary committed cost sharing is prohibited.

Indirect Cost (F&A) Limitations:

Recovery of indirect costs (F&A) is prohibited on Participant Support Costs in REU Site proposals and requests for REU Supplements.

Other Budgetary Limitations:

For summer REU projects, the total budget request — including all direct costs and indirect costs — is generally expected not to exceed $1,550 per student per week. (The budget request for an academic-year REU project should be comparable on a pro rata basis.) However, projects that involve exceptional circumstances, such as international activities, field work in remote locations, a Research Experience for Teachers (RET) component, etc., may exceed this limit.

D. Research.gov/Grants.gov Requirements

For Proposals Submitted Via Research.gov:

To prepare and submit a proposal via Research.gov, see detailed technical instructions available at: https://www.research.gov/research-portal/appmanager/base/desktop?_nfpb=true&_pageLabel=research_node_display&_nodePath=/researchGov/Service/Desktop/ProposalPreparationandSubmission.html . For Research.gov user support, call the Research.gov Help Desk at 1-800-673-6188 or e-mail [email protected] . The Research.gov Help Desk answers general technical questions related to the use of the Research.gov system. Specific questions related to this program solicitation should be referred to the NSF program staff contact(s) listed in Section VIII of this funding opportunity.

For Proposals Submitted Via Grants.gov:

Before using Grants.gov for the first time, each organization must register to create an institutional profile. Once registered, the applicant's organization can then apply for any federal grant on the Grants.gov website. Comprehensive information about using Grants.gov is available on the Grants.gov Applicant Resources webpage: https://www.grants.gov/web/grants/applicants.html . In addition, the NSF Grants.gov Application Guide (see link in Section V.A) provides instructions regarding the technical preparation of proposals via Grants.gov. For Grants.gov user support, contact the Grants.gov Contact Center at 1-800-518-4726 or by email: [email protected] . The Grants.gov Contact Center answers general technical questions related to the use of Grants.gov. Specific questions related to this program solicitation should be referred to the NSF program staff contact(s) listed in Section VIII of this solicitation.

Submitting the Proposal: Once all documents have been completed, the Authorized Organizational Representative (AOR) must submit the application to Grants.gov and verify the desired funding opportunity and agency to which the application is submitted. The AOR must then sign and submit the application to Grants.gov. The completed application will be transferred to Research.gov for further processing.

Proposers that submitted via Research.gov may use Research.gov to verify the status of their submission to NSF. For proposers that submitted via Grants.gov, until an application has been received and validated by NSF, the Authorized Organizational Representative may check the status of an application on Grants.gov. After proposers have received an e-mail notification from NSF, Research.gov should be used to check the status of an application.

VI. NSF Proposal Processing And Review Procedures

Proposals received by NSF are assigned to the appropriate NSF program for acknowledgement and, if they meet NSF requirements, for review. All proposals are carefully reviewed by a scientist, engineer, or educator serving as an NSF Program Officer, and usually by three to ten other persons outside NSF either as ad hoc reviewers, panelists, or both, who are experts in the particular fields represented by the proposal. These reviewers are selected by Program Officers charged with oversight of the review process. Proposers are invited to suggest names of persons they believe are especially well qualified to review the proposal and/or persons they would prefer not review the proposal. These suggestions may serve as one source in the reviewer selection process at the Program Officer's discretion. Submission of such names, however, is optional. Care is taken to ensure that reviewers have no conflicts of interest with the proposal. In addition, Program Officers may obtain comments from site visits before recommending final action on proposals. Senior NSF staff further review recommendations for awards. A flowchart that depicts the entire NSF proposal and award process (and associated timeline) is included in PAPPG Exhibit III-1.

A comprehensive description of the Foundation's merit review process is available on the NSF website at: https://www.nsf.gov/bfa/dias/policy/merit_review/ .

Proposers should also be aware of core strategies that are essential to the fulfillment of NSF's mission, as articulated in Leading the World in Discovery and Innovation, STEM Talent Development and the Delivery of Benefits from Research - NSF Strategic Plan for Fiscal Years (FY) 2022 - 2026 . These strategies are integrated in the program planning and implementation process, of which proposal review is one part. NSF's mission is particularly well-implemented through the integration of research and education and broadening participation in NSF programs, projects, and activities.

One of the strategic objectives in support of NSF's mission is to foster integration of research and education through the programs, projects, and activities it supports at academic and research institutions. These institutions must recruit, train, and prepare a diverse STEM workforce to advance the frontiers of science and participate in the U.S. technology-based economy. NSF's contribution to the national innovation ecosystem is to provide cutting-edge research under the guidance of the Nation's most creative scientists and engineers. NSF also supports development of a strong science, technology, engineering, and mathematics (STEM) workforce by investing in building the knowledge that informs improvements in STEM teaching and learning.

NSF's mission calls for the broadening of opportunities and expanding participation of groups, institutions, and geographic regions that are underrepresented in STEM disciplines, which is essential to the health and vitality of science and engineering. NSF is committed to this principle of diversity and deems it central to the programs, projects, and activities it considers and supports.

A. Merit Review Principles and Criteria

The National Science Foundation strives to invest in a robust and diverse portfolio of projects that creates new knowledge and enables breakthroughs in understanding across all areas of science and engineering research and education. To identify which projects to support, NSF relies on a merit review process that incorporates consideration of both the technical aspects of a proposed project and its potential to contribute more broadly to advancing NSF's mission "to promote the progress of science; to advance the national health, prosperity, and welfare; to secure the national defense; and for other purposes." NSF makes every effort to conduct a fair, competitive, transparent merit review process for the selection of projects.

1. Merit Review Principles

These principles are to be given due diligence by PIs and organizations when preparing proposals and managing projects, by reviewers when reading and evaluating proposals, and by NSF program staff when determining whether or not to recommend proposals for funding and while overseeing awards. Given that NSF is the primary federal agency charged with nurturing and supporting excellence in basic research and education, the following three principles apply:

  • All NSF projects should be of the highest quality and have the potential to advance, if not transform, the frontiers of knowledge.
  • NSF projects, in the aggregate, should contribute more broadly to achieving societal goals. These "Broader Impacts" may be accomplished through the research itself, through activities that are directly related to specific research projects, or through activities that are supported by, but are complementary to, the project. The project activities may be based on previously established and/or innovative methods and approaches, but in either case must be well justified.
  • Meaningful assessment and evaluation of NSF funded projects should be based on appropriate metrics, keeping in mind the likely correlation between the effect of broader impacts and the resources provided to implement projects. If the size of the activity is limited, evaluation of that activity in isolation is not likely to be meaningful. Thus, assessing the effectiveness of these activities may best be done at a higher, more aggregated, level than the individual project.

With respect to the third principle, even if assessment of Broader Impacts outcomes for particular projects is done at an aggregated level, PIs are expected to be accountable for carrying out the activities described in the funded project. Thus, individual projects should include clearly stated goals, specific descriptions of the activities that the PI intends to do, and a plan in place to document the outputs of those activities.

These three merit review principles provide the basis for the merit review criteria, as well as a context within which the users of the criteria can better understand their intent.

2. Merit Review Criteria

All NSF proposals are evaluated through use of the two National Science Board approved merit review criteria. In some instances, however, NSF will employ additional criteria as required to highlight the specific objectives of certain programs and activities.

The two merit review criteria are listed below. Both criteria are to be given full consideration during the review and decision-making processes; each criterion is necessary but neither, by itself, is sufficient. Therefore, proposers must fully address both criteria. (PAPPG Chapter II.D.2.d(i). contains additional information for use by proposers in development of the Project Description section of the proposal). Reviewers are strongly encouraged to review the criteria, including PAPPG Chapter II.D.2.d(i), prior to the review of a proposal.

When evaluating NSF proposals, reviewers will be asked to consider what the proposers want to do, why they want to do it, how they plan to do it, how they will know if they succeed, and what benefits could accrue if the project is successful. These issues apply both to the technical aspects of the proposal and the way in which the project may make broader contributions. To that end, reviewers will be asked to evaluate all proposals against two criteria:

  • Intellectual Merit: The Intellectual Merit criterion encompasses the potential to advance knowledge; and
  • Broader Impacts: The Broader Impacts criterion encompasses the potential to benefit society and contribute to the achievement of specific, desired societal outcomes.

The following elements should be considered in the review for both criteria:

  • What is the potential for the proposed activity to: advance knowledge and understanding within its own field or across different fields (Intellectual Merit); and benefit society or advance desired societal outcomes (Broader Impacts)?
  • To what extent do the proposed activities suggest and explore creative, original, or potentially transformative concepts?
  • Is the plan for carrying out the proposed activities well-reasoned, well-organized, and based on a sound rationale? Does the plan incorporate a mechanism to assess success?
  • How well qualified is the individual, team, or organization to conduct the proposed activities?
  • Are there adequate resources available to the PI (either at the home organization or through collaborations) to carry out the proposed activities?

Broader impacts may be accomplished through the research itself, through the activities that are directly related to specific research projects, or through activities that are supported by, but are complementary to, the project. NSF values the advancement of scientific knowledge and activities that contribute to achievement of societally relevant outcomes. Such outcomes include, but are not limited to: full participation of women, persons with disabilities, and other underrepresented groups in science, technology, engineering, and mathematics (STEM); improved STEM education and educator development at any level; increased public scientific literacy and public engagement with science and technology; improved well-being of individuals in society; development of a diverse, globally competitive STEM workforce; increased partnerships between academia, industry, and others; improved national security; increased economic competitiveness of the United States; and enhanced infrastructure for research and education.

Proposers are reminded that reviewers will also be asked to review the Data Management Plan and the Postdoctoral Researcher Mentoring Plan, as appropriate.

Additional Solicitation Specific Review Criteria

Reviewers will be asked to interpret the two basic NSF review criteria in the context of the REU program. In addition, they will be asked to place emphasis on the following considerations:

  • Appropriateness and value of the research and professional development experience for the student participants, particularly the appropriateness of the research project(s) for undergraduate involvement and the nature of the students' participation in these activities.
  • Quality of the research environment, including the facilities, the preparedness of the research mentor(s) to guide undergraduate research, and the professional development opportunities for the students.
  • Appropriateness of the student recruitment and selection plans, including plans for conducting outreach, awareness, and recruitment of applicants from underrepresented groups, from outside the host institution, and from academic institutions with limited research opportunities in STEM.
  • Quality of plans for student preparation and for follow-through designed to promote continuation of student interest and involvement in research.
  • Appropriateness and cost-effectiveness of the budget, effectiveness of the plans for managing the project and evaluating the outcomes, and commitment of partners, if relevant.
  • For renewals of previously funded REU Sites: effectiveness of the previous Site.

B. Review and Selection Process

Proposals submitted in response to this program solicitation will be reviewed by Ad hoc Review and/or Panel Review.

Reviewers will be asked to evaluate proposals using two National Science Board approved merit review criteria and, if applicable, additional program specific criteria. A summary rating and accompanying narrative will generally be completed and submitted by each reviewer and/or panel. The Program Officer assigned to manage the proposal's review will consider the advice of reviewers and will formulate a recommendation.

After scientific, technical and programmatic review and consideration of appropriate factors, the NSF Program Officer recommends to the cognizant Division Director whether the proposal should be declined or recommended for award. NSF strives to be able to tell applicants whether their proposals have been declined or recommended for funding within six months. Large or particularly complex proposals or proposals from new awardees may require additional review and processing time. The time interval begins on the deadline or target date, or receipt date, whichever is later. The interval ends when the Division Director acts upon the Program Officer's recommendation.

After programmatic approval has been obtained, the proposals recommended for funding will be forwarded to the Division of Grants and Agreements or the Division of Acquisition and Cooperative Support for review of business, financial, and policy implications. After an administrative review has occurred, Grants and Agreements Officers perform the processing and issuance of a grant or other agreement. Proposers are cautioned that only a Grants and Agreements Officer may make commitments, obligations or awards on behalf of NSF or authorize the expenditure of funds. No commitment on the part of NSF should be inferred from technical or budgetary discussions with a NSF Program Officer. A Principal Investigator or organization that makes financial or personnel commitments in the absence of a grant or cooperative agreement signed by the NSF Grants and Agreements Officer does so at their own risk.

Once an award or declination decision has been made, Principal Investigators are provided feedback about their proposals. In all cases, reviews are treated as confidential documents. Verbatim copies of reviews, excluding the names of the reviewers or any reviewer-identifying information, are sent to the Principal Investigator/Project Director by the Program Officer. In addition, the proposer will receive an explanation of the decision to award or decline funding.

VII. Award Administration Information

A. Notification of the Award

Notification of the award is made to the submitting organization by an NSF Grants and Agreements Officer. Organizations whose proposals are declined will be advised as promptly as possible by the cognizant NSF Program administering the program. Verbatim copies of reviews, not including the identity of the reviewer, will be provided automatically to the Principal Investigator. (See Section VI.B. for additional information on the review process.)

B. Award Conditions

An NSF award consists of: (1) the award notice, which includes any special provisions applicable to the award and any numbered amendments thereto; (2) the budget, which indicates the amounts, by categories of expense, on which NSF has based its support (or otherwise communicates any specific approvals or disapprovals of proposed expenditures); (3) the proposal referenced in the award notice; (4) the applicable award conditions, such as Grant General Conditions (GC-1)* or Research Terms and Conditions*; and (5) any announcement or other NSF issuance that may be incorporated by reference in the award notice. Cooperative agreements also are administered in accordance with NSF Cooperative Agreement Financial and Administrative Terms and Conditions (CA-FATC) and the applicable Programmatic Terms and Conditions. NSF awards are electronically signed by an NSF Grants and Agreements Officer and transmitted electronically to the organization via e-mail.

*These documents may be accessed electronically on NSF's Website at https://www.nsf.gov/awards/managing/award_conditions.jsp?org=NSF . Paper copies may be obtained from the NSF Publications Clearinghouse, telephone (703) 292-8134 or by e-mail from [email protected] .

More comprehensive information on NSF Award Conditions and other important information on the administration of NSF awards is contained in the NSF Proposal & Award Policies & Procedures Guide (PAPPG) Chapter VII, available electronically on the NSF Website at https://www.nsf.gov/publications/pub_summ.jsp?ods_key=pappg .

Administrative and National Policy Requirements

Build America, Buy America

As expressed in Executive Order 14005, Ensuring the Future is Made in All of America by All of America's Workers (86 FR 7475), it is the policy of the executive branch to use terms and conditions of Federal financial assistance awards to maximize, consistent with law, the use of goods, products, and materials produced in, and services offered in, the United States.

Consistent with the requirements of the Build America, Buy America Act (Pub. L. 117-58, Division G, Title IX, Subtitle A, November 15, 2021), no funding made available through this funding opportunity may be obligated for an award unless all iron, steel, manufactured products, and construction materials used in the project are produced in the United States. For additional information, visit NSF's Build America, Buy America webpage.

C. Reporting Requirements

For all multi-year grants (including both standard and continuing grants), the Principal Investigator must submit an annual project report to the cognizant Program Officer no later than 90 days prior to the end of the current budget period. (Some programs or awards require submission of more frequent project reports). No later than 120 days following expiration of a grant, the PI also is required to submit a final project report, and a project outcomes report for the general public.

Failure to provide the required annual or final project reports, or the project outcomes report, will delay NSF review and processing of any future funding increments as well as any pending proposals for all identified PIs and co-PIs on a given award. PIs should examine the formats of the required reports in advance to assure availability of required data.

PIs are required to use NSF's electronic project-reporting system, available through Research.gov, for preparation and submission of annual and final project reports. Such reports provide information on accomplishments, project participants (individual and organizational), publications, and other specific products and impacts of the project. Submission of the report via Research.gov constitutes certification by the PI that the contents of the report are accurate and complete. The project outcomes report also must be prepared and submitted using Research.gov. This report serves as a brief summary, prepared specifically for the public, of the nature and outcomes of the project. This report will be posted on the NSF website exactly as it is submitted by the PI.

More comprehensive information on NSF Reporting Requirements and other important information on the administration of NSF awards is contained in the NSF Proposal & Award Policies & Procedures Guide (PAPPG) Chapter VII, available electronically on the NSF Website at https://www.nsf.gov/publications/pub_summ.jsp?ods_key=pappg .

The NSF Education & Training Application (ETAP) is a customizable common application system that connects individuals (such as students and teachers) with NSF-funded education and training opportunities and collects high-quality data from both applicants and participants in NSF-funded opportunities. It was initially developed to serve the REU Sites program but now serves multiple programs, and its use is growing. All investigators with REU Site awards or REU Supplement awards are welcome to use ETAP, which offers benefits to the PIs, the students, and NSF. Some NSF units require their REU Sites to use ETAP to manage student applications and collect student demographic information. When use of ETAP is required, it will be indicated in the award notice for the REU Site. Prospective PIs may find out whether specific NSF units require use of ETAP by consulting the discipline-specific REU web pages (for units that have them) or by contacting the program officers listed on the NSF REU Site Contacts web page .

PIs are required to provide the names and other basic information about REU student participants as part of annual and final project reports. In particular, in the report, each REU student who is supported with NSF REU funds must be identified as an "REU Participant," and the PI must provide the student's home institution and year of schooling completed (sophomore, junior, etc.). The REU students (like all participants listed in project reports) will receive an automated request from Research.gov to self-report their demographic information. PIs of REU Sites may also be required to provide additional information that enables NSF to track students beyond the period of their participation in the Site. For PIs who use NSF's ETAP to receive REU applications, that system collects, and provides reports on, the demographic information and other characteristics of both applicants and participants, and it will support efforts in longitudinal tracking.

REU Site awardees are expected to establish a website for the recruitment of students and dissemination of information about the REU Site and to maintain the website for the duration of the award. PIs are required to furnish the URL for the website to the cognizant NSF program officer no later than 90 days after receiving notification of the award.

VIII. Agency Contacts

Please note that the program contact information is current at the time of publishing. See program website for any updates to the points of contact.

General inquiries regarding this program should be made to:

For questions related to the use of NSF systems contact:

For questions relating to Grants.gov contact:

  • Grants.gov Contact Center: If the Authorized Organizational Representative (AOR) has not received a confirmation message from Grants.gov within 48 hours of submission of the application, please contact the Contact Center via telephone: 1-800-518-4726; e-mail: [email protected] .

IX. Other Information

The NSF website provides the most comprehensive source of information on NSF Directorates (including contact information), programs and funding opportunities. Use of this website by potential proposers is strongly encouraged. In addition, "NSF Update" is an information-delivery system designed to keep potential proposers and other interested parties apprised of new NSF funding opportunities and publications, important changes in proposal and award policies and procedures, and upcoming NSF Grants Conferences . Subscribers are informed through e-mail or the user's Web browser each time new publications are issued that match their identified interests. "NSF Update" also is available on NSF's website .

Grants.gov provides an additional electronic capability to search for Federal government-wide grant opportunities. NSF funding opportunities may be accessed via this mechanism. Further information on Grants.gov may be obtained at https://www.grants.gov .

Some NSF directorates/offices/divisions that manage REU Site proposals post discipline-specific REU web pages or fund an awardee to host a website providing information for the community of REU awardees in the discipline. These discipline-specific websites are listed, along with the NSF REU point of contact for each discipline, on the web page at https://www.nsf.gov/crssprgm/reu/reu_contacts.jsp .

The following resources, which summarize research on the impact of undergraduate research experiences, could be helpful to investigators as they are designing those experiences and considering approaches to evaluating them:

  • Brownell, Jayne E., and Lynn E. Swaner. Five High-Impact Practices: Research on Learning, Outcomes, Completion, and Quality; Chapter 4: "Undergraduate Research." Washington, DC: Association of American Colleges and Universities, 2010. Reviews published research on the effectiveness and outcomes of undergraduate research.
  • Laursen, Sandra, et al. Undergraduate Research in the Sciences: Engaging Students in Real Science. San Francisco: Jossey-Bass, 2010. Examines the benefits of undergraduate research, and provides advice for designing and evaluating the experiences.
  • Linn, Marcia C., Erin Palmer, Anne Baranger, Elizabeth Gerard, and Elisa Stone. "Undergraduate Research Experiences: Impacts and Opportunities." Science, Vol. 347, Issue 6222 (6 February 2015); DOI: 10.1126/science.1261757. Comprehensively examines the literature on the impacts of undergraduate research experiences, and identifies the gaps in knowledge and the opportunities for more rigorous research and assessment.
  • Lopatto, David. Science in Solution: The Impact of Undergraduate Research on Student Learning. Tucson, AZ: Research Corporation for Science Advancement, 2009. Findings from the author's pioneering surveys exploring the benefits of undergraduate research.
  • National Academies of Sciences, Engineering, and Medicine. Undergraduate Research Experiences for STEM Students: Successes, Challenges, and Opportunities. Washington, DC: The National Academies Press, 2017; DOI: 10.17226/24622. NSF-commissioned study that takes stock of what is known, and not known, about undergraduate research experiences and describes practices and research that faculty can apply to improve the experiences for students.
  • Russell, Susan H., Mary P. Hancock, and James McCullough. "Benefits of Undergraduate Research Experiences." Science, Vol. 316, Issue 5824 (27 April 2007); DOI: 10.1126/science.1140384. Summary of a large-scale, NSF-funded evaluation of undergraduate research opportunities, conducted by SRI International between 2002 and 2006. The study included REU Sites, REU Supplements, and undergraduate research opportunities sponsored by a range of other NSF programs.

Several additional resources offer practical help for designing particular components of REU projects:

  • Online Ethics Center for Engineering and Science. Information, references, and case studies for exploring ethics in engineering and science and designing training on the responsible and ethical conduct of research.
  • Center for the Improvement of Mentored Experiences in Research (CIMER). Publications and online resources, including an assessment platform, focusing on effective mentoring of beginning researchers.
  • EvaluateUR. A service (available through subscription) for evaluating independent student research.
  • Undergraduate Research Student Self-Assessment (URSSA). Online survey instrument for use in evaluating student outcomes of undergraduate research experiences. (Most REU Sites in the Biological Sciences use a version of this tool. See https://bioreu.org/resources/assessment-and-evaluation/ .)

Although some of the resources above were partially developed with NSF funding, the list is not meant to imply an NSF recommendation, and the list is not meant to be exhaustive.

Some NSF programs that support centers and facilities encourage the inclusion of REU activities as one component of those large projects; see the individual solicitations for details. Other NSF funding opportunities, such as the following, focus on providing structured research experiences similar to those supported by the REU program:

  • Directorate of Geosciences - Veterans Education and Training Supplement (GEO-VETS) Opportunity
  • Geoscience Research Experiences for Post-Baccalaureate Students (GEO-REPS) Supplement Opportunity
  • High School Student Research Assistantships (MPS-High): Funding to Broaden Participation in the Mathematical and Physical Sciences
  • International Research Experiences for Students (IRES)
  • Post-Associate and Post-Baccalaureate Research Experiences for LSAMP Students (PRELS) Supplement Opportunity
  • Research and Mentoring for Postbaccalaureates in Biological Sciences (RaMP)
  • Research Assistantships for High School Students (RAHSS): Funding to Broaden Participation in the Biological Sciences
  • Research Experience for Teachers (RET) Supplement Opportunity: Directorate for Biological Sciences
  • Research Experiences for Teachers (RET) in Engineering and Computer Science
  • Research Training Groups in the Mathematical Sciences (RTG)
  • Veterans Research Supplement (VRS) Program: Directorate for Engineering

As funding opportunities are added or expire, the above list will not remain current. Visit the NSF website ( https://new.nsf.gov/funding/opportunities ) for up-to-date information.

About The National Science Foundation

The National Science Foundation (NSF) is an independent Federal agency created by the National Science Foundation Act of 1950, as amended (42 USC 1861-75). The Act states the purpose of the NSF is "to promote the progress of science; [and] to advance the national health, prosperity, and welfare by supporting research and education in all fields of science and engineering."

NSF funds research and education in most fields of science and engineering. It does this through grants and cooperative agreements to more than 2,000 colleges, universities, K-12 school systems, businesses, informal science organizations and other research organizations throughout the US. The Foundation accounts for about one-fourth of Federal support to academic institutions for basic research.

NSF receives approximately 55,000 proposals each year for research, education and training projects, of which approximately 11,000 are funded. In addition, the Foundation receives several thousand applications for graduate and postdoctoral fellowships. The agency operates no laboratories itself but does support National Research Centers, user facilities, certain oceanographic vessels and Arctic and Antarctic research stations. The Foundation also supports cooperative research between universities and industry, US participation in international scientific and engineering efforts, and educational activities at every academic level.

Facilitation Awards for Scientists and Engineers with Disabilities (FASED) provide funding for special assistance or equipment to enable persons with disabilities to work on NSF-supported projects. See the NSF Proposal & Award Policies & Procedures Guide Chapter II.F.7 for instructions regarding preparation of these types of proposals.

The National Science Foundation has Telephonic Device for the Deaf (TDD) and Federal Information Relay Service (FIRS) capabilities that enable individuals with hearing impairments to communicate with the Foundation about NSF programs, employment or general information. TDD may be accessed at (703) 292-5090 and (800) 281-8749, FIRS at (800) 877-8339.

The National Science Foundation Information Center may be reached at (703) 292-5111.

The National Science Foundation promotes and advances scientific progress in the United States by competitively awarding grants and cooperative agreements for research and education in the sciences, mathematics, and engineering.

To get the latest information about program deadlines, to download copies of NSF publications, and to access abstracts of awards, visit the NSF Website at

2415 Eisenhower Avenue, Alexandria, VA 22314

NSF Information Center: (703) 292-5111

TDD: (703) 292-5090

Publications: e-mail [email protected] or telephone (703) 292-8134

Privacy Act And Public Burden Statements

The information requested on proposal forms and project reports is solicited under the authority of the National Science Foundation Act of 1950, as amended. The information on proposal forms will be used in connection with the selection of qualified proposals; and project reports submitted by awardees will be used for program evaluation and reporting within the Executive Branch and to Congress. The information requested may be disclosed to qualified reviewers and staff assistants as part of the proposal review process; to proposer institutions/grantees to provide or obtain data regarding the proposal review process, award decisions, or the administration of awards; to government contractors, experts, volunteers and researchers and educators as necessary to complete assigned work; to other government agencies or other entities needing information regarding applicants or nominees as part of a joint application review process, or in order to coordinate programs or policy; and to another Federal agency, court, or party in a court or Federal administrative proceeding if the government is a party. Information about Principal Investigators may be added to the Reviewer file and used to select potential candidates to serve as peer reviewers or advisory committee members. See System of Record Notices , NSF-50 , "Principal Investigator/Proposal File and Associated Records," and NSF-51 , "Reviewer/Proposal File and Associated Records." Submission of the information is voluntary. Failure to provide full and complete information, however, may reduce the possibility of receiving an award.

An agency may not conduct or sponsor, and a person is not required to respond to, an information collection unless it displays a valid Office of Management and Budget (OMB) control number. The OMB control number for this collection is 3145-0058. Public reporting burden for this collection of information is estimated to average 120 hours per response, including the time for reviewing instructions. Send comments regarding the burden estimate and any other aspect of this collection of information, including suggestions for reducing this burden, to:

Suzanne H. Plimpton
Reports Clearance Officer
Policy Office, Division of Institution and Award Support
Office of Budget, Finance, and Award Management
National Science Foundation
Alexandria, VA 22314

  • Open access
  • Published: 09 September 2024

Utilizing CT imaging for evaluating late gastrointestinal tract side effects of radiotherapy in uterine cervical cancer: a risk regression analysis

  • Pooriwat Muangwong 1 ,
  • Nutthita Prukvaraporn 2 ,
  • Kittikun Kittidachanan 1 ,
  • Nattharika Watthanayuenyong 2 ,
  • Imjai Chitapanarux 1 &
  • Wittanee Na Chiangmai 2  

BMC Medical Imaging, volume 24, Article number: 235 (2024)

Radiotherapy (RT) is effective for cervical cancer but causes late side effects (SE) to nearby organs. These late SE occur more than 3 months after RT and are rated by clinical findings to determine their severity. While imaging studies describe late gastrointestinal (GI) SE, none demonstrate the correlation between the findings and the toxicity grading. In this study, we demonstrated the late GI toxicity prevalence, CT findings, and their correlation.

We retrospectively studied uterine cervical cancer patients treated with RT between 2015 and 2018. Patient characteristics and treatment(s) were obtained from the hospital’s databases. Late RTOG/EORTC GI SE and CT images were obtained during the follow-up. Post-RT GI changes were reviewed from CT images using pre-defined criteria. Risk ratios (RR) were calculated for CT findings, and multivariable log binomial regression determined adjusted RRs.

This study included 153 patients, with a median age of 57 years (IQR 49–65). The prevalence of ≥ grade 2 RTOG/EORTC late GI SE was 27.5% (33 patients). CT findings showed 91 patients (59.48%) with enhanced bowel wall (BW) thickening, 3 (1.96%) with bowel obstruction, 7 (4.58%) with bowel perforation, 6 (3.92%) with fistula, 0 (0%) with bowel ischemia, and 0 (0%) with GI bleeding. Adjusted RRs showed that enhanced BW thickening (RR 9.77, 95% CI 2.64–36.07, p = 0.001), bowel obstruction (RR 5.05, 95% CI 2.30–11.09, p < 0.001), and bowel perforation (RR 3.82, 95% CI 1.96–7.44, p < 0.001) were associated with higher late GI toxicity grades.

Conclusions

Our study shows CT findings correlate with grade 2–4 late GI toxicity. Future research should validate and refine these findings with different imaging and toxicity grading systems to assess their potential predictive value.

Introduction

Radiotherapy (RT) stands as a common and effective approach for treating uterine cervical cancer. It serves as both a post-surgery option for patients with unfavorable pathological characteristics and as a primary treatment [ 1 , 2 , 3 ]. Despite advancements in radiotherapy that enable precise targeting of radiation to specific areas, nearby healthy organs inevitably receive some portion of the radiation dose, leading to side effects that affect these neighboring organs [ 4 , 5 , 6 ].

Late side effects of RT refer to the consequences as a result of radiation therapy that occur more than three months after irradiation [ 7 ]. These consequences are primarily attributed to ischemia and fibrotic alterations of normal organs [ 8 ]. In the gastrointestinal (GI) system, a spectrum of toxicities arises, spanning from mild forms like enteritis, intestinal wall fibrosis, and telangiectasia to severe manifestations including ulcers, hemorrhages, strictures, fistulas, and perforations. Clinical manifestations can vary and encompass symptoms such as abdominal pain, diarrhea, nausea, vomiting, flatulence, weight loss, and bowel obstruction [ 4 , 5 , 6 , 9 , 10 , 11 ]. The assessment of organ toxicity severity typically relies on the evaluation of patient symptoms, clinical measurements, and therapy interventions [ 12 , 13 , 14 ].

Imaging is important for evaluating late GI toxicity [ 6 , 15 ]. Several studies have demonstrated image-related alterations in GI organs receiving radiotherapy. These image findings include bowel wall thickening, strictures, tethering, small bowel obstruction, perforation, and fistula formation, all of which can be identified in patients following radiotherapy [ 16 , 17 , 18 , 19 , 20 ].

In this context, our study explores the potential utility of CT findings as indicators for predicting late grade 2–4 GI toxicity in patients with cervical cancer treated with RT. By examining the prevalence of late GI side effects, the occurrence of CT findings associated with GI toxicities, and the correlation between these findings and late GI side effects, we aim to offer an understanding of the role of imaging in assessing the late GI side effect of radiotherapy. Through this investigation, we aim to contribute insights into the potential integration of CT findings as a supplement to conventional clinical evaluations in determining treatment-related toxicities.

Materials and methods

A retrospective observational cohort study was undertaken to examine the correlation between CT findings and GI late adverse effects in patients with uterine cervical cancer who underwent radiotherapy at Maharaj Nakorn Chiang Mai Hospital in Thailand from January 2015 to December 2018. The inclusion criteria were: (1) a confirmed histological diagnosis of uterine cervical cancer at FIGO 2018 stages IA1-IVA, excluding small cell carcinoma, malignant melanoma, and cervical sarcoma; (2) treatment with radiotherapy (RT) using conventional doses per fraction of external beam RT, with or without brachytherapy, following surgery or as definitive curative treatment; (3) a minimum follow-up period of three months post-RT; and (4) availability of at least one CT image captured no less than three months after RT.

Baseline patient characteristics, treatment details, and grading of late GI tract toxicity were obtained from the radiation oncology database and hospital medical records, using the RTOG/EORTC late toxicity criteria. CT images were retrieved from the hospital’s Picture Archiving and Communication System (PACS). The FIGO staging was updated to reflect the 2018 FIGO staging classification.

This study adhered to the principles of the Helsinki Declaration and was granted approval by our institute’s Ethical Committee under number 499/2021.

RT, chemotherapy, and follow-up

For definitive RT, 50 Gray (Gy) of whole pelvic RT (WPRT) was prescribed. In cases of paraaortic lymph node involvement or tumor involvement of the lower one-third of the vagina, RT fields were extended to include the paraaortic lymph node (PAN) area or the bilateral inguinal lymph node area, respectively. In the final week of external-beam RT, a four-session brachytherapy boost of 7 Gy per session was initiated.

In the postoperative setting, 50 Gy of WPRT was prescribed. Brachytherapy was administered to patients with a positive vaginal margin.

Either weekly cisplatin 40 mg/m² or weekly carboplatin AUC2 was administered concurrently with RT in patients with FIGO stages IB3, IIA2, IIB, IIIC1, and IIIC2 receiving definitive RT, as well as those who had undergone surgery and had positive surgical margins, lymph node metastases, or parametrial invasion.

Following the completion of treatment, patients were evaluated for clinical response through per-vaginal examination, and treatment toxicities were assessed according to the RTOG/EORTC late toxicity criteria [ 13 ]. Evaluations were conducted every 3 months for the first year, every 4 months for the second year, every 6 months for the next 2 years, and annually thereafter. The following criteria were used to grade late GI toxicity during follow-up: grade 0 – none; grade 1 – mild diarrhea, mild cramping, bowel movement 5 times daily, slight rectal discharge or bleeding; grade 2 – moderate diarrhea and colic, bowel movement > 5 times daily, excessive rectal mucus or intermittent bleeding; grade 3 – obstruction or bleeding requiring surgery; grade 4 – necrosis/perforation, fistula; and grade 5 – death related to radiation late effects.

Within the framework of this study, late GI toxicity was categorized into two groups for analysis: grade 0–1 group and grade 2–5 group.

CT image assessment

CT images of the pelvis or the whole abdomen were used to assess tumor response in patients with initial pelvic or paraaortic nodal metastasis, as well as to evaluate those suspected of having recurrent or persistent disease. Additionally, CT was employed to assess radiotherapy toxicity in individuals presenting with symptoms.

All CT scans were carried out with multidetector CT scanners and intravenous contrast media. Axial images of the abdomen and pelvic cavity in the portal venous phase were acquired after injection of 100–150 ml of iodinated contrast media (320–350 mg of iodine per milliliter) at a flow rate of 3–5 ml/sec. Axial images were reconstructed at 2-mm and 5-mm thicknesses. Multiplanar reconstructions comprising coronal and sagittal images were created at a 3-mm thickness.

For this study, we selected CT images acquired within one month of the clinical evaluation of late toxicities during follow-up assessments. When multiple CT images were available, we evaluated the most recent scan corresponding to the highest grade of late GI toxicity.

An experienced, board-certified radiologist and a third-year diagnostic radiology resident jointly reviewed the axial CT images from the portal venous phase. They conducted the review in consensus and without access to clinical data, focusing on the following CT findings:

Enhanced bowel wall thickening was defined as single-wall thickness exceeding 3 mm in distended loops and exceeding 5 mm in collapsed loops [ 20 ] (Fig. 1A and B).

Bowel obstruction was defined as upstream dilated bowel loops (greater than 2.5 cm in the small bowel and greater than 6 cm in the large bowel) with a transition point [ 20 ] (Fig. 1C).

Bowel perforation was defined as bowel wall disruption from the mucosa to the serosa or the presence of pneumoperitoneum [ 18 ] (Fig. 1D).

Fistula formation was defined as the presence of a connection between the lumen of a bowel loop and the lumen of an adjacent organ such as another bowel loop, the bladder, the uterus, the vagina, or the skin [ 17 ] (Fig. 1E).

Bowel ischemia was defined as transmural hyper-enhancement suggestive of early ischemia, or a hypo-enhancing or non-enhancing bowel wall suggestive of intermediate- to late-stage bowel ischemia (Fig. 1F).

GI bleeding was defined as contrast extravasation into the intestinal lumen (Fig. 1G).

Fig. 1 CT findings of radiation-induced late gastrointestinal toxicity. (A) Bowel wall thickening with target water bowel wall enhancement in distended bowel loop (arrow); (B) Bowel wall thickening with isoattenuation bowel wall enhancement in collapsed bowel loop (arrow); (C) Bowel obstruction; dilatation of the bowel loops [*] with transition point (arrow); (D) Bowel wall disruption (arrow) in bowel perforation; (E) Sagittal CT shows fistula formation (arrow), connection between small bowel [@] and urinary bladder [#]; (F) Axial CT shows non-enhancing bowel wall (arrow) suggestive of intermediate- to late-stage bowel ischemia; (G) Axial CT shows contrast extravasation into the rectal lumen (arrow)
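For readers who prefer the numeric cut-offs above gathered in one place, the following is a minimal, hypothetical sketch that encodes the wall-thickness and bowel-diameter thresholds from the definitions. The function names and inputs are illustrative only; in the study the findings were scored by consensus visual review, not by automated measurement.

```python
# Illustrative encoding of the study's numeric CT thresholds.
# Measurements are assumed to come from a reader; this is not an automated
# detector, just a checklist of the pre-defined cut-offs.

def wall_thickened(thickness_mm: float, loop_distended: bool) -> bool:
    """Enhanced bowel wall thickening: >3 mm in distended loops, >5 mm in collapsed loops."""
    return thickness_mm > (3.0 if loop_distended else 5.0)

def bowel_obstructed(diameter_cm: float, small_bowel: bool, transition_point: bool) -> bool:
    """Obstruction: upstream dilatation (>2.5 cm small bowel, >6 cm large bowel) with a transition point."""
    limit = 2.5 if small_bowel else 6.0
    return diameter_cm > limit and transition_point

# Example: a 4 mm wall in a distended loop meets the thickening criterion.
print(wall_thickened(4.0, loop_distended=True))   # True
```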

Statistical analysis

Based on our pilot data, we determined that the largest required sample size was driven by fistula formation, which had a prevalence of 2% in the late GI toxicity grade 0–1 group and 10% in the grade 2–4 group. With a power of 0.8 and a significance level of 0.05, our study required a sample size of 138.
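As an illustration of how a calculation of this type can be set up, the sketch below performs a two-sample proportions power analysis in Python using the 2% and 10% prevalences quoted above. It assumes equal group sizes and a normal-approximation (Cohen's h) method; because the authors' exact formula and allocation ratio are not stated, the number it returns is illustrative and will not exactly reproduce the reported total of 138.

```python
# Sketch of a two-sample proportion power calculation (illustrative only).
# Assumes equal group sizes and a normal-approximation (Cohen's h) method.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

p_low, p_high = 0.02, 0.10                 # fistula prevalence in the two toxicity groups
h = proportion_effectsize(p_high, p_low)   # Cohen's h effect size

n_per_group = NormalIndPower().solve_power(
    effect_size=h, alpha=0.05, power=0.80, alternative="two-sided"
)
print(round(n_per_group))                  # ~61 per group under these assumptions
```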

Patient characteristics, treatments, late GI toxicity, and CT findings were summarized using descriptive statistics. Quantitative data were presented as medians with interquartile ranges (IQR), while categorical data were expressed as numbers with corresponding percentages. To assess group differences, the Wilcoxon rank-sum test was employed for quantitative variables, while Fisher’s exact test was used for categorical variables. Risk ratios were computed for CT findings, and further risk ratios, adjusted for patient age, chemotherapy regimen, radiotherapy technique, treatment fields, brachytherapy, histology, and treatment objective, were determined using a multivariable log binomial regression with a Poisson working model. Statistical significance was set at p  < 0.05. All analyses were conducted using STATA software version 16 (Stata Corp LLC, Texas, USA).
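For orientation, the sketch below shows one common way to obtain adjusted risk ratios of this kind: a Poisson regression with robust (sandwich) standard errors, which approximates the log-binomial / Poisson-working-model approach described above. The data file and column names are hypothetical; the authors carried out their analysis in STATA, not Python.

```python
# Illustrative sketch: adjusted risk ratios via Poisson regression with
# robust standard errors. Column names are hypothetical; the outcome is
# binary (1 = grade 2-4 late GI toxicity).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("late_gi_toxicity.csv")   # hypothetical per-patient dataset

model = smf.glm(
    "tox_grade2_4 ~ bw_thickening + age + chemo + rt_technique"
    " + rt_fields + brachytherapy + histology + treatment_intent",
    data=df,
    family=sm.families.Poisson(),
)
fit = model.fit(cov_type="HC0")            # robust variance for valid confidence intervals

risk_ratios = np.exp(fit.params)           # exponentiated coefficients are adjusted RRs
conf_int = np.exp(fit.conf_int())
print(pd.concat([risk_ratios, conf_int], axis=1))
```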

This study included 153 eligible patients with a median age of 57 years (IQR 49–65). The most prevalent tumor stages were IIB (51 patients, 33.33%), IIIB (45 patients, 29.41%), and IIIC2 (19 patients, 11.11%). Radiation techniques consisted of 84 cases of conventional (54.90%), 39 cases of three-dimensional conformal RT (3D-CRT) (25.49%), and 30 cases of intensity modulated radiation therapy (IMRT) (19.61%). The radiation fields encompassed WPRT alone in 124 cases (81.05%), WPRT with PAN in 16 cases (10.46%), WPRT with inguinal area in 10 cases (6.54%), and WPRT with both PAN and inguinal area in 3 cases (1.96%). Brachytherapy was administered to 136 patients (88.89%). Chemotherapy was administered to 127 patients (82.81%), consisting of cisplatin in 121 patients and carboplatin in 5 patients. The treatment setting was definitive for 140 (91.50%) patients and post-operative for 13 (8.50%) patients. Except for brachytherapy, patient characteristics and treatments were comparable between the RTOG/EORTC late GI toxicity grade 0–1 group and the grade 2–4 group. (Table  1 )

The incidence of RTOG/EORTC late GI toxicity grade 0 was observed in 110 patients (71.90%), while grades 1, 2, 3, and 4 toxicities were reported in 10 (6.54%), 13 (8.50%), 14 (9.15%), and 6 (3.92%) patients, respectively. No grade 5 toxicities were recorded.

CT findings revealed that out of the total number of 153 patients, 91 patients (59.48%) had enhanced thickened bowel walls, 3 (1.96%) had bowel obstruction, 7 (4.58%) had bowel perforation, 6 (3.92%) had fistula, 0 (0%) had bowel ischemia, and 0 (0%) had GI bleeding. A comparison of positive CT findings between the grade 0–1 and grade 2–4 toxicity groups is presented in Table  2 . The outcomes demonstrated significant differences between the two groups for enhanced bowel wall thickening, bowel obstruction, and bowel perforation, but not for fistula formation.

Table 3 shows the risk ratios of CT findings, excluding bowel ischemia and GI bleeding, as these did not yield any positive findings in this study. Risk ratios of all CT findings, except for fistula formation, were significant. These outcomes suggest a higher likelihood of grade 2–4 late GI toxicity in cases with positive CT findings compared with those with negative findings. After multivariable analysis, adjusting for age, chemotherapy regimen, radiotherapy technique, treatment fields, brachytherapy, histology, and treatment objective, the results consistently indicated an elevated risk of grade 2–4 late GI toxicity in patients with positive CT findings across all categories, except for fistula formation.

While conventional grading systems primarily rely on patients' reported symptoms and the treatments they receive to assess the severity of toxicities [ 12 , 13 , 14 ], our study revealed that CT findings can also serve as an additional determinant of grade 2–4 toxicity. Specifically, our research highlighted that certain CT findings, namely enhanced thickened bowel walls, bowel obstruction, and bowel perforation, were linked to more severe late GI toxicity. These CT findings can therefore help in determining the severity of GI toxicity.

In our study, we found that enhanced bowel wall thickening, bowel obstruction, and bowel perforation were the three CT findings significantly associated with a higher grade of late GI toxicity. Among these three findings, enhanced bowel wall thickening exhibited the most substantial impact in predicting grade 2–4 toxicity compared to those who had negative findings, demonstrating a relative risk (RR) of 10.56. This was followed by bowel obstruction, with an RR of 5.0, and bowel perforation, with an RR of 4.63. These outcomes remained consistent even after multivariate analysis, which adjusted for patient characteristics and treatment factors, yielding respective RRs of 9.77, 5.05, and 3.82.

Our study also unveiled that enhanced bowel wall thickening was the most prevalent finding, observed in over half of the patients: 50% of the late GI toxicity grade 0–1 group and 93.94% of the grade 2–4 group. This can be attributed to the pathophysiological alterations in the irradiated bowel wall, leading to increased collagen deposition and subsequent thickening and immobilization of the bowel loop [ 6 ]. Although bowel obstruction and bowel perforation were less prevalent, these findings are more likely to prompt intervention, which underscores their clinical significance.
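As a cross-check of the unadjusted estimate, the 2×2 table for enhanced bowel wall thickening can be reconstructed from the reported group sizes (120 patients with grade 0–1 toxicity, 33 with grade 2–4) and the prevalences quoted above (50% and 93.94%). The sketch below recomputes the unadjusted risk ratio, which reproduces the RR of 10.56 cited in the text; the confidence interval uses a standard log-RR approximation and is illustrative only.

```python
# Reconstruct the 2x2 table for enhanced bowel wall thickening from the
# percentages reported in the text and recompute the unadjusted risk ratio.
import numpy as np

a = 31   # thickening present, grade 2-4 toxicity (93.94% of 33)
b = 60   # thickening present, grade 0-1 toxicity (50% of 120)
c = 2    # thickening absent,  grade 2-4 toxicity
d = 60   # thickening absent,  grade 0-1 toxicity

rr = (a / (a + b)) / (c / (c + d))                      # ~10.56, matching the text
se = np.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))         # SE of log(RR)
ci = np.exp(np.log(rr) + np.array([-1.96, 1.96]) * se)  # 95% CI, log-normal approximation
print(round(rr, 2), ci.round(2))
```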

Our findings indicated that using fistula formation as an indicator for evaluating grade 2–4 toxicity yielded negative results. Additionally, we identified one patient with bowel obstruction who was classified in the toxicity grade 0–1 group. These results highlight the limitations of relying solely on clinical assessment for evaluating late GI toxicity. Had these patients undergone both CT imaging and clinical evaluation, the treatment prompted by the CT findings might have shifted their toxicity grading, potentially raising it to grade 3 or 4. These results emphasize the advantages of incorporating CT imaging into the follow-up process, rather than relying solely on clinical evaluation. This approach is in line with current guidelines that advocate for the inclusion of imaging during follow-up [ 2 , 3 ].

Despite the highest RR of 9.77 observed in cases of enhanced thickened bowel wall, which implies that patients with positive CT findings in this category are nearly ten times more likely to experience grade 2–4 late GI toxicity than those with negative findings, our findings revealed that half of the patients categorized under grade 0–1 toxicity exhibited positive findings. Given that prior research has highlighted the tendency for physician-reported toxicities to underestimate the true impact when compared to patient-reported outcomes [ 21 , 22 , 23 , 24 ], it becomes essential to place special emphasis on individuals presenting with an enhanced thickened bowel wall. Ensuring that these patients do not experience GI symptoms is of paramount importance, as any indication of symptoms should trigger prompt treatment [ 4 ].

To the best of our knowledge, our study is the first to demonstrate the correlation between CT findings and late grade 2–4 GI toxicity in cervical cancer. We used basic CT scan results and highlighted how each finding can predict late GI toxicity. This approach could become a regular component of patient care.

There were limitations in our study. Firstly, the retrospective nature of our study introduces potential biases and confounding. Secondly, our study exclusively utilized CT images and assessed late GI toxicity based solely on RTOG/EORTC late toxicity criteria, focusing only on cervical cancer. These factors may limit generalizability of our results to other imaging modalities, alternative grading systems, or other malignancies requiring pelvic irradiation, such as endometrial cancer, where treatment protocols differ and vary based on surgical pathology and molecular classification [ 25 , 26 ]. Thirdly, our study relied solely on binary outcomes derived from CT findings, potentially overlooking specific details within the findings.

Our study demonstrated the potential for incorporating CT findings into the late GI toxicity assessment for refining severity categorization beyond conventional grading systems. Integrating CT imaging into follow-up protocols could enhance the accuracy of late GI toxicity evaluation. Further investigations should explore alternative imaging modalities, such as CT enterography or MR enterography, and consider using alternative toxicity grading systems like CTCAE to validate our findings. Additionally, the study of radiomic features in conjunction with other malignancies requiring pelvic irradiation may provide advantages in finely assessing treatment toxicity. Prospective studies are essential to validate and enhance the robustness of our current findings.

Our study indicates that CT findings, particularly enhanced thickened bowel wall, bowel obstruction, and bowel perforation, are correlated with grade 2–4 late GI toxicity. While acknowledging the retrospective design and inherent limitations, this approach could enhance the assessment of treatment-related side effects. Further research incorporating different imaging modalities and toxicity grading systems is warranted to validate our findings and to assess their potential predictive capability.

Data availability

The data that support the findings of this study are available from the corresponding author upon reasonable request. The data are not publicly available due to containing information that could compromise research participant privacy.

Cibula D, Raspollini MR, Planchamp F, Centeno C, Chargari C, Felix A, et al. ESGO/ESTRO/ESP guidelines for the management of patients with cervical cancer - update 2023. Int J Gynecol Cancer. 2023;33:649–66.

National Comprehensive Cancer Network. Cervical Cancer. (Version 1.2023). https://www.nccn.org/professionals/physician_gls/pdf/cervical.pdf . Accessed 24 Aug 2023.

Marth C, Landoni F, Mahner S, McCormack M, Gonzalez-Martin A, Colombo N. Cervical cancer: ESMO Clinical Practice guidelines for diagnosis, treatment and follow-up. Ann Oncol. 2017;28:iv72–83.

Andreyev HJN. Pelvic radiation disease. Color Dis. 2015;17:2–6.

McCaughan H, Boyle S, McGoran JJ. Update on the management of the gastrointestinal effects of radiation. World J Gastrointest Oncol. 2021;13:400–8.

Theis VS, Sripadam R, Ramani V, Lal S. Chronic Radiation Enteritis. Clin Oncol. 2010;22:70–83.

Frazzoni L, La Marca M, Guido A, Morganti AG, Bazzoli F, Fuccio L. Pelvic radiation disease: updates on treatment options. World J Clin Oncol. 2015;6:272–80.

Stewart FA, Akleyev AV, Hauer-Jensen M, Hendry JH, Kleiman NJ, MacVittie TJ, et al. ICRP PUBLICATION 118: ICRP Statement on tissue reactions and early and late effects of Radiation in Normal tissues and organs — threshold doses for tissue reactions in a Radiation Protection Context. Ann ICRP. 2012;41:1–322.

Hasleton PS, Carr N, Schofield PF. Vascular changes in radiation bowel disease. Histopathology. 1985;9:517–34.

Kountouras J, Zavos C. Recent advances in the management of radiation colitis. World J Gastroenterol. 2008;14:7289.

Andreyev HJN. Gastrointestinal problems after pelvic radiotherapy: the past, the Present and the future. Clin Oncol. 2007;19:790–9.

U.S. Department of Health and Human Service. Common Terminology Criteria for Adverse Events (CTCAE) Version 5.0. 2017. https://ctep.cancer.gov/protocolDevelopment/electronic_applications/docs/CTCAE_v5_Quick_Reference_8.5x11.pdf . Accessed 16 Jun 2019.

Cox JD, Stetz JA, Pajak TF. Toxicity criteria of the Radiation Therapy Oncology Group (RTOG) and the European organization for research and treatment of cancer (EORTC). Int J Radiat Oncol Biol Phys. 1995;31:1341–6.

LENT SOMA scales for all anatomic sites. Int J Radiat Oncol. 1995;31:1049–91.

Henson CC, Davidson SE, Ang Y, Babbs C, Crampton J, Kelly M, et al. Structured gastroenterological intervention and improved outcome for patients with chronic gastrointestinal symptoms following pelvic radiotherapy. Support Care Cancer. 2013;21:2255–65.

Addley HC, Vargas HA, Moyle PL, Crawford R, Sala E. Pelvic imaging following chemotherapy and radiation therapy for gynecologic malignancies. Radiographics. 2010;30:1843–56.

Maturen KE, Feng MU, Wasnik AP, Azar SF, Appelman HD, Francis IR, et al. Imaging effects of radiation therapy in the abdomen and pelvis: evaluating innocent bystander tissues. Radiographics. 2013;33:599–619.

Sung HK, Sang SS, Yong YJ, Suk HH, Jin WK, Heoung KK. Gastrointestinal tract perforation: MDCT findings according to the perforation sites. Korean J Radiol. 2009;10:63–70.

Wittenberg J, Harisinghani MG, Jhaveri K, Varghese J, Mueller PR. Algorithmic approach to CT diagnosis of the abnormal bowel wall. Radiographics. 2002;22:1093–107.

Viswanathan C, Bhosale P, Ganeshan DM, Truong MT, Silverman P, Balachandran A. Imaging of complications of oncological therapy in the gastrointestinal system. Cancer Imaging. 2012;12:163–72.

Kirchheiner K, Nout R, Lindegaard J, Petrič P, Limbergen EV, Jürgenliemk-Schulz IM, et al. Do clinicians and patients agree regarding symptoms? A comparison after definitive radiochemotherapy in 223 uterine cervical cancer patients. Strahlentherapie Und Onkol. 2012;188:933–9.

Vistad I, Cvancarova M, Fosså SD, Kristensen GB. Postradiotherapy Morbidity in Long-Term survivors after locally Advanced Cervical Cancer: how well do Physicians’ assessments agree with those of their patients? Int J Radiat Oncol. 2008;71:1335–42.

Di Maio M, Gallo C, Leighl NB, Piccirillo MC, Daniele G, Nuzzo F, et al. Symptomatic toxicities experienced during anticancer treatment: agreement between patient and physician reporting in three randomized trials. J Clin Oncol. 2015;33:910–5.

Jensen NBK, Pötter R, Kirchheiner K, Fokdal L, Lindegaard JC, Kirisits C, et al. Bowel morbidity following radiochemotherapy and image-guided adaptive brachytherapy for cervical cancer: physician- and patient reported outcome from the EMBRACE study. Radiother Oncol. 2018;127:431–9.

Besharat AR, Giannini A, Caserta D. Pathogenesis and treatments of endometrial carcinoma. Clin Exp Obstet Gynecol. 2023;50:229.

D’Oria O, Giannini A, Besharat AR, Caserta D. Management of Endometrial Cancer: Molecular Identikit and tailored therapeutic Approach. Clin Exp Obstet Gynecol. 2023;50:210.

Download references

Acknowledgements

Not applicable.

Author information

Authors and affiliations.

Division of Radiation Oncology, Department of Radiology, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand

Pooriwat Muangwong, Kittikun Kittidachanan & Imjai Chitapanarux

Division of Diagnostic Radiology, Department of Radiology, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand

Nutthita Prukvaraporn, Nattharika Watthanayuenyong & Wittanee Na Chiangmai

You can also search for this author in PubMed   Google Scholar

Contributions

Conceptualization: PM, NP, IC, WN; Methodology: PM, NP, IC, WN; Investigation: PM, NP, WN; Formal analysis: PM, NP, KK, NW, WN; Writing – Original Draft: PM, NP, WN; Writing – review and editing: KK, NW, IC; Supervision: IC, WN; All authors reviewed the manuscript.

Corresponding author

Correspondence to Wittanee Na Chiangmai .

Ethics declarations

Ethics approval and consent to participate.

This study was approved by Research Ethic Committee No. 4, Faculty of Medicine, Chiang Mai University (Approval No. 499/2021). The data collection was authorized by the faculty. Informed consent was not required by the faculty and Research Ethic Committee due to retrospective study with anonymized patient identification. This study was carried out in accordance with the Helsinki Declaration.

Clinical trial number

Not Applicable.

Consent for publication

Competing interests.

The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

Reprints and permissions

About this article

Cite this article.

Muangwong, P., Prukvaraporn, N., Kittidachanan, K. et al. Utilizing CT imaging for evaluating late gastrointestinal tract side effects of radiotherapy in uterine cervical cancer: a risk regression analysis. BMC Med Imaging 24 , 235 (2024). https://doi.org/10.1186/s12880-024-01420-3

Download citation

Received : 19 February 2024

Accepted : 02 September 2024

Published : 09 September 2024

DOI : https://doi.org/10.1186/s12880-024-01420-3

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

  • Cervical cancer
  • Radiotherapy
  • Gastrointestinal

BMC Medical Imaging

ISSN: 1471-2342
