
The Craft of Writing a Strong Hypothesis

Deeptanshu D


Writing a hypothesis is one of the essential elements of a scientific research paper. It needs to be to the point, clearly communicating what your research is trying to accomplish. A blurry, drawn-out, or overly complex hypothesis can confuse your readers, or worse, the editor and peer reviewers.

A captivating hypothesis is not too intricate. This blog will take you through the process so that, by the end of it, you have a better idea of how to convey your research paper's intent in just one sentence.

What is a Hypothesis?

The first step in your scientific endeavor, a hypothesis, is a strong, concise statement that forms the basis of your research. It is not the same as a thesis statement, which is a brief summary of your research paper.

The sole purpose of a hypothesis is to predict your paper's findings, data, and conclusion. It comes from a place of curiosity and intuition. When you write a hypothesis, you're essentially making an educated guess based on prior scientific knowledge and evidence, which is then proven or disproven through the scientific method.

The reason for undertaking research is to observe a specific phenomenon. A hypothesis, therefore, lays out what the said phenomenon is, and it does so through two variables: an independent and a dependent variable.

The independent variable is the cause behind the observation, while the dependent variable is the effect of the cause. A good example of this is “mixing red and blue forms purple.” In this hypothesis, mixing red and blue is the independent variable as you're combining the two colors at your own will. The formation of purple is the dependent variable as, in this case, it is conditional to the independent variable.

Different Types of Hypotheses

Types of hypotheses

Some would stand by the notion that there are only two types of hypotheses: a null hypothesis and an alternative hypothesis. While that is true in a strict statistical sense, it is better to distinguish the other common forms as well, since these terms come up often and you might otherwise be left without context.

Apart from null and alternative, there are complex, simple, directional, non-directional, statistical, and associative and causal hypotheses. These categories are not mutually exclusive, as one hypothesis can tick many boxes, but knowing the distinctions between them will make it easier for you to construct your own.

1. Null hypothesis

A null hypothesis proposes no relationship between two variables. Denoted by H0, it is a negative statement like “Attending physiotherapy sessions does not affect athletes' on-field performance.” Here, the author claims physiotherapy sessions have no effect on on-field performance; even if an effect is observed, the null hypothesis treats it as a coincidence.

2. Alternative hypothesis

Considered to be the opposite of a null hypothesis, an alternative hypothesis is denoted as H1 or Ha. It explicitly states that the independent variable affects the dependent variable. Good alternative hypothesis examples are “Attending physiotherapy sessions improves athletes' on-field performance” and “Water boils at 100 °C.” The alternative hypothesis further branches into directional and non-directional.

  • Directional hypothesis: A hypothesis that states whether the effect will be positive or negative is called a directional hypothesis. It accompanies H1 with either the ‘<’ or ‘>’ sign.
  • Non-directional hypothesis: A non-directional hypothesis only claims an effect on the dependent variable; it does not specify whether the effect will be positive or negative. The sign for a non-directional hypothesis is ‘≠.’
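The notation above can be made concrete with a small worked example. Assuming a hypothetical study that compares the mean on-field performance of athletes who attend physiotherapy (μ1) with that of athletes who do not (μ2), the three forms look like this:

```latex
% Null hypothesis: physiotherapy makes no difference
H_0 \colon \mu_1 = \mu_2
% Directional alternative: physiotherapy improves performance
H_1 \colon \mu_1 > \mu_2
% Non-directional alternative: the two means differ, direction unspecified
H_1 \colon \mu_1 \neq \mu_2
```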

3. Simple hypothesis

A simple hypothesis is a statement that reflects the relationship between exactly two variables: one independent and one dependent. Consider the example “Smoking is a prominent cause of lung cancer.” The dependent variable, lung cancer, is dependent on the independent variable, smoking.

4. Complex hypothesis

In contrast to a simple hypothesis, a complex hypothesis implies a relationship between multiple independent and dependent variables. For instance, “Individuals who eat more fruits tend to have higher immunity, lower cholesterol, and higher metabolism.” The independent variable is eating more fruits, while the dependent variables are higher immunity, lower cholesterol, and higher metabolism.

5. Associative and causal hypothesis

Associative and causal hypotheses are defined not by how many variables there are, but by the relationship between them. In an associative hypothesis, changing any one variable, dependent or independent, affects the others. In a causal hypothesis, the independent variable directly affects the dependent variable.

6. Empirical hypothesis

Also referred to as the working hypothesis, an empirical hypothesis claims that a theory can be validated through experiments and observation. This way, the statement appears justifiable rather than a wild guess.

Say the hypothesis is “Women who take iron tablets face a lesser risk of anemia than those who take vitamin B12.” This is an example of an empirical hypothesis where the researcher validates the statement after assessing a group of women who take iron tablets and charting the findings.

7. Statistical hypothesis

The point of a statistical hypothesis is to test an already existing hypothesis by studying a population sample. Hypotheses like “44% of the Indian population belongs to the age group of 22–27” leverage evidence to prove or disprove a particular statement.
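A statistical hypothesis like the one above can be checked against sample data with a one-sample proportion test. The sketch below uses only the Python standard library; the survey figures and the function name are invented for illustration:

```python
import math

def proportion_z_test(successes, n, p0):
    """Two-sided one-sample z-test for a population proportion.

    successes: respondents in the sample matching the claim
    n: sample size
    p0: proportion claimed by the null hypothesis
    Returns (z statistic, two-sided p-value).
    """
    p_hat = successes / n
    # Standard error of the sample proportion under the null hypothesis
    se = math.sqrt(p0 * (1 - p0) / n)
    z = (p_hat - p0) / se
    # Two-sided p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical survey: 380 of 1,000 respondents are aged 22-27,
# tested against the claimed population proportion of 44%.
z, p = proportion_z_test(380, 1000, 0.44)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A small p-value here would count as evidence against the claimed 44%; a large one would leave the claim standing.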

Characteristics of a Good Hypothesis

Writing a hypothesis is essential, as it can make or break your research, including your chances of getting published in a journal. So when you're designing one, keep an eye out for these pointers:

  • A research hypothesis has to be simple yet clear to look justifiable enough.
  • It has to be testable — your research is rendered pointless if the hypothesis is too far-fetched or beyond the reach of current technology to verify.
  • It has to be precise about the results — what you are trying to do and achieve through it should come out in your hypothesis.
  • A research hypothesis should be self-explanatory, leaving no doubt in the reader's mind.
  • If you are developing a relational hypothesis, you need to include the variables and establish an appropriate relationship among them.
  • A hypothesis must leave scope for further investigations and experiments.

Separating a Hypothesis from a Prediction

Outside of academia, hypothesis and prediction are often used interchangeably. In research writing, this is not only confusing but also incorrect. And although a hypothesis and a prediction are both guesses at their core, there are many differences between them.

A hypothesis is an educated guess or even a testable prediction validated through research. It aims to analyze the gathered evidence and facts to define a relationship between variables and put forth a logical explanation behind the nature of events.

Predictions are assumptions or expected outcomes made without any backing evidence. They lean more toward speculation, regardless of where they originate.

For this reason, a hypothesis holds much more weight than a prediction. It sticks to the scientific method rather than pure guesswork. “Planets revolve around the Sun” is an example of a hypothesis, as it is based on previous knowledge and observed trends. Additionally, we can test it through the scientific method.

Whereas "COVID-19 will be eradicated by 2030." is a prediction. Even though it results from past trends, we can't prove or disprove it. So, the only way this gets validated is to wait and watch if COVID-19 cases end by 2030.

Finally, How to Write a Hypothesis

Quick tips on writing a hypothesis

1. Be clear about your research question

A hypothesis should instantly address the research question or the problem statement. To do so, you need to ask a question. Understand the constraints of your undertaken research topic and then formulate a simple and topic-centric problem. Only after that can you develop a hypothesis and further test for evidence.

2. Carry out a recce

Once you have your research's foundation laid out, it would be best to conduct preliminary research. Go through previous theories, academic papers, data, and experiments before you start curating your research hypothesis. It will give you an idea of your hypothesis's viability or originality.

Making use of references from relevant research papers helps you draft a good research hypothesis. SciSpace Discover offers a repository of over 270 million research papers to browse through and gain a deeper understanding of related studies on a particular topic. Additionally, you can use SciSpace Copilot, your AI research assistant, to read lengthy research papers and get a summarized context of each. A hypothesis can be formed after evaluating many such summaries. Copilot also explains theories and equations, presents papers in simplified form, lets you highlight any text in a paper or clip math equations and tables, and provides a deeper, clearer understanding of what is being said. This can improve your hypothesis by helping you identify potential research gaps.

3. Create a 3-dimensional hypothesis

Variables are an essential part of any reasonable hypothesis. So, identify your independent and dependent variable(s) and form a correlation between them. The ideal way to do this is to write the hypothetical assumption in the ‘if-then' form. If you use this form, make sure that you state the predefined relationship between the variables.

Alternatively, you can present your hypothesis as a comparison between two variables. Here, you must specify the difference you expect to observe in the results.

4. Write the first draft

Now that everything is in place, it's time to write your hypothesis. For starters, create the first draft. In this version, write what you expect to find from your research.

Clearly separate your independent and dependent variables and the link between them. Don't fixate on syntax at this stage. The goal is to ensure your hypothesis addresses the issue.

5. Proof your hypothesis

After preparing the first draft of your hypothesis, you need to inspect it thoroughly. It should tick all the boxes, like being concise, straightforward, relevant, and accurate. Your final hypothesis has to be well-structured as well.

Research projects are an exciting and crucial part of being a scholar. And once you have your research question, you need a great hypothesis to begin conducting research. Thus, knowing how to write a hypothesis is very important.

Now that you have a firmer grasp on what a good hypothesis constitutes, the different kinds there are, and what process to follow, you will find it much easier to write your hypothesis, which ultimately helps your research.

Now it's easier than ever to streamline your research workflow with SciSpace Discover. Its integrated, comprehensive, end-to-end research platform allows scholars to easily discover, write, and publish their research, and fosters collaboration.

It includes everything you need, including a repository of over 270 million research papers across disciplines, SEO-optimized summaries and public profiles to show your expertise and experience.

If you found these tips on writing a research hypothesis useful, head over to our blog on Statistical Hypothesis Testing to learn about the top researchers, papers, and institutions in this domain.

Frequently Asked Questions (FAQs)

1. What is the definition of a hypothesis?

According to the Oxford dictionary, a hypothesis is defined as “An idea or explanation of something that is based on a few known facts, but that has not yet been proved to be true or correct”.

2. What is an example of a hypothesis?

A hypothesis is a statement that proposes a relationship between two or more variables. An example: "If we increase the number of new users who join our platform by 25%, then we will see an increase in revenue."

3. What is an example of a null hypothesis?

A null hypothesis, written as H0, is a statement that there is no relationship between two variables and no effect. For example, if you're studying whether a particular type of exercise increases strength, your null hypothesis would be "there is no difference in strength between people who exercise and people who don't."

4. What are the types of research?

  • Fundamental research
  • Applied research
  • Qualitative research
  • Quantitative research
  • Mixed research
  • Exploratory research
  • Longitudinal research
  • Cross-sectional research
  • Field research
  • Laboratory research
  • Fixed research
  • Flexible research
  • Action research
  • Policy research
  • Classification research
  • Comparative research
  • Causal research
  • Inductive research
  • Deductive research

5. How to write a hypothesis?

  • Your hypothesis should be able to predict the relationship and outcome.
  • Avoid wordiness by keeping it simple and brief.
  • Your hypothesis should contain observable and testable outcomes.
  • Your hypothesis should be relevant to the research question.

6. What are the two types of hypotheses?

  • Null hypotheses test the claim that "there is no difference between two groups of data".
  • Alternative hypotheses test the claim that "there is a difference between two data groups".

7. Difference between research question and research hypothesis?

A research question is a broad, open-ended question that you will try to answer through your research. A research hypothesis is a statement, based on prior research or theory, that you expect to hold true as a result of your study. Example: Research question: What factors influence the adoption of the new technology? Research hypothesis: There is a positive relationship between age, education, and income level and the adoption of the new technology.

8. What is plural for hypothesis?

The plural of hypothesis is hypotheses. Here's an example of how it would be used in a statement, "Numerous well-considered hypotheses are presented in this part, and they are supported by tables and figures that are well-illustrated."

9. What is the red queen hypothesis?

The red queen hypothesis in evolutionary biology states that species must constantly evolve to avoid extinction because if they don't, they will be outcompeted by other species that are evolving. Leigh Van Valen first proposed it in 1973; since then, it has been tested and substantiated many times.

10. Who is known as the father of null hypothesis?

The father of the null hypothesis is Sir Ronald Fisher. He published a paper in 1925 that introduced the concept of null hypothesis testing, and he was also the first to use the term itself.

11. When to reject null hypothesis?

You need to find a significant difference between your two populations to reject the null hypothesis. You can determine that by running statistical tests such as an independent sample t-test or a dependent sample t-test. You should reject the null hypothesis if the p-value is less than 0.05.
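The decision rule above can be sketched in code. This is a minimal illustration, not a full test: the sample scores are invented, and because Python's standard library has no t-distribution, the statistic is compared against an approximate 5% two-sided critical value rather than an exact p-value.

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples
    (does not assume equal variances)."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances
    se = math.sqrt(va / na + vb / nb)
    return (mean(sample_a) - mean(sample_b)) / se

# Hypothetical strength scores for an exercise group and a control group
exercise = [82, 88, 85, 91, 87, 84, 90, 86]
control  = [75, 78, 74, 80, 77, 73, 79, 76]

t = welch_t(exercise, control)
# With samples this small, |t| greater than roughly 2.1 (the 5% two-sided
# critical value at ~14 degrees of freedom) is grounds to reject H0.
print(f"t = {t:.2f}, reject H0: {abs(t) > 2.1}")
```

A statistics package would report the exact p-value for this statistic; the null hypothesis is rejected when that p-value falls below 0.05.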



FORMULATING AND TESTING HYPOTHESIS

  • In book: Basic Guidelines for Research: An Introductory Approach for All Disciplines (pp. 51-71)
  • Edition: First
  • Publisher: Book Zone Publication, Chittagong-4203, Bangladesh

Syed Muhammad Sajjad Kabir, Curtin University

Abstract figure: Standard Deviation is a Constant Interval from the Mean.



Special Article
Published: 04 July 2018

Normative and conceptual ELSI research: what it is, and why it’s important

  • Lisa S. Parker PhD 1,
  • Pamela L. Sankar PhD 2,
  • Joy Boyer 3,
  • Jean McEwen JD PhD 3 &
  • David Kaufman PhD 3

Genetics in Medicine volume 21, pages 505–509 (2019)


The Ethical, Legal, and Social Implications (ELSI) Research Program of the National Human Genome Research Institute sponsors research examining ethical, legal, and social issues arising in the context of genetics/genomics. The ELSI Program endorses an understanding of research not as the sole province of empirical study, but instead as systematic study or inquiry, of which there are many types and methods. ELSI research employs both empirical and nonempirical methods. Because the latter remain relatively unfamiliar to biomedical and translational scientists, this paper seeks to elucidate the relationship between empirical and nonempirical methods in ELSI research. It pays particular attention to the research questions and methods of normative and conceptual research, which examine questions of value and meaning, respectively. To illustrate the distinct but interrelated roles of empirical and nonempirical methods in ELSI research, including normative and conceptual research, the paper demonstrates how a range of methods may be employed both to examine the evolution of the concept of incidental findings (including the recent step toward terming them ‘secondary findings’), and to address the normative question of how genomic researchers and clinicians should manage such incidental findings.


In 1990, Congress appropriated funds as part of the Human Genome Project to create the Ethical, Legal, and Social Implications (ELSI) Research Program “to foster basic and applied research on the ethical, legal and social implications of genetic and genomic research” ( https://www.genome.gov/elsi/ ). The impact of ELSI research is evident in subsequent scientific and public policy, conduct of genomic research, and implementation of genomic medicine, 1 with ELSI research credited, for example, with changing the way investigators draft consent forms for genomic studies, informing policies and governance mechanisms for biobanks/repositories, advancing understanding of how people perceive risk, clarifying the meaning of race in genomic research, and influencing intellectual property law surrounding genomics.

The recognized value of ELSI research has led to increasing calls for ELSI researchers to collaborate with genomics researchers and even to embed ELSI research within genomic research projects. 1 Moreover, partly as a result of the ELSI Program’s success, the term “ELSI research” has become familiar and indeed is used beyond the context of both National Human Genome Research Institute (NHGRI) and genomics research to describe ethical, legal, and social research in other domains, including synthetic biology 2 (e.g.), neuroscience, 3 nanotechnology, 4 Big Data, 5 and other emerging technologies in information and computing science 6 and diagnostics. 7 Despite the prevalence of ELSI research related to genomics (and its increasing presence regarding other fields), the methods employed in some ELSI research—particularly normative and conceptual research—remain relatively unfamiliar, even opaque, particularly to researchers in the basic biomedical and translational sciences.

In this paper, we explain methods of ELSI research with reference to genetics/genomics research, though we believe the explanation may be of value to scientists in many fields. While several recent efforts have sought to explain the methods and value of ELSI research, for example by devising comprehensive taxonomies that include normative and conceptual research and that clarify its relationship to empirical approaches or to policy development (e.g., 8 , 9 ), we pay particular attention to the nonempirical methods employed to address normative and conceptual questions and to the nature of the questions they address. This paper proceeds from the understanding that “research” is not solely the province of empirical study, but refers more broadly to systematic study or inquiry, of which there are many types and methods. Using as an example the problem of how researchers and clinicians should manage incidental findings of genomic testing, we elucidate the range of methods used in ELSI research. In particular, we demonstrate the relationship of conceptual research—here, focused on the evolution of the meaning of ‘incidental findings’ and the introduction of ‘secondary findings’—to normative research, including the method of argument.

ELSI research on what is and what ought to be

In the classic divide between what is and what ought to be, science seeks to know what is. Science poses and pursues empirical questions (e.g., is there an association between genetic variants on chromosome 7 and cystic fibrosis?) by observation and by empirical methods (e.g., linkage analysis or genome-wide association studies [GWAS]).

ELSI research involves asking questions on both sides of the classic divide between “is” and “ought.” Some ELSI research asks what the implications of genomic research are, and employs empirical research methods to collect data to test hypotheses, evaluate programs, or develop a theory of a phenomenon. Do people receiving genetic test results suffer emotional distress? Does persistence of posttest distress correlate with pretest temperament or traits? The empirical methods used to pursue these ELSI questions are familiar not only to social scientists but to basic biomedical and translational scientists.

However, other questions cannot be addressed solely or primarily by analyzing data, because they ask questions about value and meaning. These value-focused normative questions and meaning-focused conceptual questions require nonempirical research methods, including philosophical and legal analysis (and argument), and the methods of the humanities. Answering questions of value and meaning requires evaluating, among other data, what people think, and then providing reasoned arguments to inform subsequent consideration of the issues, as well as future directions for research.

While scientific research pursues what “is” in terms of observable facts, conceptual research focuses on “is” in the special sense of “what is meant by.” What, for example, is meant by “health,” “race,” or “research”? Of course, one could conduct an empirical study and discover that, for example, 65 of 100 people surveyed think of race as a biological fact, a grouping of people according to shared biological features or physical traits. Conceptual research, however, is not concerned solely with learning what people mean by the concepts they use, but also seeks to understand the origins, variety, and implications of these understandings. Conceptual research frequently offers, and perhaps argues for, alternative conceptualizations. Conceptual research on race may, for example, examine the origins of the concept and argue that use of “racial categories” in other research should not be separated from the genealogy or history of the concept (e.g., 10 ), or argue that race is a socially constructed concept and should not be reified (e.g., 11 , 12 ). Conceptual research may examine the implications, for example, of viewing and using race as a “sorting schema.” 13

Further, in contrast to science’s focus on discovering what “is,” normative research is concerned with “ought” or with what action (or policy or practice) is ethically justified or most appropriate. If a study reveals an incidental finding (e.g., misattributed parentage), should an investigator reveal that finding? If so, to whom? Should the informed consent process specify whether incidental findings will be revealed? If an incidental finding has health implications for other family members, should the law permit or compel a clinician to attempt to inform those family members, perhaps over the objection of the patient tested? Data—about family members’ preferences, different cultural views (for example, regarding paternity), the terms of the informed consent document, studies of psychosocial responses to return of incidental findings, and existing laws—are relevant to addressing these normative questions, because they may help to inform the analysis. However, these facts alone cannot answer questions of what should be done or what a policy or law should be.

Argument: a method of normative research

Normative research seeks to discover, and inform or persuade people, what they ought to do, according to some set of norms or values. These may include ethical, legal, religious, and cultural values. Just as scientific theories may be evaluated and compared in light of desirable features such as their internal consistency, simplicity, explanatory power, and “fit” with other theories, 14 normative research may examine different arguments and sets of values to assess consistency, utility, scope, and fit with other value commitments.

To scientists—or to those steeped in what they understand to be a largely value-free scientific tradition that seeks the truth with an open mind—a research method involving making arguments in support of a claim may appear spurious. Unlike scientists who must not assume what they seek to prove, researchers engaged in normative argument appear to begin with their conclusion. In ethical (and many legal) arguments, this is a value-laden claim (e.g., that X is right, that Y is justified, that Z is fair or beneficial, that people’s preferences are the most important consideration, or that the expectations established during informed consent must be fulfilled). Then normative researchers seek ways of reasoning, as well as empirical evidence, that support that claim.

The most methodologically sound arguments consider counterarguments and do not simply ignore contrary normative claims. Instead they attempt to refute them, perhaps by showing that they embrace inconsistencies, lead to untenable conclusions, or support undesirable practical consequences. They also do not simply ignore empirical evidence that seems not to support the conclusion, but instead explain why that evidence is flawed, not really relevant, misinterpreted, or ambiguous. Responding to counterarguments and contrary evidence in this way can maintain and even strengthen support for the original claim or conclusion.

Both scientific and normative claims are falsifiable, but differently falsifiable. Sometimes the investigator making a normative argument discovers that her initial claim cannot be upheld. She discovers that her argument cannot really be made—or is not as strong—as she anticipated. Perhaps she was partly or completely wrong about the empirical facts or the relationship of ideas, or overlooked some considerations (empirical or normative). Perhaps the argument proved to be contradictory. She may then modify her claim/conclusion, perhaps making it more modest or adding some qualifications or constraints to her position. Or, she may abandon her original conclusion/claim, and pursue another line of reasoning. Just as a hypothesis may not be supported or may be disproved, a normative claim can fail to be supported or be ultimately unsupportable or untenable. Just as in science, such a refutation or negative finding may still advance basic understanding or suggest new avenues of inquiry or claims to be examined.

In the absence of, or prior to normative ELSI research on a topic, people—scientists, clinicians, the public, and ELSI researchers themselves—may have more or less well-grounded opinions, for example, about which incidental findings should be offered to research participants or patients. Normative research lays out the range of possible opinions, indicates which are more strongly supported than others, and establishes consistency among well-grounded opinions. People, including ELSI researchers, may still disagree about particular issues or frameworks for considering issues, but basic conceptual and normative research advances our understanding and allows the arguments between disagreeing parties and positions to become more clearly focused on remaining points of disagreement while acknowledging common ground. We—both those engaged in ELSI research and those who are the audience for it—move toward ever more nuanced arguments. Along the way, ELSI investigators frequently find points of overlap and areas where compromise of apparently opposing positions can be justified to those on both sides of an argument (e.g., 15 ). Policies and practices supported by strong ethical arguments may be considered the applications of basic normative research, much as basic science is translated into or applied in evidence-based medicine.

Then, another wave of empirical ELSI research may undertake to evaluate these policies and practices—for example, surveying the public’s acceptance of a policy implemented regarding the return of incidental findings of various types, or studying the psychosocial response to the return of such findings. In an iterative process, these empirical findings can then be used to inform future normative analyses, which in turn are used to shape future policies and practices.

Using as an example the question of how genomic researchers and clinicians should manage incidental findings, we illustrate the distinct but interrelated roles in ELSI research of empirical and nonempirical methods, including normative and conceptual research.

ELSI research regarding the management of genomic incidental findings

Prior to and during the Human Genome Project, incidental findings in genetics were often conceptualized as unanticipated or unexpected findings. 16 Misattributed paternity discovered by clinicians in the course of carrier testing or by researchers in family studies, as well as a growing understanding (and number of cases) of pleiotropy, were the most frequently noted examples. 17 Indeed, such findings were discovered with sufficient frequency that they were far from unexpected in the field, even if they were unexpected/unanticipated on the part of particular individuals and families. As management of incidental findings of genomic research became a more pressing concern, the need emerged for conceptual analysis about the very concept of “incidental findings,” a term that was borrowed from diagnostic and research uses of imaging technologies. 18 Could they be conceptualized as unanticipated or unexpected findings if investigators and clinicians were being urged to plan for their discovery? Conceptual analysis suggested that explaining them as “a finding concerning an individual research participant that has potential health or reproductive significance and is discovered in the course of conducting research but is beyond the aims of the study” 19 was more helpful to the normative project of recommending that investigators plan for their management.

With increasing recognition that genomic technologies—especially genome or exome sequencing—give rise to incidental findings, it became clear that investigators and clinicians needed to plan to manage these findings, i.e., to decide whether and which findings to disclose to research participants and patients, and how to disclose them. 20 These normative questions about what ought to be done cannot be addressed by empirical data alone; assessing people’s preferences regarding return of incidental findings, for example, cannot answer the question of whether the findings should be returned or under what conditions. Although data about people’s preferences are relevant to drafting policies about return of incidental findings, the normative policy question (the “ought”) must also be informed by normative (value-based) analysis of why and how much preferences should matter for policy. 2 Moreover, given people’s differing preferences, a normative position must be taken on whose preferences should matter or matter most. Which stakeholders’ preferences should be given most weight: those whose samples actually yield incidental findings, all those involved in the study who might have incidental findings discovered, or professionals (investigators or clinicians)?

More recently, it has been argued that there is an ethical obligation to search intentionally for such findings when performing clinical sequencing for a primary diagnostic question. 21 This normative stance has led to further conceptual analysis criticizing the very idea of an incidental finding and supporting reconceptualization of these findings as secondary findings: findings of “a deliberate search for pathogenic or likely pathogenic alterations in genes that are not apparently relevant to a diagnostic indication for which the sequencing test was ordered.” 3 The change is not merely in terminology; it marks a shift in normative stance among some ethicists, genomicists, and clinicians. Even those who disagree—i.e., who do not endorse there being an obligation to search for or offer back such findings—tend to embrace the concept of secondary findings, if only to communicate effectively with those with whom they disagree.

ELSI research employs both empirical and nonempirical research methods. Conceptual research increases conceptual clarity by examining what is meant by various terms and ideas, and arguing that some understandings are better justified than others. Ethical, social, and legal issues—and research on those issues—frequently have conceptual components whose meaning may be contested. Conceptual research may begin by mapping out and comparing possible meanings, but usually makes an argument that there are good reasons to conceptualize something—e.g., race, incidental findings, preferences—in a particular way. Its conclusion is that, for the reasons specified, something should be understood to have a particular meaning or should be conceptualized in a particular way.

After a requisite degree of conceptual clarity has been achieved, normative research may map out a range of possible courses of action or reasonable positions to take in regard to a question of value, i.e., a question about what should be valued or what should be done. Normative research then seeks to evaluate these options—usually with the goal of guiding action.

Normative research proceeds by making arguments that are grounded in value commitments, supported by empirical evidence, and assessed in terms of their consistency, utility, scope, and fit with other value commitments and empirical data. Normative research may study, for example, what role people’s preferences should play in determining whether incidental/secondary findings should be offered to participants in genomic research. According to some normative arguments, people’s preferences should be decisive or at least should be given great weight. 22 , 23 Other arguments are made, however, that people’s preferences are only one consideration among many, such as the economic costs and psychological impact of return, the result’s clinical utility, the impact of return on healthcare utilization, and effects on therapeutic misconception, public understanding of research, and researcher–participant relationships. 23 , 25 , 26 Normative analysis must be invoked to address the normative question of which data should be brought to bear on policy development, and how. While empirical ELSI research can, for example, survey people’s preferences, measure healthcare utilization, and elucidate any constraints imposed by existing law, normative research provides arguments about how to weigh these considerations.

ELSI research employs both empirical and nonempirical research methods. Conceptual research increases conceptual clarity by examining what is meant by various terms and ideas, and arguing that some understandings are better justified than others. Normative research proceeds by making arguments that are grounded in value commitments, supported by empirical evidence, and assessed in terms of their consistency, utility, scope, and fit with other value commitments and empirical data. Just as scientific research that seeks to discover facts and explain phenomena does not result in “once and for all,” immutable accounts of what “is,” ELSI research yields guidance for action that also evolves as conditions change, new information is discovered, and better arguments are made. 27

McEwen JE, Boyer JT, Sun KY, et al. The Ethical, Legal, and Social Implications Program of the National Human Genome Research Institute: reflections on an ongoing experiment. Annu Rev Genomics Hum Genet . 2014;15:481–505.

Ancillotti M, Rerimassie V, Seitz SB, Steurer W. An update of public perceptions of synthetic biology: still undecided? NanoEthics. 2016;10:309–325.

Green RM. The Need for a Neuroscience ELSI Program. Hastings Center Report 2014;44(4): inside back cover. https://doi.org/10.1002/hast.333 . Accessed 9 June 2018.

Nakagawa Y, Shiroyama H, Kuroda K, Suzuki T. Assessment of social implications of nanotechnologies in Japan: Application of problem structuring method based on interview surveys and cognitive maps. Technological Forecasting and Social Change 2010;77:615-638.

Tractenberg RE. Creating a culture of ethics in biomedical big data: adapting ‘guidelines for professional practice’ to promote ethical use and research practice. In: The Ethics of Biomedical Big Data. Cham: Springer; 2016. p. 367–393.

Michelfelder DP. Dirty hands, speculative minds, and smart machines. Philosophy & Technology. 2011;24:55–68.

Lucivero F. The promises of emerging diagnostics: from scientists’ visions to the laboratory bench and back. In: Ethics on the Laboratory Floor. London: Palgrave Macmillan; 2013. p. 151–167.

Mathews DJ, Hester DM, Kahn J, et al. A conceptual model for the translation of bioethics research and scholarship. Hastings Cent Rep. 2016;46:34–39.

Ives J, Draper H. Appropriate methodologies for empirical bioethics: it’s all relative. Bioethics. 2009;23:249–58. doi:10.1111/j.1467-8519.2009.01715.x

Hirschman C. The origins and demise of the concept of race. Popul Dev Rev . 2004;30:385–415.

Duster T. Race and reification in science. Science . 2005;307(5712):1050–1.

Duster T. A post-genomic surprise. The molecular reinscription of race in science, law and medicine. Br J Sociol . 2015;66:1–27.

Fullwiley D. Race, genes, power. Br J Sociol . 2015;66:36–45.

Kuhn TS. The essential tension: selected studies in scientific tradition and change. Chicago: University of Chicago Press; 1977.

Kaphingst KA, Ivanovich J, Biesecker BB, et al. Preferences for return of incidental findings from genome sequencing among women diagnosed with breast cancer at a young age. Clin Genet . 2016;89:378–84.

Wolf SM. The challenge of incidental findings. J Law Med Ethics . 2008;36:7–9.

National Institutes of Health, Office for Protection from Research Risks (OPRR). Human Genetic Research, Institutional Review Board Guidebook. 1993. https://www.genome.gov/10001752/protecting-human-research-subjects-guide/ . Accessed 9 June 2018.

Kohane IS, Masys DR, Altman RB. The incidentalome: a threat to genomic medicine. JAMA . 2006;296:212–5.

Wolf SM, Lawrenz FP, Nelson CA, et al. Managing incidental findings in human subjects research: analysis and recommendations. J Law Med Ethics . 2008;36:219–48.

Parker LS. Returning individual research results: what role should people’s preferences play? Minn J Law Sci Technol . 2012;13:449–84.

Green RC, Berg JS, Grody WW, et al. ACMG recommendations for reporting of incidental findings in clinical exome and genome sequencing. Genet Med . 2013;15:565–74.

Ploug T, Holm S. Clinical genome sequencing and population preferences for information about ‘incidental’ findings—From medically actionable genes (MAGs) to patient actionable genes (PAGs). PLoS ONE . 2017;12:e0179935 https://doi.org/10.1371/journal.pone.0179935

Ossorio P. Taking aims seriously: repository research and limits on the duty to return individual research findings. Genet Med . 2012;14:461–6.

Beskow LM, Burke W. Offering individual genetic research results: context matters. Sci Transl Med . 2010;2:38cm20. doi:10.1126/scitranslmed.3000952.

Wynn J, Martinez J, Duong J, et al. Association of researcher characteristics with views on return of incidental findings from genomic research. J Genet Couns . 2015;24:833–41.

Klitzman R, Buquez B, Appelbaum PS, et al. Processes and factors involved in decisions regarding return of incidental genomic findings in research. Genet Med . 2014;16:311–7.

Murphy BJ, Bridges JF, Mohamed A, Kaufman D. Public preferences for the return of research results in genetic research: a conjoint analysis. Genet Med . 2014;16:932–9.

Acknowledgements

The authors wish to acknowledge their fruitful discussions regarding normative and conceptual ELSI research with members of the National Human Genome Research Institute’s Genomics and Society Working Group ( https://www.genome.gov/27551917/the-genomics-and-society-working-group/ ).

Author information

Authors and Affiliations

Center for Bioethics & Health Law, University of Pittsburgh, 519 Barco Law Building, 3900 Forbes Ave, Pittsburgh, Pennsylvania, 15260, USA

Lisa S. Parker PhD

Medical Ethics and Health Policy, University of Pennsylvania, Philadelphia, Pennsylvania, 19104, USA

Pamela L. Sankar PhD

NHGRI ELSI Program Director, National Institutes of Health, Building 31, Room 4B09 31 Center Drive, MSC 2152, 9000, Bethesda, Maryland, 20892-2152, USA

Joy Boyer, JD Jean McEwen PhD & David Kaufman PhD

Corresponding author

Correspondence to Lisa S. Parker PhD .

Ethics declarations

The authors declare no conflicts of interest.

About this article

Cite this article.

Parker, L.S., Sankar, P.L., Boyer, J. et al. Normative and conceptual ELSI research: what it is, and why it’s important. Genet Med 21 , 505–509 (2019). https://doi.org/10.1038/s41436-018-0065-x

Received: 13 December 2017

Accepted: 04 May 2018

Published: 04 July 2018

Issue Date: February 2019

DOI: https://doi.org/10.1038/s41436-018-0065-x

  • ELSI research
  • incidental findings
  • normative issues
  • research methods
  • values and meaning

How to Write a Strong Hypothesis | Steps & Examples

Published on May 6, 2022 by Shona McCombes. Revised on November 20, 2023.

A hypothesis is a statement that can be tested by scientific research. If you want to test a relationship between two or more variables, you need to write hypotheses before you start your experiment or data collection.

Example: Hypothesis

Daily apple consumption leads to fewer doctor’s visits.

Table of contents

  • What is a hypothesis
  • Developing a hypothesis (with example)
  • Hypothesis examples
  • Other interesting articles
  • Frequently asked questions about writing hypotheses

A hypothesis states your predictions about what your research will find. It is a tentative answer to your research question that has not yet been tested. For some research projects, you might have to write several hypotheses that address different aspects of your research question.

A hypothesis is not just a guess – it should be based on existing theories and knowledge. It also has to be testable, which means you can support or refute it through scientific research methods (such as experiments, observations and statistical analysis of data).

Variables in hypotheses

Hypotheses propose a relationship between two or more types of variables.

  • An independent variable is something the researcher changes or controls.
  • A dependent variable is something the researcher observes and measures.

If there are any control variables, extraneous variables, or confounding variables, be sure to jot those down as you go to minimize the chances that research bias will affect your results.

Example: Daily exposure to the sun leads to increased levels of happiness. In this example, the independent variable is exposure to the sun – the assumed cause. The dependent variable is the level of happiness – the assumed effect.

Step 1. Ask a question

Writing a hypothesis begins with a research question that you want to answer. The question should be focused, specific, and researchable within the constraints of your project.

Step 2. Do some preliminary research

Your initial answer to the question should be based on what is already known about the topic. Look for theories and previous studies to help you form educated assumptions about what your research will find.

At this stage, you might construct a conceptual framework to ensure that you’re embarking on a relevant topic. This can also help you identify which variables you will study and what you think the relationships are between them. Sometimes, you’ll have to operationalize more complex constructs.

Step 3. Formulate your hypothesis

Now you should have some idea of what you expect to find. Write your initial answer to the question in a clear, concise sentence.

Step 4. Refine your hypothesis

You need to make sure your hypothesis is specific and testable. There are various ways of phrasing a hypothesis, but all the terms you use should have clear definitions, and the hypothesis should contain:

  • The relevant variables
  • The specific group being studied
  • The predicted outcome of the experiment or analysis

Step 5. Phrase your hypothesis in three ways

To identify the variables, you can write a simple prediction in if…then form. The first part of the sentence states the independent variable and the second part states the dependent variable. For example: If a first-year student attends more lectures, then their final exam scores will improve.

In academic research, hypotheses are more commonly phrased in terms of correlations or effects, where you directly state the predicted relationship between variables. For example: The number of lectures attended by first-year students has a positive effect on their final exam scores.

If you are comparing two groups, the hypothesis can state what difference you expect to find between them. For example: First-year students who attend more lectures will achieve higher final exam scores than those who attend fewer lectures.

Step 6. Write a null hypothesis

If your research involves statistical hypothesis testing, you will also have to write a null hypothesis. The null hypothesis is the default position that there is no association between the variables. The null hypothesis is written as H0, while the alternative hypothesis is H1 or Ha.

  • H0: The number of lectures attended by first-year students has no effect on their final exam scores.
  • H1: The number of lectures attended by first-year students has a positive effect on their final exam scores.
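To make the H0/H1 distinction concrete, here is a minimal sketch of testing the lecture-attendance hypothesis in plain Python. The scores below are invented purely for illustration (a real study would use collected data, and in practice you would typically use a standard test such as a t-test from a statistics package); this sketch uses a permutation test so that it needs only the standard library.

```python
import random
from statistics import mean

# Hypothetical final exam scores, invented for illustration only.
high_attendance = [78, 85, 82, 90, 74, 88, 81, 79]  # attended most lectures
low_attendance = [70, 65, 72, 68, 75, 71, 66, 73]   # attended few lectures

observed_diff = mean(high_attendance) - mean(low_attendance)

# Permutation test: under H0 the group labels are interchangeable, so we
# shuffle the pooled scores many times and count how often a difference at
# least as large as the observed one arises by chance alone.
random.seed(0)  # fixed seed so the sketch is reproducible
pooled = high_attendance + low_attendance
n = len(high_attendance)
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    if mean(pooled[:n]) - mean(pooled[n:]) >= observed_diff:
        extreme += 1

p_value = extreme / trials
print(f"observed difference: {observed_diff:.2f}, p = {p_value:.4f}")
# A small p-value means the data would be surprising if H0 were true,
# which counts as evidence for H1; a large p-value fails to reject H0.
```

The permutation test is chosen here because it implements the null hypothesis literally: if attendance has no effect, reshuffling who counts as "high attendance" should produce differences as large as the observed one reasonably often.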
Research question | Hypothesis | Null hypothesis
What are the health benefits of eating an apple a day? | Increasing apple consumption in over-60s will result in decreasing frequency of doctor’s visits. | Increasing apple consumption in over-60s will have no effect on frequency of doctor’s visits.
Which airlines have the most delays? | Low-cost airlines are more likely to have delays than premium airlines. | Low-cost and premium airlines are equally likely to have delays.
Can flexible work arrangements improve job satisfaction? | Employees who have flexible working hours will report greater job satisfaction than employees who work fixed hours. | There is no relationship between working hour flexibility and job satisfaction.
How effective is high school sex education at reducing teen pregnancies? | Teenagers who received sex education lessons throughout high school will have lower rates of unplanned pregnancy than teenagers who did not receive any sex education. | High school sex education has no effect on teen pregnancy rates.
What effect does daily use of social media have on the attention span of under-16s? | There is a negative correlation between time spent on social media and attention span in under-16s. | There is no relationship between social media use and attention span in under-16s.

If you want to know more about the research process, methodology, research bias, or statistics, make sure to check out some of our other articles with explanations and examples.

  • Sampling methods
  • Simple random sampling
  • Stratified sampling
  • Cluster sampling
  • Likert scales
  • Reproducibility

 Statistics

  • Null hypothesis
  • Statistical power
  • Probability distribution
  • Effect size
  • Poisson distribution

Research bias

  • Optimism bias
  • Cognitive bias
  • Implicit bias
  • Hawthorne effect
  • Anchoring bias
  • Explicit bias

A hypothesis is not just a guess — it should be based on existing theories and knowledge. It also has to be testable, which means you can support or refute it through scientific research methods (such as experiments, observations and statistical analysis of data).

Null and alternative hypotheses are used in statistical hypothesis testing. The null hypothesis of a test always predicts no effect or no relationship between variables, while the alternative hypothesis states your research prediction of an effect or relationship.

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses, by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.

Cite this Scribbr article

McCombes, S. (2023, November 20). How to Write a Strong Hypothesis | Steps & Examples. Scribbr. Retrieved September 13, 2024, from https://www.scribbr.com/methodology/hypothesis/


  • Open access
  • Published: 04 April 2015

The normative background of empirical-ethical research: first steps towards a transparent and reasoned approach in the selection of an ethical theory

  • Sabine Salloch 1 ,
  • Sebastian Wäscher 1 ,
  • Jochen Vollmann 1 &
  • Jan Schildmann 1  

BMC Medical Ethics volume 16, Article number: 20 (2015)

Empirical-ethical research constitutes a relatively new field which integrates socio-empirical research and normative analysis. As direct inferences from descriptive data to normative conclusions are problematic, an ethical framework is needed to determine the relevance of the empirical data for normative argument. While issues of normative-empirical collaboration and questions of empirical methodology have been widely discussed in the literature, the normative methodology of empirical-ethical research has seldom been addressed. Based on our own research experience, we discuss one aspect of this normative methodology, namely the selection of an ethical theory serving as a background for empirical-ethical research.

Whereas criteria for a good ethical theory in philosophical ethics are usually related to inherent aspects, such as the theory’s clarity or coherence, additional points have to be considered in the field of empirical-ethical research. Three of these additional criteria will be discussed in the article: (a) the adequacy of the ethical theory for the issue at stake, (b) the theory’s suitability for the purposes and design of the empirical-ethical research project, and (c) the interrelation between the ethical theory selected and the theoretical backgrounds of the socio-empirical research. Using the example of our own study on the development of interventions which support clinical decision-making in oncology, we will show how the selection of an ethical theory as a normative background for empirical-ethical research can proceed. We will also discuss the limitations of the procedures chosen in our project.

The article stresses that a systematic and reasoned approach towards theory selection in empirical-ethical research should be given priority over an accidental or implicit way of choosing the normative framework for one’s own research. It furthermore shows that the overall design of an empirical-ethical study is a multi-faceted endeavor which has to strike a balance between theoretical and pragmatic considerations.

Empirical-ethical research constitutes a relatively new field of enquiry which is characterized by the fact that socio-empirical research and ethical analysis are integrated for the treatment of concrete moral questions in modern medicine [ 1 ]. A broad variety of methodologies for empirical-ethical research has been suggested in recent years [ 2 - 4 ] and has been applied to concrete studies [ 5 - 7 ]. In this article, the argumentative structure upon which empirical-ethical research is based will be understood as “mixed judgments”, which contain both normative and descriptive or prognostic propositions ([ 8 , p. 9]). Regarding the methodology of empirical-ethical research, all the different aspects of this argumentative structure should be considered: the justification and origin of the normative premises, the development of the empirical premises and the integration of both into an ethical judgment. However, not all of these parts are currently addressed in the literature on empirical-ethical research to the same extent.

The question of normative-empirical interaction , i.e. the interplay between empirical data and normative elements, has been extensively investigated in recent years [ 9 - 11 ]. Methodological questions related to the empirical research which forms part of empirical-ethical studies have also been debated to a considerable extent [ 12 , 13 ]. By contrast, the normative methodology of empirical-ethical research remains rather underexposed so far. Therefore, this article focuses on the normative side of empirical-ethical research methodology and aims to shed light on one particular aspect: based on our experience, we will make a suggestion of how to proceed in the selection of a normative background for a concrete empirical-ethical study. Our own ETHICO project (“Empirical-Ethical Interventions in Oncology”), which forms a multistep empirical-ethical study for the development of interventions supporting clinical decision-making, made us aware of key aspects which are relevant for the selection of a normative background.

In the first section of this article, we discuss several meta-theoretical preconditions which underlie the idea of theory selection in ethics. We then address the consequences which emanate from the pluralism of ethical theories for philosophical ethics and for the applied ethics domain. Subsequently, we will present three criteria which we encountered as relevant in our own empirical-ethical study: the adequacy of the ethical theory for the issue at stake, the suitability of the theory for the purposes and design of the empirical-ethical research project, and the interrelation between the ethical theory selected and the theoretical backgrounds of the socio-empirical research. These criteria will be illustrated by reference to the ETHICO project, and the limitations of our theory selection will be discussed. The article closes with a short summary of the main points developed.

Our main aim in this article is to provide, based on our own experience, a first suggestion of how to develop a strategy for utilizing normative-ethical theories in empirical-ethical research. However, the topic of theory selection in empirical-ethical research touches on a number of fundamental problems in ethical theory and the philosophy of science, such as whether the plurality of normative-ethical theories can be reduced to one overarching approach [ 14 ] or whether a rational selection between scientific theories is possible at all [ 15 , 16 ]. These and other challenges can only be dealt with superficially in this article. We do not aim to provide answers to these highly controversial issues, but to stimulate and enhance the current research practice of empirical-ethical studies.

Meta-theoretical considerations

The question of how to determine, justify and make the normative framework explicit for empirical-ethical research is a crucial one because direct inferences from descriptive data to normative conclusions are problematic for theoretical, methodological and pragmatic reasons [ 17 ]. Furthermore, depending on the normative background chosen, the impact of the empirical data for ethical judgment differs with regard to which type of data is needed and how it is processed within normative deliberation. There are different types of normative background which could be principally considered when conducting empirical-ethical research. Researchers can, for example, refer to a common morality or they can build on their private moral opinion. The potential of philosophical-ethical theory for concrete questions in medical ethics has been doubted for various reasons, such as a perceived lack of practical usefulness or problems arising from morally pluralistic societies [ 18 , 19 ]. However, the authors of this article appreciate the potential of philosophical-ethical theories to be utilized for empirical-ethical research. One reason for this position lies in the idea that philosophical-ethical theories (in contrast to theories which have a descriptive or explanatory character) do not primarily aim to fit the world as it is, but to guide human agency [ 20 ]. Due to this reverse “direction of fit” of philosophical-ethical theories compared to other types of theory, the question of the justification or well-foundedness of the theory is of particular importance in ethics. Ethical theories are usually based on elaborated accounts of normative justification. Building on such theories permits an external critical evaluation of the moral issues at stake. By contrast, only referring to the “lived morality” of stakeholders’ moral experiences and beliefs is fraught with the danger of perpetuating wrongful practices. In this respect, we see the benefit of utilizing philosophical-ethical theories for empirical-ethical research.

This article should generally be understood as a plea for a more transparent and reasoned approach in selecting an ethical-theoretical background for empirical-ethical research. Consequently, we will start from the assumption that there is a plurality of coexisting normative-ethical theories which could be applied to a concrete issue [ 21 ]. In contrast to philosophy of science, where skepticism towards a universal theory of justification has been widespread since the beginning of the 20th century, the idea of a comprehensive theory is still alive in ethical theory ([ 22 ], p. 312 f.). However, a generally accepted overarching ethical meta-theory, integrating all accounts without losing their specific perspectives, is not available. Following Julian Nida-Rümelin’s coherentist approach, the plurality of normative-ethical accounts mirrors the variety within our actual moral thinking ([ 22 ]; p. 314 f.). In our everyday moral judgments, we operate with a diversity of moral concepts and criteria, such as rights, duties and principles. Theory selection in empirical-ethical research, along these lines, can be understood as the question of what aspect of moral thinking should have priority in the current discussion. While different theoretical accounts could be applied similarly to a certain subject, there are also reasons why one theory might be a better fit than another. Our article tries to elucidate some of these reasons and to make a suggestion of how to proceed explicitly and deliberately in theory selection.

The issue of theory selection is not a matter of rational argument alone. In research practice, personal and biographical factors and pragmatic considerations of acceptance in the scientific community have a strong influence on researchers’ decisions regarding which ethical theory to choose. Reflection on the researchers’ own socio-cultural embeddedness is, thus, crucial for dealing with conflicts of interest and biases in bioethical research [ 23 ]. In the context of this article, we would like to stress the need to develop a critical stance towards one’s own ethical-theoretical commitments based on systematic criteria for theory selection. Along these lines, we will start by discussing the consequences which arise from the plurality of normative-ethical theories for philosophical ethics and applied ethics.

Dealing with the pluralism of ethical theories

In philosophical ethics, the plurality of normative theories does not usually lead to major practical problems. On the contrary, within the ethical-theoretical sphere, the diversity of accounts of normative justification often serves as a reference point for fruitful discussions about ethical concepts and the general nature of morality. There is also intensive reflection in philosophy on the interrelation between different types of theory, such as contractualist and consequentialist accounts. In general, a large number of current discussions in philosophical ethics are (alongside meta-ethical topics) dedicated to issues of discussing, modifying and combining divergent accounts of normative justification.

This situation of (more or less) harmonious coexistence changes when we leave the theoretical realm and enter the field of applied ethics. The relationship between the emerging branch of empirical-ethical research and the more traditional idea of “applied ethics” is ambiguous. In this article, the term “applied ethics” is used in a rather broad sense and is not restricted to one particular methodology or to so-called “top-down” approaches ([ 24 ], p. 321 ff.). Empirical-ethical research will, therefore, be regarded as one way of working in applied ethics. Applied ethics (including empirical-ethical research) is supposed to deal with concrete ethical problems. If normative solutions are provided, this often has far-reaching consequences for society and the future of individuals. Using different ethical theories as starting points can lead to divergent answers to concrete ethical problems [ 24 , 25 ]. A main difference can be observed between consequentialist theories and theories arguing on a deontological basis, for example, by referring to human dignity. These approaches often result in divergent normative evaluations, for example with regard to the question of how much protection must be given to early forms of human life. Hence, in the sphere of applied ethics it often makes a great difference whether a certain problem is treated against one ethical-theoretical background or another. Or, as Konrad Ott puts it: “If you are unlucky, you will catch an adherent of Singer, Tooley or colleagues when you are a disabled infant, or as an asylum seeker, somebody who defends a mixture of ‘hard’ communitarianism and evolutionary ethics” ([ 26 ], p. 73; own translation).

The selection of an ethical theory underlying one’s own research is, thus, a crucial factor which influences the outcome of applied ethics, including empirical-ethical research. Unfortunately, the topic is rarely addressed in textbooks or introductory seminars on applied ethics. Philosophical-ethical theories are much more often presented and discussed without much explanation of how they relate to one another or of how to select one for the treatment of a concrete ethical issue [ 27 - 29 ]. It is also rather uncommon for authors writing in applied ethics to declare explicitly why they feel committed to a specific ethical theory ([ 30 ], p. 62). In some instances, this may be an issue of the author’s individual preferences or a byproduct of their academic socialization. However, in light of the practical relevance of work in applied ethics, the selection of a normative background is far from being of mere scholarly interest. Instead, it is closely linked to the researcher’s commitment to diligence and transparency and, thus, can be regarded as a core aspect of the ethics of carrying out ethics research.

Criteria for theory selection: theoretical considerations and their application in the ETHICO project

How, then, can the selection of a normative background proceed with regard to empirical-ethical research? In the following, three criteria for theory selection which we encountered as relevant in our own project will be discussed. The ETHICO project aims at the development of interventions which support clinical decision-making on an empirical-ethical basis. It has a double-tracked structure encompassing qualitative empirical research as well as normative analysis. The interrelation between the descriptive and normative aspects and theories applied is of central interest within the project. The ETHICO project has a model character and takes place in the oncology department of one German hospital [ 31 ]. It consists of six stages which range from the identification of ethical problems to the development, implementation and evaluation of an intervention to support clinical decision-making. The general structure of the project can be seen below:

The ETHICO project (“Empirical-ethical interventions in oncology”)

Main features:

→ development of interventions to support clinical decision-making in oncology

→ empirical-ethical methodology

Stages:

1. “Preparation”: development of a framework for the empirical-ethical intervention project

2. “Exploration”: qualitative research to identify ethical challenges in the context of an oncologic department

3. “Deliberation”: development of an empirically and normatively founded solution for the ethical problems identified

4. “Development”: development of empirical-ethical interventions to support the decision-making

5. “Intervention”: implementation of the interventions (communication guidance for the ward team, leaflet with questions for the patients)

6. “Evaluation”: evaluation of the intervention and the overall study concept

The requirements for the selection of a normative background in the ETHICO project differ from both philosophical ethics and “general” applied ethics. In philosophical ethics, normative theories are typically understood as systematic accounts regarding the question of what constitutes morally right or wrong actions, or – in an evaluative sense – good or bad human conduct ([ 32 ], p. 1). Criteria for the quality of an ethical theory are mainly related to aspects which are inherent to the theory itself, such as clarity, coherence and simplicity ([ 33 ], p. 352 ff.). These criteria are not only valid for ethical theory, but are also well-known from the philosophy of science regarding other fields of scientific enquiry. A main focus in philosophical ethics, thus, lies on the well-foundedness and acceptability of a normative-ethical account, while other, more practice-related requirements for an ethical theory are only rarely discussed.

When we enter the field of applied ethics, additional criteria become relevant regarding the question of what constitutes a “good ethical theory.” Applied ethics, as a practice-related discipline, is supposed to deliver answers and support with regard to concrete and often urgent ethical issues, such as challenges arising from new technologies or societal developments. Applied ethics is, thus, expected to provide normative orientation which is helpful in practice. To develop such action-guiding potential, the normative theory applied needs, for example, to be compatible with the stakeholders’ actual moral thinking and behavior ([ 24 ], p. 325). Therefore, not only the theory’s acceptability (in the sense of its well-foundedness), but also its factual acceptance by the stakeholders has to be considered. Hence, ethical theories in applied ethics – more than in the “pure” philosophical realm – gain the character of “problem-serving tools”. This leads to additional, more pragmatic requirements becoming relevant compared to the use of theories in philosophical ethics [ 34 ].

In the specific interdisciplinary field of empirical-ethical research, of which the ETHICO project forms part, even more facets must be considered in the selection of a normative background. Socio-empirical research in ethics serves a variety of different aims, ranging from the mere description of ethically relevant attitudes, through normative evaluations of the respective practices, to intervening in practice to modify people’s behavior [ 9 , 35 ]. Furthermore, empirical-ethical research forms a particularly intricate research field, as the demands of empirical research methodology have to be integrated with criteria for a good normative-ethical analysis [ 36 ]. In the following, we will discuss three relevant requirements which we encountered in our own empirical-ethical study. These aspects do not provide an exhaustive list of all relevant considerations; other aspects are also important, such as the researchers’ competence to deal with certain theoretical concepts or the acceptance of the theory’s results by the stakeholders involved. The three points discussed can, thus, be seen as main examples of what has to be considered when selecting the normative background for an empirical-ethical research project. All three aspects will first be presented in a more abstract manner. Subsequently, the concrete outcome of theory selection in the ETHICO project will be briefly sketched.

The adequacy of the ethical theory for the issue at stake has to be considered in the selection of a theoretical background

An ethical theory which is selected to underlie an empirical-ethical research project has to fit the study’s thematic subject. Subject here means the practical problems and the empirical context to which the study is directed. This idea of a fit between an ethical theory and its subject needs further explanation: most philosophical-ethical theories are – by their own self-understanding – not restricted to a specific field of practice, but are supposed to guide human action in general ([ 26 ], p. 73). However, normative concepts and principles contain descriptive elements [ 37 ] in referring to aspects of human life, such as happiness, preferences and moral experience. Hence, empirical characteristics form part of normative premises which are integrated into ethical judgment. It can, therefore, be argued that an ethical theory fits a certain subject if it is in line with salient features of the issue under discussion. Ethical theories can also fail in this respect, by not matching particular problems. Utilitarianism, for example, focuses on particular aspects of social reality (such as benefit and harm), but does not capture other aspects which are of equal importance for understanding the moral phenomenon under discussion. A utilitarian account, therefore, might not adequately address ethical questions about caring relations at the end of life.

A second line of argument which is important for the fit between an ethical theory and the issue at stake is the more pragmatic consideration of an alignment between the ethical-theoretical background and the stakeholders’ actual moral deliberation and behavior, which form part of the moral phenomenon observed. If the ethical theory applied does not fit the stakeholders’ actual moral thinking, ethical interventions might not be successful ([ 24 ], p. 321). Therefore, the ethical theory selected also has to be in line with the lived normativity already embedded in practice.

It is also important to take into consideration that theories which, at first glance, do not seem to fit a certain subject very well bear the potential of shedding a new and fresh light on the issue at stake, especially if they highlight aspects which might have been overlooked without that specific theoretical perspective. Using a virtue ethics framework for questions of distributive justice, for example, might not be the most obvious option, but it can help to gain a new and broader understanding of the issue. The capability approach, for instance, rests on the idea of human thriving and, therefore, goes beyond “traditional” approaches to distributive justice which focus on personal utility, negative freedoms or comparisons of resource holdings [ 38 ].

Considering the relationship between an ethical theory and the specific research topic is relevant for applied ethics in general. However, it occurs in a specific form with respect to empirical-ethical research. Empirical-ethical research bears the characteristic that many features of the issue at stake are not well-known prior to the empirical research, because a better knowledge of the empirical characteristics is one of the expected outcomes. The fit of a theoretical background, therefore, often cannot be fully evaluated ex ante in empirical-ethical research. How, then, can the matching between the study’s subject and the ethical theory proceed in empirical-ethical studies? In principle, different strategies could be applied here. A first option would be to determine the ethical-theoretical background before starting the empirical research. As – depending on the theory chosen – different types of data are needed (e.g. a preference utilitarian needs different empirical data for the ethical analysis compared to a virtue theorist [ 17 ]), the choice of a specific theory has a strong influence on the selection and processing of empirically gathered data. This would lead to a preponderance of ethical theory which might contradict the methodological requirements and aims of social science research – especially in the case of qualitative social research. A second strategy would, thus, be to conduct the empirical research first and then designate an ethical theory for normative evaluation. However, going down this road would miss the potential of ethical theories for gaining a more comprehensive understanding of an empirical issue. Therefore, the third (and preferable) strategy is to remain rather undetermined in terms of ethical theory before the beginning of data gathering. If the field is then approached empirically, the researcher has the opportunity to select a theory which is best suited to capture the relevant features of the ethical problem.
A constant interchange between the normative theory selected and the empirical data ensures that both sides are respected as playing an important role in the empirical-ethical research process ([ 4 ], p. 473 f.). The reflection has a cyclical character: the empirical phenomenon is understood against the background of the ethical theory and, on the other hand, helps to provide a deeper understanding of the semantic implications of the normative concepts and principles applied.

It was important for the ETHICO project that the normative background could be linked with the core features of oncologic practice, especially with empirical aspects of decision-making in advanced cancer. However, the researchers did not have full knowledge about the clinical practice in the department before conducting the empirical research. A spectrum of possibly suitable ethical theories was, therefore, prepared before the beginning of the empirical research based on a literature review which focused on normative theories which had been applied to similar issues previously. Our empirical study then consisted of a triangulation of qualitative methods (interviews, non-participant observation and focus groups). During our empirical research, we were aware of the spectrum of ethical theories which could be applied for a normative analysis and we correlated this with our (preliminary) empirical results. The final selection of a theoretical background was not made until the qualitative research had progressed to a stage which allowed us to gather a first understanding of the most important empirical characteristics of the social phenomenon observed. The ethical theory subsequently selected had an influence on the further qualitative data gathering, while, at the same time, the empirical data collected were used to specify and adjust the ethical theory with respect to the particular context. Hence, in a circular process, the selection of an ethical framework was considered during the empirical research, while, simultaneously, the theory selection was influenced by our preliminary empirical results.

As a result of preliminary data collection and analysis, we identified the issue of respect for the patient’s will as being important in the different settings of decision-making (tumor conference, ward round, outpatient clinic). This finding could be related to concepts of patient autonomy from the medical-ethical literature. It also became obvious that the way in which patient autonomy is exercised in practice is very much dependent on structures of decision-making [ 31 ]. We, therefore, decided to select an ethical theory which did not conceptualize autonomy in a liberalistic way but stressed the importance of institutional structures which promote patient autonomy [ 39 ].

The selection of a normative background has to account for the purposes and design of the empirical-ethical research project

Empirical research in medical ethics can contribute to normative arguments in a variety of ways. Examples are the identification of morally relevant problems, the provision of facts important for normative arguments, the description of the actual conduct of a group of stakeholders [ 9 ], the analysis of moral concepts, the construction of a normative standpoint, or the implementation of an intervention to enhance the moral quality of a practice ([ 30 ], p. 45 f.). Therefore, the purposes and design of concrete empirical-ethical studies can vary to a considerable extent. One ethical theory might be more adequate than others, depending on the exact aims of the empirical-ethical research project. The idea of the “application” of an ethical theory to a concrete issue should be understood as a three-sided relationship: something is applied to something to some end ([ 26 ], p. 58 f.). “Application,” thus, always carries a teleological momentum, as it is inherently related to a purpose which the application is supposed to serve. The selection of a normative background, therefore, should take into account the purposes of the concrete research project.

There are some approaches in the spectrum of philosophical-ethical theories which seem well suited to developing clear normative guidance. Other approaches are better suited to providing rich and comprehensive descriptions of moral phenomena. This latter type can be designated as “normatively weak” approaches [ 40 ]. This does not mean that these theories are less elaborate. Instead, they are not primarily directed towards normative evaluation, but serve other purposes better. One example of such a “normatively weak” approach would be narrative ethics [ 41 , 42 ]. Narrative ethics uses the medium of stories to better understand phenomena such as sickness, dependency or disability, which are of great importance for medical ethics. It therefore serves different aims than ethical theories which have been developed to serve concrete practical (e.g. political) purposes and, consequently, carry a stronger action-guiding character. Classical utilitarianism, as presented by John Stuart Mill, for example, is based very much on the author’s political ideas and targets concrete suggestions for an improvement of practice. Utilitarianism is, thus, directed more towards normative evaluation than narrative ethics is. While both types of theory can, in principle, contribute to an improvement of practice, their different aims and character should be considered in the selection of a normative background. If an empirical-ethical research project is, for example, supposed to contribute to a clear-cut normative evaluation, it would be advisable to select an ethical theory which allows for concrete action-guiding suggestions. If the project aims more at a fuller understanding of a moral phenomenon, a “normatively weak” approach, such as narrative ethics, might be the better choice.

In the ETHICO project, we had to face the challenge that the ethical theory selected was supposed to serve a variety of aims during the different stages of the research (see section  The ETHICO project (“Empirical-ethical interventions in oncology”) ). In the “deliberation” stage, for example, the ethical-theoretical framework was needed to guide the normative deliberation aiming at an ethically justified assessment of the moral problems which had been identified in the earlier project stages. Therefore, the ethical theory had to indicate what would be an appropriate solution to the moral deficits observed. Another example would be the “evaluation” stage, where a normative background was needed to determine whether the ethical quality of the practice had improved subsequent to the empirical-ethical intervention. The development of evaluation criteria is of great importance here. While there are some ethical theories (such as utilitarianism) which may even allow for a quantitative measurement, other theories (e.g. Kantian accounts) may not provide good opportunities for an empirical evaluation of the ethical quality of a practice.

The diverging aims of the different project stages finally led us to the conclusion that it would not be advisable to stick to one and the same ethical theory throughout the project. Instead, we drew on different normative backgrounds (partly in combination), depending on the purposes of each stage in the ETHICO project. During its first stages, which aimed at the identification and characterization of ethical problems, we stayed rather open and considered different theories which are suitable to deliver an empirically rich understanding of the moral phenomenon observed (such as narrative ethics or virtue ethics accounts). In stage three, “deliberation”, which aimed at the development of an ethically justified solution, we referred to O’Neill’s account of principled autonomy [ 39 ], which also had an influence on the intervention, which was then designed to establish trustworthy structures in the respective oncological department. In the final “evaluation” stage, we included, in addition to O’Neill’s account, Alan Gewirth’s theory, which considers basic structures of human agency [ 43 ]. We then tried to assess how far patients were enabled to exercise their right to self-determination in hospital, and whether they were hindered by factors, such as misinformation or symptom burden, which did not allow them to exercise their autonomy.

The combination of diverse ethical theories and, thus, different approaches to ethical justification (e.g. virtue ethics and Kantian accounts) became necessary due to the comprehensive structure of the overall project, which is an empirical-ethical intervention study and, therefore, combines different aims. In addition, this approach mirrors our meta-theoretical view that the plurality of normative-ethical accounts corresponds to the variety within actual moral thinking. It was, therefore, necessary to draw on more than one ethical theory to deal adequately with the multifaceted phenomenon of decision-making in advanced cancer from a normative perspective.

The interrelation between the ethical theory selected and theoretical backgrounds of the socio-empirical research should be reflected

Empirical-ethical research includes socio-empirical inquiry as well as normative analysis. Hence, methodological requirements and theoretical backgrounds on “both sides”, and their interrelationship, should be taken into consideration throughout a project [ 36 ]. The choice of an ethical theory, therefore, has to consider the interrelation between the ethical theory selected and the theoretical backgrounds of the empirical research which forms part of the study. Empirical research methodologies carry certain normative assumptions which can be either in line or in tension with central ethical concepts. It has been stressed, for example, that quantitative surveys constitute society and subjects in such a way that the survey method can succeed [ 44 ]. This may contradict ethical notions, such as civic liberty or the person’s right to self-determination.

Besides this implicit normativity, socio-empirical research is laden with theoretical presuppositions about the social reality observed. Social theories make assumptions about what counts as a social phenomenon and which concepts should be regarded as core items in this context (e.g. agency, interaction or communication) ([ 45 ], p. 237). Such socio-theoretical presuppositions, which underlie empirical data gathering and analysis, and their relationship to the ethical theory selected should be respected when planning and conducting an empirical-ethical study [ 45 ]. There are some social theories which share the central premises of main philosophical-ethical theories, while others rest on assumptions which cannot be linked to ethics so easily. One “linking element” in the first respect would be the assumption that there are rational actors ([ 45 ], p. 239). This idea underlies, for instance, Grounded Theory methodology [ 46 ], which is based on the concept of interaction developed by George Herbert Mead [ 47 ]. With actors as its main concept, Grounded Theory stands in a “harmonic relationship” with most ethical theories, which similarly focus on action ([ 45 ], p. 239). In other respects, a tension can be noticed when Grounded Theory is applied in the context of ethics. Its understanding of reality, for example, as constituted through social interaction with others and shared interpretative processes contrasts with those ethical theories which have a rather individualistic account of human agency.

The tension between ethical theory and social theory also becomes obvious when other socio-theoretical backgrounds are chosen for empirical-ethical studies. One example in this respect would be systems theory, developed by Niklas Luhmann [ 48 , 49 ]. In systems theory, actors are not the main concept; it can, therefore, not be aligned as easily with ethical theory. We do not want to suggest that social theory and ethical theory should be congruent in all relevant features; however, researchers should be aware of whether the ethical theory they select harmonizes or stands in a strained relationship with the socio-theoretical presuppositions on which they draw. If this is not respected, there is a danger of missing ethically problematic issues which are not in the focus of the social theory underlying the empirical research.

We decided on a qualitative research methodology in the ETHICO project. In the “exploration” stage, we used an observational research method based on Symbolic Interactionism [ 50 ] to investigate clinical practice with regard to which ethical problems are relevant and how they can be further characterized [ 31 ]. We tried to understand and reconstruct the meanings which certain objects and actions had for the research participants. An underlying presupposition of Symbolic Interactionism is the existence of intentional actors who stand in relationship with other actors and choose means to achieve their aims. This interaction is based on processes of mutual interpretation. The socio-theoretical model of intentional actors can be linked to many approaches from the spectrum of philosophical-ethical theories, which equally start from the assumption of purposeful human agency.

Therefore, the socio-theoretical framework selected gave us ample scope for the choice of an appropriate ethical theory in the ETHICO project. All ethical theories selected rest on the assumption of actors who make their choices on the basis of preferences and information. In addition, O’Neill’s non-individualistic account of autonomy stresses the importance of (social) structures and human interaction and can, therefore, serve as a linking element between the social-theoretical reconstruction and the ethical reflection on social practices in oncological decision-making.

Limitations

We generally tried to apply a transparent and reasoned approach regarding the choice of a normative background in the ETHICO project. However, the issue of theory selection proved to be highly complex and led to a need for compromises, especially with regard to the tension between the theoretical well-foundedness of normative-ethical accounts and their applicability in practice. As displayed above, there are criteria inherent to an ethical theory which are important for judging its quality. If an ethical theory is insufficient with respect to clarity and coherence, it should not be applied in empirical-ethical research – even if it fits the designated subject very well, is in line with the overall aims of the project and matches the project’s socio-empirical research methodology very nicely. In addition to these three aspects, which pertain to the appropriateness of a theory within a specific project, normative theories fulfil a critical function. When there is no theoretical justification apart from suitability for the research, this external critical evaluation is not carried out. A theoretical justification independent of the concrete context of application is crucial for empirical-ethical research to develop a critical stance towards the social practice observed. However, even the best-founded ethical theory cannot be successfully applied when any link between the theoretical notions and the empirical reality is missing. Researchers conducting empirical-ethical studies should be aware of this ambivalence and of the problems which arise from a preponderance of either the theory’s well-foundedness or its suitability within the concrete project.

A second important result emerging from our research is the need to apply diverging ethical theories within the different stages of the ETHICO project. This necessity mainly arises from the varying purposes we had during the research project: from the designation of an ethical problem up to the designing, implementation and evaluation of an intervention. While one ethical theory serves better for a deeper understanding of the relevant empirical phenomena, other normative backgrounds are better suited to measuring the ethical quality of a social practice. This finding suggests that, in empirical-ethical research, ethical theories often cannot be used in their “pure” form but have to be further modified to meet the requirements of the topic chosen and the aims of the respective research. A syncretic combination of different theories, which might be regarded as problematic from a purely philosophical perspective, can be justified in the context of an empirical-ethical research project. However, arbitrariness in theory selection should be avoided by explaining the reasons why certain approaches have been chosen, modified and combined for a concrete project.

In this paper, we aimed to stress that the specific set-up of empirical-ethical research necessitates a comprehensive reflection with regard to the selection of an ethical background theory. In contrast to traditional research in philosophical ethics, additional issues should be considered here which pertain to pragmatic aspects of conducting the research, as well as to more theoretical facets, such as the interrelation between the ethical-theoretical and the socio-theoretical background of the respective study.

It also became obvious that the requirements imposed on an ethical theory in the empirical-ethical field are rather high. Based on our practical experience in the ETHICO project, we have learned that there is probably no single ethical theory which fully satisfies all the criteria relevant for theory selection. As a jack-of-all-trades theory consistent with all the aspects discussed above will not be found in the spectrum of ethical theories, researchers should consider a modification or combination of different accounts. The overall design of an empirical-ethical study is a multi-faceted endeavor which has to strike a balance between more theoretical and rather pragmatic considerations.

In summary, this paper aimed to show that a systematic and reasoned approach towards theory selection in empirical-ethical research should be given priority compared to an accidental or implicit way of choosing the normative framework for one’s own research (or to only referring to the researcher’s personal moral stances). The criteria discussed above may, therefore, serve as relevant points to consider when it comes to the matter of theory selection in the planning and conducting of an empirical-ethical research project.

Borry P, Schotsmans P, Dierickx K. The birth of the empirical turn in bioethics. Bioethics. 2005;19(1):49–71.

Molewijk B. Integrated empirical ethics: in search for clarifying identities. Med Health Care Philos. 2004;7(1):85–7.

Leget C, Borry P, de Vries R. “Nobody tosses a dwarf!” The relation between the empirical and the normative reexamined. Bioethics. 2009;23(4):226–35.

Dunn M, Sheehan M, Hope T, Parker M. Toward methodological innovation in empirical ethics research. Camb Q Healthc Ethics. 2012;21:466–80.

Ebbesen M, Pedersen B. Using empirical research to formulate normative ethical principles in biomedicine. Med Health Care Philos. 2007;10(1):33–48.

Widdershoven G, Abma T, Molewijk B. Empirical ethics as dialogical practice. Bioethics. 2009;23(4):236–48.

Frith L. Symbiotic empirical ethics: a practical methodology. Bioethics. 2012;26(4):198–206.

Duwell M. Bioethics: Methods, Theories, Domains. London: Routledge Chapman & Hall; 2014.

Google Scholar  

de Vries R, Gordijn B. Empirical ethics and its alleged meta-ethical fallacies. Bioethics. 2009;23(4):193–201.

Molewijk B, Stiggelbout AM, Otten W, Dupuis HM, Kievit J. Empirical data and moral theory. A plea for integrated empirical ethics. Med Health Care Philos. 2004;7(1):55–69.

Sugarman J, Kass N, Faden R. Categorizing empirical research in bioethics: Why count the ways? Am J Bioeth. 2009;9(6–7):66–7.

Salloch S, Schildmann J, Vollmann J. Empirical research in medical ethics: How conceptual accounts on normative-empirical collaboration may improve research practice. BMC Med Ethics. 2012;13(1):5.

Sugarman J, Sulmasy DP. Methods in Medical Ethics. Washington: Georgetown University Press; 2001.

Parfit D. On What Matters. Oxford: Oxford University Press; 2011.

Book   Google Scholar  

Knorr-Cetina K. The Manufacture of Knowledge. An Essay on the Constructivist and Contextual Nature of Science. Oxford: Pergamon Press; 1981.

Kuhn TS. The Structure of Scientific Revolutions. Chicago: University of Chicago Press; 2010.

Salloch S, Vollmann J, Schildmann J. Ethics by opinion poll? The functions of attitudes research for normative deliberations in medical ethics. J Med Ethics. 2014;40(9):597–602.

Beauchamp TL. Does ethical theory have a future in bioethics? J Law Med Ethics. 2004;32(2):209. –217, 190.

Kymlicka W. Moral philosophy and public policy: the case of NRTs. Bioethics. 1993;7(1):1–26.

O'Neill O. Applied ethics: naturalism, normativity and public policy. J of App Philos. 2009;26(3):219–30.

Arras J. Theory and bioethics. [ http://plato.stanford.edu/entries/theory-bioethics/ ]

Nida-Rümelin J. Theoretische und Angewandte Ethik: Paradigmen, Begründungen, Bereiche. In: Nida-Rümelin J, editor. Angewandte Ethik. Die Bereichsethiken und ihre theoretische Fundierung. Stuttgart: Alfred Kröner; 2005. p. 2–87.

Ives J, Dunn M. Who's arguing? A call for reflexivity in bioethics. Bioethics. 2010;24(5):256–65.

Birnbacher D. Ethics and social science: which kind of cooperation? Ethical Theory Moral Pract. 1999;2(4):319–36.

Düwell M. Wofür braucht die Medizinethik empirische Methoden? Eine normativ-ethische Untersuchung. Ethik Med. 2009;21(3):201–11.

Ott K. Strukturprobleme angewandter Ethik und Möglichkeiten ihrer Lösung. In: Ott K, editor. Vom Begründen zum Handeln. Aufsätze zur angewandten Ethik. Tübingen: Attempto; 1996. p. 51–85.

Campbell A, Gillett G, Jones G. Medical Ethics. South Melbourne: Oxford University Press; 2005.

Mepham B. Bioethics. An Introduction for the Biosciences. Oxford, New York: Oxford University Press; 2008.

Stoecker R, Neuhäuser C, Raters M-L. Handbuch Angewandte Ethik. Stuttgart: J. B. Metzler; 2011.

Birnbacher D. Welche Ethik ist als Bioethik tauglich? In: Ach JS, Gaidt A, editors. Herausforderungen der Bioethik. Stuttgart-Bad Cannstatt: Frommann-Holzboog; 1993. p. 45–67.

Salloch S, Ritter P, Wäscher S, Vollmann J, Schildmann J. Medical expertise and patient involvement: a multiperspective qualitative observation study of the patient's role in oncological decision making. Oncologist. 2014;19(6):654–60.

Düwell M. Handbuch Ethik. Stuttgart: Metzler; 2006.

Beauchamp TL, Childress JF. Principles of Biomedical Ethics. New York: Oxford University Press; 2013.

Brock DW. Truth or consequences: the role of philosophers in policy-making. Ethics. 1987;97(4):786–91.

de Vries R. How can we help? From “sociology in” to “sociology of” bioethics. J Law Med Ethics. 2004;32(2):279–92.

Mertz M, Inthorn J, Renz G, Rothenberger LG, Salloch S, Schildmann J, et al. Research across the disciplines: a road map for quality criteria in empirical ethics research. BMC Med Ethics. 2014;15:17.

Dietrich J. Die Kraft der Konkretion oder: Die Rolle deskriptiver Annahmen für die Anwendung und Kontextsensitivität ethischer Theorie. Ethik Med. 2009;21(3):213–21.

Sen A. Capability and well-being. In: Nussbaum M, Sen A, editors. The Quality of Life. Oxford: Oxford University Press Editors; 1993. p. 30–53.

Chapter   Google Scholar  

O’Neill O. Autonomy and Trust in Bioethics. Cambridge: Cambridge University Press; 2008.

Werner MH. Schwach normative und kontextualistische Ansätze. In: Düwell M, Hübenthal C, Werner MH, editors. Handbuch Ethik. Stuttgart Weimar: J.B. Metzler; 2011. p. 191–3.

Brody H. Stories of Sickness. Oxford: Oxford University Press; 2003.

Charon R. Narrative Medicine. Honoring the Stories of Illness. Oxford: Oxford University Press; 2006.

Gewirth A. Reason and Morality. Chicago: University of Chicago Press; 1995.

Ashcroft RE. Constructing empirical bioethics: Foucauldian reflections on the empirical turn in bioethics research. Health Care Anal. 2003;11(1):3–13.

Graumann S, Lindemann G. Medizin als gesellschaftliche Praxis, sozialwissenschaftliche Empirie und ethische Reflexion: ein Vorschlag für eine soziologisch aufgeklärte Medizinethik. Ethik Med. 2009;21(3):235–45.

Corbin J, Strauss A. Basics of Qualitative Research. Los Angeles: Sage; 2008.

Mead GH. Mind, Self, and Society from the Standpoint of a Social Behaviorist. Reprint Chicago: University of Chicago Press; 2009.

Luhmann N. Zweckbegriff und Systemrationalität. J.C.B. Mohr: Tuebingen; 1968.

Nassehi A. Die Praxis ethischen Entscheidens. Eine soziologische Forschungsperspektive. Zeitschrift für medizinische Ethik. 2006;52(4):367–77.

Flick U. An Introduction to Qualitative Research. Los Angeles: Sage; 2009.

Download references

Acknowledgements

This publication is a result of the work of the NRW Junior Research Group “Medical Ethics at the End of Life: Norm and Empiricism” at the Institute for Medical Ethics and History of Medicine, Ruhr University Bochum, which is funded by the Ministry for Innovation, Science and Research of the German state of North Rhine-Westphalia.

We acknowledge support by the German Research Foundation and the Open Access Publication Funds of the Ruhr-Universität Bochum.

Author information

Authors and Affiliations

Institute for Medical Ethics and History of Medicine, Ruhr University Bochum, NRW Junior Research Group “Medical Ethics at the End of Life: Norm and Empiricism”, Malakowturm – Markstr. 258a, D-44799, Bochum, Germany

Sabine Salloch, Sebastian Wäscher, Jochen Vollmann & Jan Schildmann


Corresponding author

Correspondence to Sabine Salloch .

Additional information

Competing interests.

The authors declare that they have no competing interests.

Authors’ contributions

SS, SW, JV and JS conducted the empirical study which forms the background for the arguments developed in this article. SS, SW and JS performed the empirical data collection. SW, JV and JS contributed to the development of the article’s main arguments. SS drafted the manuscript. JS and SW critically revised the manuscript. All authors have read and approved the final manuscript.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License ( http://creativecommons.org/licenses/by/4.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.


About this article

Cite this article.

Salloch, S., Wäscher, S., Vollmann, J. et al. The normative background of empirical-ethical research: first steps towards a transparent and reasoned approach in the selection of an ethical theory. BMC Med Ethics 16, 20 (2015). https://doi.org/10.1186/s12910-015-0016-x


Received: 10 November 2014

Accepted: 25 March 2015

Published: 04 April 2015



  • Empirical-ethical research
  • Research methods
  • Ethical theory
  • Ethical interventions

BMC Medical Ethics

ISSN: 1472-6939


  • Open access
  • Published: 19 December 2017

What methods do reviews of normative ethics literature use for search, selection, analysis, and synthesis? In-depth results from a systematic review of reviews

  • Marcel Mertz,
  • Daniel Strech &
  • Hannes Kahrass

Systematic Reviews volume 6, Article number: 261 (2017)


(Semi-)systematic approaches to finding, analysing, and synthesising ethics literature on medical topics are still in their infancy. However, our recent systematic review showed that the rate of publication of such (semi-)systematic reviews has increased in the last two decades. This is not only true for reviews of empirical ethics literature, but also for reviews of normative ethics literature. In the latter case, there is currently little in the way of standards and guidance available. Therefore, the methods and reporting strategies of such reviews vary greatly. The purpose of the follow-up study we present was to obtain deeper methodological insight into the ways reviews of normative literature are actually conducted and to analyse the methods used.

Our search in the PubMed, PhilPapers, and Google Scholar databases led to the identification of 183 reviews of ethics literature published between 1997 and 2015, of which 84 were identified as reviews of normative and mixed literature. Qualitative content analysis was used to extract and synthesise descriptions of search, selection, quality appraisal, analysis, and synthesis methods. We further assessed quantitatively how often certain methods (e.g. search strategies, data analysis procedures) were used by the reviews.

The overall reporting quality varied among the analysed reviews and was generally poor, even for major criteria regarding the search and selection of literature. For example, only 24 (29%) used a PRISMA flowchart. Also, only 55 (66%) reviews mentioned the information unit they sought to extract, and 12 (14%) stated an ethical approach as the theoretical basis for the analysis. Interpretable information on the synthesis method was given by 47 (60%); the most common methods applied were qualitative methods commonly used in social science research (83%).

Reviews which fail to provide sufficient relevant information to readers have reduced methodological transparency regardless of actual methodological quality. In order to increase the internal validity (i.e. reproducibility) as well as the external validity (i.e. utility for the intended audience) of future reviews of normative literature, we suggest more accurate reporting regarding the goal of the review, the definition of the information unit, the ethical approach used, and technical aspects.


The number of publications of (semi-)systematic reviews for finding and synthesising ethics literature has increased in the last two decades [ 1 ]. This is not only true for reviews that search and synthesise ethically relevant empirical literature, but also for reviews of normative literature. Similarly, the scholarly debate on the opportunities and limitations of (systematic) reviews in ethics has expanded. This is particularly true for reviews of normative literature [ 2 , 3 , 4 ], which is also reflected in philosophy, one of the disciplines most concerned with ethics [ 5 ].

Since established standards for the conduct of systematic reviews of normative literature in particular are still lacking, in a recent study [ 1 ] we assessed how reviews of normative literature report on their methods for search, selection (including quality appraisal), analysis, and synthesis of the literature, and compared the reporting to an (adapted) version of criteria related to PRISMA statements [ 6 ]. Proposals for systematic review methods are rare and rather vague when it comes to the analysis and synthesis of ethics literature [ 7 , 8 ]. The more comprehensive and technical manuals on review methodology [ 9 ], guideline development [ 10 , 11 , 12 ], and Health Technology Assessments [ 13 , 14 , 15 , 16 ] also lack explanation of how to search, analyse, and synthesise relevant information from the ethics literature in a systematic and transparent manner (cf. [ 17 ]).

Search methods and, to a certain degree, selection methods for (systematic) reviews of normative literature can draw on tried and tested methods for general (systematic) reviews, with the important difference that in ethics, books and book chapters can be highly relevant, which has to be reflected in the search, selection, and quality appraisal strategy. However, researchers undertaking such reviews often find that the desired normative literature is not easy to locate [ 18 ]. Defining inclusion and exclusion criteria can also be harder than in, for example, reviews of clinical studies. As far as analysis and synthesis methods are concerned, it is becoming clear that qualitative research approaches are much more relevant in this kind of review than in “traditional” systematic reviews, such as those of clinical studies [ 19 ], but there is a lack of meta-research showing which methods can be employed in reviews of normative literature and how.

Therefore, in order to obtain deeper insight into how the reviews searched for, selected, analysed, and synthesised normative information, in this paper we take a closer look at the specific steps and processes used and the methodological information reported. Our analysis thus aimed both to identify common (“uniform”) steps or strategies in the search, selection, analysis, and synthesis methods of reviews of normative literature, and to capture the existing variance in these methods. Both are needed to understand the current state of the art of such reviews and to provide the basis for future methodological improvement.

Study registration and PRISMA checklist

No review protocol was published beforehand, and the review was not registered (e.g. with PROSPERO). The description of methods follows the PRISMA statement [ 6 ] as far as applicable to this kind of review (see Additional file  1 for PRISMA checklist).

Search and selection

The search and selection method we used to identify (semi-)systematic reviews of ethics literature is described in more detail in [ 1 ]. In summary, our review of reviews was based on two PubMed searches, with additional searches in PhilPapers and Google Scholar, in April 2015. The searches produced 1393 hits, of which 189 were deemed relevant based on title or abstract and 183 after full-text screening. (See Fig.  1 ). Only articles which focused on the normative and “mixed” (empirical and normative) literature ( n  = 84) were analysed, leaving aside reviews that solely focused on empirical literature ( n  = 99). However, we also included “empty reviews” which explicitly stated that they found or included 0 hits if their research question matched our inclusion criteria. Because of language barriers, only articles in English, German, or French were considered for in-depth analysis.

PRISMA flow diagram (originally published in [ 1 ]; see for further details also [ 1 ])

Development of the coding matrix

We used a combined deductive and inductive strategy to construct categories [ 20 , 21 ] for a coding matrix (coding frame). The deductive component encompassed the overarching categories related to the different basic methodical steps of a review, i.e. search, selection, quality appraisal, analysis, and synthesis, relative to the PRISMA statement [ 6 ].

To develop fine-grained subcategories, an inductive strategy was employed. Therefore, introduction and method sections (or equivalent textual parts) of a purposive sample of n  = 20 articles were read. Descriptions of search, selection, quality appraisal, analysis, and synthesis methods were extracted. First, two researchers (MM, HK) each analysed five reviews of normative and five of mixed literature, discussed their suggested revisions, and finally agreed on refinements of the preliminary coding matrix. This was done in two consecutive rounds including ten more articles. Then, the plausibility of the categories developed was checked by the third researcher (DS). A further random sample of 10 articles was then examined to identify potentially overlooked issues. Because no subcategory was added, categorical saturation had been reached and the coding matrix was deemed final.
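The structure of such a coding matrix can be sketched as a mapping from the deductive top-level categories (the basic methodical steps of a review) to inductively developed subcategories. The subcategories below are illustrative examples drawn from the results reported later in this paper, not the project's actual matrix:

```python
# Sketch of a coding matrix: deductive top-level categories with
# illustrative, inductively derived subcategories. This is a hypothetical
# simplification for exposition, not the study's actual coding frame.
coding_matrix = {
    "search": [
        "databases/search engines stated",
        "search terms stated",
        "search string replicable",
    ],
    "selection": [
        "inclusion criteria stated",
        "exclusion criteria stated",
        "title/abstract vs. full-text level distinguished",
    ],
    "quality appraisal": [
        "appraisal conducted",
        "reason given for forgoing appraisal",
    ],
    "analysis": [
        "information unit defined",
        "extraction procedure described",
    ],
    "synthesis": [
        "synthesis method stated",
        "qualitative vs. quantitative approach",
    ],
}
```

Each included review would then be coded against these categories, with closed (yes/no) or open answers recorded per subcategory.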

Main analysis of the individual reviews

To analyse methods applied by the reviews, qualitative content analysis (QCA) [ 20 , 21 ] was used. The analysis was independently conducted for all included reviews by two researchers (MM, HK). Usually, only explicit statements about search, selection, quality appraisal, analysis, and/or synthesis were considered as reported. However, when both researchers independently judged that a certain method was used implicitly in a review, this was also coded accordingly. The analysis employed closed (yes/no) and open answer modes. Any disagreement was discussed with the third researcher (DS) and agreement sought. We did not send the extracted data to the authors of the reviews to check if they agreed with our analysis, mainly due to time constraints. Also, an earlier pilot study trying to do such a “member check” showed a rather low response rate [ 22 ], which casts at least some doubt on the success of a similar strategy in this review.

The synthesis method for the information extracted was mainly quantitative, e.g. how often the date/period of the search was stated (closed answer mode). In categories with open modes of data, the individual answers were counted (e.g. about which databases were used in a review) or subsumed under a broader description (e.g. approaches to becoming acquainted with the text). For the latter, we defined new synthesis categories corresponding either to what the authors of the reviews explicitly stated or to what we, on the basis of our experience and expertise in ethics and empirical methodology, found the most appropriate interpretation of what was said about the methods or procedures.
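The quantitative tallying of open-mode answers described here amounts to frequency counting. A minimal sketch, using hypothetical data rather than the study's, might look like:

```python
from collections import Counter

# Hypothetical open-mode answers: the databases each review reported using
# (multiple responses per review possible). Illustrative data only.
reported_databases = [
    ["PubMed/MEDLINE", "CINAHL"],
    ["PubMed/MEDLINE"],
    ["PubMed/MEDLINE", "EMBASE", "Web of Science"],
]

# Count how often each individual answer occurs across all reviews ...
counts = Counter(db for review in reported_databases for db in review)

# ... and express each count as a percentage of the reviews analysed.
n_reviews = len(reported_databases)
shares = {db: round(100 * n / n_reviews) for db, n in counts.items()}
```

Answers that did not fit a simple count would instead be subsumed under broader synthesis categories, as described above.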

In our systematic review, we identified 84 reviews published between 1997 and 2015 in 65 different journals [ 1 ]. We included semi-systematic reviews that had at least an identifiable description of a reproducible literature search (search and selection) as well as (full) systematic reviews that further explicitly or implicitly reported on analysis and synthesis. When comparing the reporting to an (adapted) version of criteria related to PRISMA statements [ 6 ], only a small fraction of the included reviews fulfilled all criteria (for search 8%, for selection 21%, for analysis 8%, and for synthesis 11%) [ 1 ].

Methods used for search, selection, and quality appraisal

Most reviews stated the databases or search engines they used ( n  = 78, 93%) and the search terms ( n  = 73, 87%). Search strings were mentioned to a lesser extent ( n  = 33, 39%); the statement was not counted if the information about the search string did not allow replication of the search. If it was replicable, we differentiated between sufficient information (e.g. Boolean operators for search terms) and the statement of a “copy-paste” search string (“original” search string). Of these 33 reviews, 26 (78%) presented a search string that allows for copying and pasting it directly into the database or search engine. (See Fig.  2 .)
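To illustrate the distinction: a statement such as "ethics-related and topic-related terms were combined with Boolean operators" counts as sufficient information, whereas a "copy-paste" (original) search string is one that can be entered verbatim into the database, for example (a hypothetical PubMed string, not taken from any of the analysed reviews):

```
("ethics"[MeSH Terms] OR ethic*[Title/Abstract]) AND ("dementia"[MeSH Terms] OR dementia[Title/Abstract])
```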

Search methods used in the reviews

Search restrictions were mentioned by 50 reviews (59%), with publication date restrictions ( n  = 42, 50%) and language restrictions ( n  = 11, 13%) mentioned most often. Even though exclusion criteria were stated less often ( n  = 50, 59%) than inclusion criteria ( n  = 60, 71%), only 14 (17%) of the reviews stated neither inclusion nor exclusion criteria. Hardly any reviews made clear whether there was a difference regarding the selection criteria and/or selection procedure between selection at the title/abstract level and at the full-text level ( n  = 6, 7%). (See Figs.  2 and 3 ).

Of the 20 reviews (24%) that addressed the issue of quality appraisal of the literature they included, one in four ( n  = 5, 25%) explicitly wrote that they forwent quality appraisal procedures and gave an explicit reason, namely that there are no usable or suitable methods or criteria for a quality appraisal of normative literature. A few reviews ( n  = 2) stated without further explanation that they did not apply a quality appraisal, making it difficult to understand the rationale for forgoing it. The remaining reviews ( n  = 13) mentioned how they conducted the quality appraisal. (See Fig.  3 .)

Selection methods used in the reviews

Finally, most reviews stated the number of results retrieved ( n  = 50, 59%) and included ( n  = 63, 75%). These figures reflect the numbers as reported, irrespective of whether they included duplicates or results from additional search strategies. About a third ( n  = 24, 29%) of the reviews used a PRISMA flowchart to represent the search and selection procedure. (See Fig.  4 .)

Representation of search and selection results used in the reviews

Databases/search engines used

Of the 84 reviews, 78 (93%) stated which databases or search engines they used for the search (multiple responses possible). Overall, there were 108 different databases or search engines used, though they were not equally popular. Of the 78 reviews, nearly all used at least PubMed / MEDLINE ( n  = 76, 97%). Other databases or search engines often mentioned were CINAHL ( n  = 30, 38%), EMBASE ( n  = 20, 26%), PsycINFO ( n  = 19, 24%), and Web of Science ( n  = 17, 22%). Many databases and search engines were mentioned only once ( n  = 68, 87%). Furthermore, most reviews used at least two databases or search engines ( n  = 62, 74% of all reviews, 79% of the reviews that stated databases/search engines). Of the 16 reviews that relied on one database or search engine, 14 (88%) used PubMed/MEDLINE (see Fig.  5 ).

Databases/search engines used in the reviews

Methods used for analysis and synthesis

In our sample of 84 reviews, 55 (66%) mentioned what kind of normative information unit they sought to extract from the material and synthesise thereafter. Twenty-one (25%) stated the theoretical approach they used to define information units (e.g. methods of qualitative analysis such as qualitative content analysis or grounded theory, definitions of philosophical arguments, or sociological theories regarding organisational justice), and of these 21, 12 (57%) gave sufficient information regarding the use of an ethical approach (e.g. ethical theory, framework, principles etc.). (See Fig.  6 ).

Analysis methods used in the reviews (information units)

For the information unit we differentiated between (1) ethical issues/topics/dilemmas, (2) ethical arguments/reasons, and (3) ethical principles/values/norms. Of the 55 reviews stating the information unit, half extracted ethical issues, topics or dilemmas as information units ( n  = 28, 51%), a quarter extracted ethical arguments or reasons ( n  = 14, 25%), and a further quarter ethical principles, values, or norms ( n  = 14, 25%). Of the 12 reviews mentioning the ethical approach, “principlism” was dominant ( n  = 5, 42%). (See Fig.  6 .)

Thirty-one (37%) reviews mentioned the procedure used to extract the information. Most used a “coding and categorising” approach ( n  = 9, 29%); in another nine reviews (29%), we found the statements too unclear to place them conclusively in one of the three procedure types we were able to identify in our analysis. (See Fig.  7 .) On the other hand, 43 (51%) said something about the overall analysis procedure, e.g. how many researchers were involved and whether there were consensus rounds between researchers if they disagreed on an analysis result. Of these 43, 33 (77%) seemed to involve more than one researcher in the analysis process; 5 (12%) stated that the analysis was done by only one researcher (one of these was a single-author review, so in the other 4 reviews the entire analysis was performed by one researcher even though the review had two or more authors). (See Fig.  7 .)

Analysis methods used in the reviews (information extraction)

Of our sample, 47 (60%) reviews gave interpretable information about the way they synthesised the analysed material. Of these 47, 39 (83%) used qualitative methods (as mainly understood in empirical social science research), 3 (6%) used quantitative methods, 2 (5%) used both qualitative and quantitative methods, and 3 (6%) reported some kind of narrative or hermeneutic methods (as understood in humanities traditions). (See Fig.  8 .)

Synthesis methods used in the reviews

Regarding qualitative methods ( n  = 41), 12 (29%) employed a purely deductive approach (a priori defined categories, e.g. based on a theory or framework), 18 (44%) a purely inductive approach (a posteriori built categories), and 11 (27%) combined deductive and inductive approaches.

Deductive approaches mostly relied on existing thematic groupings of the debate or literature ( n  = 14, 34% of the 41 reviews using a qualitative approach), while some involved a theory or another existing category system ( n  = 8, 20%). One review re-applied the categories from the analysis process for the synthesis (2%). Inductive approaches were often not interpretable regarding a specific approach; these we categorised under “Unspecified Thematic Analysis” ( n  = 18, 44%). The other reviews with inductive approaches used qualitative content analysis ( n  = 9, 22%), grounded theory ( n  = 1, 2%), or a focus group ( n  = 1, 2%). Here again, we took implicit as well as explicit information into account. (See Fig.  8 .)

Reviews that used a quantitative approach ( n  = 5) to synthesis mostly relied on statistical analysis ( n  = 4, 80% of the 5 reviews using a quantitative approach), be it a correlation analysis or an analysis of the distribution of topics (both n  = 2, 40%). One review (20%) used a quantitative approach that relied not on a specific statistical analysis but on mere counting. (See Fig.  8 .)

In this paper, based on a systematic review of reviews, we present detailed findings on how 84 reviews of normative or “mixed” literature report their methods for the search, selection, analysis, and synthesis of relevant information.

Applied search and selection methods

The search and selection methods used correspond to the items in the PRISMA guidelines [ 1 , 6 ]. Therefore, there are not many qualitative differences from methods of more established kinds of (semi-)systematic reviews. It is known, though, that there are some practical complications in both searching [ 18 ] and selecting normative literature. These include the interdisciplinarity of the field and the resultant variation in terminology, publication standards, and journals [ 18 ], which might be reflected in the use of many databases ( n  = 108) in the sample of reviews in our meta-review.

Possibly because of this, only a few of the reviews used a single database or search engine (21%). One might argue that using more databases and search engines leads to better results. However, it is unclear whether this hypothesis holds true for reviews of normative literature. On the basis of our analysis, we cannot make any statement about the effectiveness of using several databases and search engines. Better reporting of the attribution of included results, and of duplicate search results from different sources, would be necessary in order to test this hypothesis.

PubMed/MEDLINE was by far the most common database (97%). On the basis of our data, however, we cannot elucidate the reasons for its prevalence. It might be that PubMed/MEDLINE is deemed a relevant and fruitful database not only for biomedical but also for bioethical topics. It could also be that PubMed/MEDLINE is regarded as a “standard” database that is commonly used in systematic reviews related to health care and is thus preferred. Another explanation, namely that other databases/search engines are not widely known, seems unsubstantiated in the face of the large number mentioned (see Fig.  5 ).

Given the possible problems in searching the literature and given the humanities tradition of ethics as a “book” discipline, it is interesting that additional search strategies are not more widespread. About 60% of the reviews stated that they used such strategies; “hand search” (e.g. manually checking monographs, collections) was mentioned by 26%, while the most frequently mentioned was “snowballing”/“reference check” (66%). Though there is currently no evidence that hand search can improve the amount or quality of normative information gathering, it seems plausible to assume that in fields such as ethics, which traditionally rely on book publications that are harder (or impossible) to find with some of the common databases/search engines, such additional search strategies would yield greater returns than in, e.g., systematic reviews of clinical interventions.

Applied analysis and synthesis methods

There is a clearly discernible tendency to favour qualitative approaches in both analysis and synthesis in our sample of reviews of normative literature (by “qualitative approaches”, we refer to an overarching category that encompasses both qualitative (social science) methods and narrative or hermeneutic methods, insofar as both are non-quantitative). In the 31 reviews that reported information extraction (analysis) procedures, “coding and categorising” dominated (29%), but “close reading” was almost as popular (19%). “Collecting” (23%) could also be understood as a quantitative way of extracting information; most often, though, our observations indicated that it too was used in a qualitative way.

Furthermore, of the 47 reviews that stated or described their synthesis methods, 44 (94%) used qualitative methods ( n  = 41, 87%) or narrative/hermeneutic methods ( n  = 3, 6%). Quantitative methods were much less represented ( n  = 5, 11%) (see Fig.  8 ). This tendency towards the use of qualitative approaches for reviews of normative information is supported by the findings of Tricco et al., who reviewed emerging knowledge synthesis methods for complex interventions in health care [ 19 ].

When using qualitative methods, inductive or a posteriori categorisation was applied in 71% of the analysed reviews and deductive or a priori categorisation in 56%, with the 27% that combined both approaches counted in both figures. Among those using deductive categorisation, a third (35%) relied on an (ethical) theory or an existing category system, which is interesting given that ethical theories in particular offer ample opportunities to define a priori categories for synthesising normative information. The presence of thematic analysis that is not specified in more detail (62%) and of qualitative content analysis (31%) makes clear, however, that inductive strategies might often be better suited to this task. Arguably, this also depends on the goals of the review and the type of information unit sought.

Further research is needed to assess whether the synthesis of information units such as ethical issues works better with the (additional) use of deductive/a priori categories, or whether, in general, the openness of inductive strategies is essential for finding suitable synthesis categories. Arguably, this depends on the synthesis objectives. Here, one might differentiate two broad objectives: first, reproducibility (securing “internal validity”); second, “ease of use” and/or utility for the intended audience (securing “external validity”). “Ease of use”/utility in this regard might also be understood as “ecological validity”, that is, the applicability of the results to the real-world setting of practitioners working in the health care system.

The overall tendency towards qualitative approaches in the form of text analysis and synthesis seems natural, as normative information is extracted from texts and is itself textual or conceptual information. This might be a reason why it is not always explicitly stated. But as there is some variety regarding methods, in particular for synthesis (see above), it remains important information for readers of such reviews.

Reporting quality and transparency

The variance mentioned above regarding qualitative approaches shows that not only did the methods used vary among the reviews in our sample, but the reporting of those methods also varied greatly, even for the well-established, standardised search and selection parts of the reviews (see Figs. 2 and 3). The more topic-specific analysis and synthesis parts of the reviews did not fare much better (see results in Figs. 6, 7, and 8).

However, it is important to note that the lack of a statement does not necessarily imply methodological negligence. Sometimes, not having stated something or, more particularly, not having done something, can make perfect sense given the aim of a review, the time available, and the limits of publication (e.g. article length). For example, in a review with the purely descriptive purpose of developing a comprehensive spectrum of ethical issues at stake in a specific health care situation, bypassing quality appraisal of the included literature can be appropriate [23, 24]. Still, to demonstrate awareness of this key element of a traditional systematic review, a possible minimal standard could be to state why no quality appraisal was done. On this point, our review reveals insufficient reporting, as only 24% included any information about quality appraisal.

Some information essential for the reproduction of systematic reviews was lacking in many reports. Although databases and search terms were often reported, we found that 61% of reviews did not give enough information on the search string to reproduce the search, and only 59% reported on search restrictions. Even more importantly, a definition of the information unit used was given by only 25% of authors. In such cases, we would argue that the reviews do not provide enough relevant information to the reader, which reduces their methodological transparency (e.g. impeding an external assessment of their “internal validity”), regardless of the actual quality of the review conducted.

Further, there is information we would not describe as key to transparency which would nevertheless benefit readers. For example, our review indicated that only three reviews (4%) provided a separate full list of the included publications, in addition to citing them among the article's regular references. Such a list can be most useful, as it makes the included literature easier to identify. Given practical limitations such as article length, authors might consider providing further information in an online supplement.

Further developments towards best practice standards

Improved reporting in ethics reviews could increase overall methodological transparency, and thereby methodological quality, formally and informally. This is because an author who has to describe the methods used in meaningful detail will probably reflect more upon these methods and their use. Furthermore, a reader can be sensitised to questions of quality by reading a detailed report of methods. Finally, more explicit reporting enables formal analysis of how methods are used and how they could be improved.

To this end, having first identified different methods to conduct such reviews, a next step would be to define quality criteria or develop a reporting guideline for systematic reviews of normative literature. The fact that some reviews did not explicitly describe the methods used also shows that there might be a lack of relevant information for untrained readers of such reviews, who are unable to interpret the text to reconstruct the missing information. This can be a further motivation for reporting guidelines, as they can not only orient researchers conducting ethics reviews, but also the often interdisciplinary readership of this sort of review. The Q-SEA instrument [ 25 ], though developed for the quality assessment of ethical analyses in the context of Health Technology Assessment (HTA), could be understood in part as such a guideline, as it describes key issues of (systematic) literature search and inclusion/exclusion criteria, and sees the subsequent reporting of these issues as key to the process quality of an ethical analysis in HTA.

We would argue that, when reporting on reviews of normative literature, more transparency is particularly important regarding the following methods/parts of a review, in order to increase “internal validity” as well as “external validity”:

Goals of the review: goals definitely play a role in the way the synthesis is framed for the intended audience (academics, HTA professionals, guideline development groups, etc.). An ethics review can be descriptive (e.g. “What are the issues?”) as well as prescriptive (“What should one do?”). Both are valid goals for such reviews, but they have to be stated clearly, as they can influence the methodological aspects of a review.

Information unit definition: the normative information analysed and synthesised in the reviews should be stated more explicitly, e.g. ethical issues, arguments or principles and values.

Ethical approach: stating the theoretical approach (e.g. ethical theory, framework, set of principles or values) used for defining the information unit to analyse and synthesise improves the understanding of these steps and clarifies some of the inevitable normative underpinnings.

Technical aspects: systematic reviews should strive to describe all the information needed to reproduce every step that produced the reported results. Furthermore, the overall use of PRISMA flowcharts could be improved.

Limitations

The analysis procedure was dependent on our background knowledge and our interpretation of the texts. We tried to counter the subjectivity of this approach by double assessment (MM, HK) and by a critical review of the intermediate results (DS). However, this does not give absolute protection against mistakes, or guarantee that our value judgements are shared by every reader. Therefore, there might be some leeway regarding the (synthesis) categories themselves, as well as the placing of the methods used in reviews into specific (synthesis) categories. Also, because of time constraints, we did not check for inter-rater reliability by having two (or more) researchers analyse the same literature. We did analyse some articles jointly at the beginning, but with the aim not of improving the quality of the data, rather of identifying problems in applying the analysis matrix, refining category descriptions, and formulating (further) decision rules where necessary.

Conclusions

Together with our first paper [ 1 ], this paper is, to our knowledge, the first analysis of the state-of-the-art of (systematic) reviews of normative and mixed ethics literature on medical topics. It provides an in-depth view of the search, selection, quality appraisal, analysis, and synthesis methods used by such reviews. This information could be used to inform future reviews (e.g. what aspects of the method to report, what databases to use, approaches to use for synthesis, and so on), to write methodologies on conducting reviews in ethics, or to develop reporting guidelines for such reviews. The results of our study indicate that pursuit of the latter two goals especially would be worthwhile for improving the quality of conduct, transparency of reporting, and feasibility of evaluating reviews of normative literature.

Abbreviations

HTA: Health Technology Assessment

Mertz M, Kahrass H, Strech D. Current state of ethics literature synthesis: a systematic review of reviews. BMC Med. 2016;14:152.


Davies R, Ives J, Dunn M. A systematic review of empirical bioethics methodologies. BMC Med Ethics. 2015;16:15.

McDougall R. Systematic reviews in bioethics: types, challenges, and value. J Med Philos. 2014;39:89–97.


Sofaer N, Strech D. The need for systematic reviews of reasons. Bioethics. 2012;26:315–28.

Polonioli A. A plea for minimally biased empirical philosophy. Birmingham: University of Birmingham; 2017.

Moher D, Liberati A, Tetzlaff J, Altman DG, the PRISMA Group. Preferred Reporting Items for Systematic Reviews and Meta-Analyses: the PRISMA Statement. PLoS Med. 2009;6:e1000097.

McCullough LB, Coverdale JH, Chervenak FA. Constructing a systematic review for argument-based clinical ethics literature: the example of concealed medications. J Med Philos. 2007;32:65–76.

Strech D, Sofaer N. How to write a systematic review of reasons. J Med Ethics. 2012;38:121–6.

Higgins JPT, Green S. Cochrane handbook for systematic reviews of interventions. 2011.


NICE. Developing NICE guidelines: the manual. http://www.nice.org.u/article/pmg20 . Accessed 30 Mar 2016. London: National Institute for Health and Care Excellence (NICE); 2014.

AWMF: AWMF-Regelwerk Leitlinien. online (Accessed 01 Dec 2016): Arbeitsgemeinschaft der Wissenschaftlichen Medizinischen Fachgesellschaften e.V. (AWMF); 2012.

SIGN. A guideline developer’s handbook. Edinburgh: SIGN; 2014.

DIMDI: Handbuch DAHTA - Ziele, Inhalte und Arbeitsweisen der Deutschen Agentur für Health Technology Assessment des DIMDI Deutsches Institut für Medizinische Dokumentation und Information (Geschäftsbereich BMG); 2013.

INAHTA. A checklist for health technology assessment reports. In INAHTA checklist, vol. Version 3.2. Edmonton: INAHTA Secretariat; 2007.

CRD. Systematic reviews: CRD's guidance for undertaking reviews in healthcare. New York: University of York NHS Centre for Reviews & Dissemination; 2009.

IQWiG: Allgemeine Methoden: Version 5.0. 2017.

Knüppel H, Mertz M, Schmidhuber M, Neitzke G, Strech D. Inclusion of ethical issues in dementia guidelines: a thematic text analysis. PLoS Med. 2013;10:e1001498.

Droste S. Systematische Gewinnung von Informationen zu ethischen Aspekten in HTA-Berichten zu medizinischen Technologien bzw. Interventionen. Zeitschrift für Evidenz, Fortbildung und Qualität im Gesundheitswesen. 2008;102:329–41.

Tricco AC, Soobiah C, Antony J, Cogo E, MacDonald H, Lillie E, Tran J, D'Souza J, Hui W, Perrier L, et al. A scoping review identifies multiple emerging knowledge synthesis methods, but few studies operationalize the method. J Clin Epidemiol. 2016;73:19–28.

Schreier M. Qualitative content analysis in practice. 2012.

Mayring P. Qualitative Inhaltsanalyse : Grundlagen und Techniken. Weinheim: Beltz; 2010.


Mertz M, Sofaer N, Strech D. Did we describe what you meant? Findings and methodological discussion of an empirical validation study for a systematic review of reasons. BMC Medical Ethics. 2014;15:69.

Seitzer F, Kahrass H, Neitzke G, Strech D. The full spectrum of ethical issues in the care of patients with ALS: a systematic qualitative review. J Neurol. 2016;263:201–9.


Mertz M. Qualitätsbewertung in systematischen Übersichtsarbeiten normativer Literatur. Eine Problemanalyse. Zeitschrift für Evidenz, Fortbildung und Qualität im Gesundheitswesen. 2017;127-128:11-20.

Scott AM, Hofmann B, Gutierrez-Ibarluzea I, Bakke Lysdahl K, Sandman L, Bombard Y. Q-SEA—a tool for quality assessment of ethics analyses conducted as part of health technology assessments. GMS Health Technol Assess. 2017;13:Doc02.



Acknowledgements

Not applicable.

Funding

This study did not receive specific funding from public, commercial, or non-profit organisations.

Availability of data and materials

Author information

Authors and Affiliations

Institute of History, Ethics and Philosophy of Medicine, Hannover Medical School, Carl-Neuberg-Str. 1, D-30625, Hannover, Germany

Marcel Mertz, Daniel Strech & Hannes Kahrass


Contributions

MM was the first author of the systematic review this paper is based on, wrote the main draft of the paper, analysed most of the material presented, and revised and finalised the manuscript. HK was the second author in the systematic review this paper is based on, assisted in analysing the material presented, and contributed to the writing and revising of the manuscript. DS was the senior author of the systematic review this paper is based on, contributed to the writing and revising of the manuscript. All authors designed the study and approved the final manuscript.

Corresponding author

Correspondence to Marcel Mertz .

Ethics declarations

Ethics approval and consent to participate

Consent for publication

Competing interests

Financial competing interest: The authors declare that they have no competing interests. Non-financial competing interest: In three reviews finally included in the meta-review, DS was one of the authors. In one review, MM and HK were co-authors.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional file

Additional file 1:.

PRISMA checklist. (DOC 67 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.


About this article

Cite this article.

Mertz, M., Strech, D. & Kahrass, H. What methods do reviews of normative ethics literature use for search, selection, analysis, and synthesis? In-depth results from a systematic review of reviews. Syst Rev 6 , 261 (2017). https://doi.org/10.1186/s13643-017-0661-x


Received : 12 May 2017

Accepted : 06 December 2017

Published : 19 December 2017

DOI : https://doi.org/10.1186/s13643-017-0661-x


  • Systematic review
  • Literature review
  • Normative literature
  • Argument-based-literature
  • Empirical ethics
  • Evidence-based medicine

Systematic Reviews

ISSN: 2046-4053



OPINION article

Normativity of predictions: a new research perspective

Michał Piekarski

  • Institute of Philosophy, Cardinal Stefan Wyszyński University in Warsaw, Warsaw, Poland

Introduction

One of the most interesting philosophical aspects of predictive processing (PP) is the normativity of predictive mechanisms and its function as a guide for action. In my opinion, this framework provides us with good tools to describe and explain the phenomenon of normativity. It is possible to justify the thesis that explanations in the PP approach are normative in nature; they are so because predictive mechanisms themselves are normative. By the normative function of prediction I understand a feature of prediction which is constitutive (Bickhard, 2003) both for action control and for the structure and content of the world model internal to a given cognitive system. Predictions are normative in the sense of possibly being wrong (Bickhard, 2015a, b, 2016). Some properties of the environment are also normative. Both factors are crucial for the content and truth-value of representations. Without normativity there is no error, and it is hard to explain the possibility of misrepresentation. This means that predictions are also normative for action, because they can be true (more probable, in the Bayesian manner) or false (less probable, in the Bayesian manner).

Predictive Processing Framework

In the PP framework (Friston, 2010; Clark, 2013, 2016; Hohwy, 2013), the main function of the brain (understood as a multilevel, hierarchical, and generative model) is to minimize prediction errors, i.e., any potential discrepancies between information from sensory input and expectations related to the source and nature of such information. This function is of key importance for the organism because, according to the PP framework, all perception serves the aim of ensuring that the organism operates efficiently in its environment: the brain keeps creating statistical predictions of what happens in the world. It predicts the current and future forms of information reaching the brain through sensory modalities. The predictions are hierarchically arranged and created at individual levels of the model. Thus, estimates made at different hierarchical levels relate to predictions present at other levels. More precisely, predictions impose a top-down structure on the bottom-up flow of information coming from the senses. Prediction errors are used by the model (at each level) to correct its current estimate of the input signal and generate the next prediction. The aim of low-level predictions is to clarify the spatial and temporal dimensions of incoming information. Predictions at higher levels of the model are more abstract. This framework suggests that the brain copes with making predictions by continuously estimating and re-estimating its own uncertainty (Clark, 2016, p. 57). What does this mean?

Estimations of uncertainty alter the impact of prediction errors. This function is directly related to so-called attention, i.e., a means of balancing the relations between top-down and bottom-up influences via precision, which is a measure of their estimated certainty. The greater the precision, the lesser the uncertainty (Friston, 2010). In this approach, “uncertainty” means that a given piece of information may be described through a probability distribution. Here, the best possible prediction is made by applying Bayes' Rule, which describes the probability of a given hypothesis (prediction) based on the brain's prior knowledge of conditions that might be related to the incoming sensory signal (e.g., Hohwy, 2013; Harkness and Keshava, 2017). The Bayesian approach offers a rational solution to the problem of how the brain updates the generative model (together with the internal model of the world) in response to incoming sensory signals, and to the question of the hidden causes behind these signals.
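For readers unfamiliar with it, Bayes' Rule can be stated explicitly; the notation here is an illustrative gloss rather than a quotation from the PP literature. For a hypothesis (prediction) $h$ and sensory evidence $e$:

```latex
P(h \mid e) = \frac{P(e \mid h)\, P(h)}{P(e)}
```

Here $P(h)$ is the prior supplied by the generative model, $P(e \mid h)$ is the likelihood of the sensory signal under that hypothesis, and $P(e)$ is the marginal probability of the evidence. On the PP reading, the brain's best current prediction is the hypothesis with the highest posterior $P(h \mid e)$.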

Minimization of Free Energy

Minimization of free energy (in the informational-theoretical sense) consists in changing internal representations of the model in such a way as to approximate the posterior density of the causes of sensations ( Friston et al., 2010 ). This means that free energy is minimized when there is a change in predictions about the sources of statistical information obtained from sensory input. The change can be achieved by either (1) altering the properties of the model (changing adopted predictions)—perceptual inference; or (2) changing the environment through active inference, i.e., an action that modifies the state or causal structure of the world; thereby generating new sensory information. Perception reduces free energy by changing predictions, whereas action achieves the same by changing the information reaching the model. Therefore, the biological systems described by Friston should be interpreted in terms of active agents who minimize prediction errors with the use of a probabilistic generative model. It follows that the minimization of prediction errors serves a normative function in relation to the agent. First, it maintains a homeostatic balance; second, it obliges the agent to make predictions about the state of the world in order to learn the unknown parameters responsible for its motion by optimizing the statistical information coming from sensory input ( Friston et al., 2010 , p. 233).
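The quantity being minimized can be made explicit. In a standard variational formulation (the notation is a gloss on Friston's free-energy literature, not a quotation), free energy $F$ for sensory data $o$, hidden causes $s$, and a recognition density $q(s)$ is:

```latex
F = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
  = D_{\mathrm{KL}}\!\left[q(s) \,\|\, p(s \mid o)\right] - \ln p(o)
```

Since the KL divergence is non-negative, $F$ is an upper bound on surprise, $-\ln p(o)$. Perceptual inference reduces $F$ by adjusting $q(s)$ (route 1 above), while active inference reduces it by changing $o$ itself through action (route 2).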

This normative and abductive aspect of policy selection plays a key role in the interpretation of free energy minimization as approximate Bayesian inference or self-evidencing (Hohwy, 2016). In other words, it speaks to the fact that uncertainty-reducing policies have to be selected via a process of Bayesian model selection. This in turn rests upon the capacity to entertain counterfactual hypotheses like “what would happen if I did that.” Hence, active inference and PP go beyond homoeostasis and, possibly, become purely personal inference (Seth, 2015)1.

Normative Predictions

The issue of normativity is crucial for the PP framework. Minimization of prediction errors directly implies “low-level” biological normativity. Friston connects it with the free energy principle (FEP), which suggests that all biological systems are driven to minimize information-theoretic “free energy,” which he understands as the difference between an organism's predictions about its sensory inputs and the sensations it actually encounters (Friston, 2010; Friston et al., 2012a). In this sense, FEP is a normative theory of action and perception, because it provides a well-defined objective function (variational free energy) that is optimized by both action and perception. The normative aspect of FEP is complemented by the PP approach as a neuronally plausible implementation of this function (Schwartenbeck et al., 2013, p. 1). At higher levels of the model, normativity may be linked with (1) patterns of neural excitations based on predictions and (2) the role played by predictions in decision-making and action-control processes, among others, to minimize uncertainty in the environment2. The latter functionality is especially important for our reflections and should be primarily related to active inference.

In Bickhard's opinion, minimization of uncertainty alone is not enough to talk about normativity, because it is rather a “supposed consequence of the effect of prior evolutionary selection.” This means that the functioning of an organism is explained based on the existence of factual and causal conditions which minimize discrepancies between internally generated predictions and signals from sensory inputs (Bickhard, 2016, p. 264). In this sense, it is difficult to explain how and why an organism seeks value or avoids harm. It can be said that it does so for reasons of evolution or training, but this can easily be countered with the objection of the Dark Room Problem (Friston et al., 2012b; Sims, 2017; Klein, 2018). Friston et al. (2009) claim that highest-level expectations are “built in” to the organization of the whole organism. However, the adoption of such a hypothesis does not make it possible to fully explain the “normative” difference between successful action and some kind of error or “mistake,” because, from the point of view of the organization of the system, all such processes are just causal and factual.

In the PP approach, predictions and expectations are in some sense normative because of their key role in minimizing prediction errors. However, as Bickhard emphasizes, they only involve actual and causal processes. The key issue, therefore, is to differentiate relations between predictions and actions that are not only causal but also normative. Following Bickhard, it must be stated that to explain the normativity of functions we must demonstrate how this normativity emerges from the natural organization of the organism. By this I mean that it is necessary to refer not so much to the structure of a given system as to its actual interactions with the environment. Therefore, the normativity of prediction is less determined by its functional role in the generative model and the selection and management of actions than by its reference to relevant properties of the environment which, according to some researchers (e.g., Bruineberg and Rietveld, 2014; Bruineberg, 2017; Piekarski and Wachowski, 2018), is already structured. Because the world is already “pre-structured,” it may present to the organism certain values of reward or punishment which cannot be reduced to log-evidence or negative surprise (Friston et al., 2012a).

Based on the hypothesis it has formulated, the cognitive system takes relevant action which is supposed to interfere with the causal structure of the world in a way that will make the hypothesis or prediction probable or true ( Clark, 2016 , p. 116). In this sense, a relevant prediction serves a specific normative function which should be understood in two ways: as primary normativity; and as normativity of mechanism.

The research conducted here hinges upon primary normativity. A pair of “prediction—active inference (action)” can be treated as a kind of conditional of the form “If ‘prediction' (condition), then ‘action' (result).” The relationship between prediction and action, however, is not a typical causal relationship that we can write symbolically as “If A, then B,” but the motivational relation “If A, then B, C or D, but not E, F or G.” This still raises the following question: why is the dependence on normative predictions normative (functional) and not causal? The full answer to this question is only possible if we bear in mind that this dependence is constitutive for the interaction between an organism and its environment. This means that, on the one hand, it cannot be reduced to the structure of the organism or of a cognitive system “armed” with a generative model and simple mechanisms of reinforcement or unsupervised learning (Friston et al., 2009; Korbak, in preparation), and on the other hand, more importantly, it allows for an error that will be significant from the point of view of the organism, and not only of the external observer assessing it (Bickhard, 2016, p. 263). In other words, predictions are normative because they refer not only to the need to minimize prediction errors or uncertainty, but also to the individual beliefs3 or motivations that arise in the face of specific possibilities for action (affordances) that the environment offers to an individual organism. From this perspective, a possible error or wrong representation is of normative importance to the organism and not merely a potential result of specific causal processes. The normative involvement of predictions is also constituted by how they shape the causal transitions between contentful states and the structured environment, in such a way that they accord with a normative Bayesian rule (Shams et al., 2005; Kiefer, 2017).

For example: if I predict it will rain, then this prediction obliges me (Friston, 2010, p. 233) to take some action (which is not entirely arbitrary but determined by the nature of the prediction): I can stay at home, order a taxi, or take an umbrella, but the prediction does not necessarily determine activities such as going to bed or watching TV (i.e., it might be difficult to justify these actions by referring to the prediction that it will rain as their reason). What I do depends also on my beliefs, desires, or goals, which are relativized and conditioned by the specific properties of the world. In this sense, predictions should be considered normative. This means that actions are selected based on certain conditional potentialities and relations. “Such conditional relationships can branch—a single interaction outcome can function to indicate multiple further interactive potentialities—and they can iterate—completion of interaction A may indicate the potentiality of B, which, if completed, would indicate the potentiality of C, and so on” (Bickhard, 2009, p. 78)4.

It is important to add that the cognitive system still predicts the form of sensory signals via active inference (action). These actions depend on normative predictions, which are at the same time verified by active inference. The dependence is not causal but functional (or normative, in my terms). How effective active inference is in minimizing prediction errors depends on the selection of predictions and the internal parameters of the model. The adaptive and cognitive success of an organism is the product of normative predictive mechanisms related to certain aspects and features of the environment.

The normativity of prediction can be additionally justified as the normativity of a mechanism: a given mechanism is normative because it fulfills the conditions that must be met in order for a given action (cognitive or non-cognitive) to be effected. In other words, the statement that a given mechanism is normative simply means that it is possible for a given system X to be a mechanism for activity Y even though (e.g., at a given moment) X cannot perform Y (Garson, 2013). This means that predictions are primarily normative as well as embodied in a mechanism that is itself normative.

In my opinion, the PP approach offers a brand-new framework for investigations into the problem of normativity. This possibility has been ignored in the current literature, but it is a legitimate object of further investigation. Further theoretical research is, therefore, warranted to determine the extent and nature of the interaction between predictive mechanisms and their normative functions.

Author Contributions

MP reviewed the literature, developed the theoretical stance, wrote the manuscript, and prepared it for publication.

Funding

Work on this paper was financed by the Polish National Science Center (Narodowe Centrum Nauki) MINIATURA Grant, under decision DEC-2017/01/X/HS1/00165.

Conflict of Interest Statement

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

I would like to thank Joanna Rączaszek-Leonardi and two reviewers for all helpful comments.

1. ^ In these considerations I take the perspective of those researchers who claim that Friston's analyses are to some extent complementary with the PP approach ( Bruineberg and Rietveld, 2014 ; Hohwy, 2015 ; Seth, 2015 ; Bickhard, 2016 ; Kirchhoff et al., 2018 ).

2. ^ Technically, the minimization of uncertainty corresponds to minimizing expected free energy through action or policy selection. In terms of information theory, uncertainty is the divergence between the predicted and preferred (sensory) outcomes ( Friston et al., 2015 ).

3. ^ I use the notions of belief in the Bayesian manner as a probability distribution over some unknown state or attribute of the world. In this sense belief is a systemic prior with a high degree of abstraction, i.e. a high-level prediction concerning general knowledge about the world.

4. ^ It is important to refer here to the already mentioned concept of counterfactual hypotheses. The choice between these hypotheses is directly related to Bayesian inference to the best explanation and is crucially important for social cognition ( Palmer et al., 2015 ).

Bickhard, M. H. (2003). “Process and emergence: normative function and representation,” in Process Theories. Crossdisciplinary Studies in Dynamic Categories , ed J. Seibt (Dordrecht: Springer), 121–155. doi: 10.1007/978-94-007-1044-3_6

Bickhard, M. H. (2009). The biological foundations of cognitive science. N. Ideas Psychol. 27, 75–84. doi: 10.1016/j.newideapsych.2008.04.001

Bickhard, M. H. (2015a). Toward a model of functional brain processes I: central nervous system functional micro-architecture. Axiomathes 25, 217–238. doi: 10.1007/s10516-015-9275-x

Bickhard, M. H. (2015b). Toward a model of functional brain processes II: central nervous system functional macro-architecture. Axiomathes 25, 377–407. doi: 10.1007/s10516-015-9276-9

Bickhard, M. H. (2016). “The anticipatory brain: two approaches,” in Fundamental Issues of Artificial Intelligence , ed V. C. Müller (Berlin: Springer), 259–281. doi: 10.1007/978-3-319-26485-1_16

Bruineberg, J. (2017). “Active inference and the primacy of the ‘I Can',” in Philosophy and Predictive Processing:5 , eds T. Metzinger and W. Wiese (Frankfurt am Main: MIND Group), 1–18.

Bruineberg, J., and Rietveld, E. (2014). Self-organization, free energy minimization, and optimal grip on a field of affordances. Front. Hum. Neurosci . 8:599. doi: 10.3389/fnhum.2014.00599

Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behav. Brain Sci. 36, 181–204. doi: 10.1017/S0140525X12000477

Clark, A. (2016). Surfing Uncertainty. Prediction, Action and the Embodied Mind. Oxford: Oxford University Press.

Friston, K. (2010). The free-energy principle: a unified brain theory? Nat. Neurosci. 11, 127–138. doi: 10.1038/nrn2787

Friston, K., Thornton, C., and Clark, A. (2012b). Free-energy minimization and the dark-room problem. Front. Psychol. 3:130. doi: 10.3389/fpsyg.2012.00130

Friston, K. J., Adams, R., and Montague, R. (2012a). What is value— accumulated reward or evidence? Front. Neurorobot . 6:11. doi: 10.3389/fnbot.2012.00011

Friston, K. J., Daunizeau, J., and Kiebel, S. J. (2009). Reinforcement learning or active inference? PLoS ONE 4:e6421. doi: 10.1371/journal.pone.0006421

Friston, K. J., Daunizeau, J., Kilner, J., and Kiebel, S. J. (2010). Action and behavior: a free-energy formulation. Biol. Cybern . 102, 227–260. doi: 10.1007/s00422-010-0364-z

Friston, K. J., Rigoli, F., Ognibene, D., Mathys, C., Fitzgerald, T., and Pezzulo, G. (2015). Active inference and epistemic value. Cogn. Neurosci . 6, 187–214. doi: 10.1080/17588928.2015.1020053

Garson, J. (2013). Functional sense of mechanism. Philos. Sci. 80, 317–333. doi: 10.1086/671173

Harkness, D. L., and Keshava, A. (2017). “Moving from the What to the how and where – bayesian models and predictive processing,” in Philosophy and Predictive Processing:16 , eds T. Metzinger and W. Wiese (Frankfurt am Main: MIND Group), 1–10.

Hohwy, J. (2013). The Predictive Mind . Oxford: Oxford University Press.

Hohwy, J. (2015). “The neural organ explains the mind,” in Open MIND: 19 , eds T. Metzinger and J. M. Windt (Frankfurt am Main: MIND Group).

Hohwy, J. (2016). The self-evidencing brain. Noûs 50, 259–285. doi: 10.1111/nous.12062

Kiefer, A. (2017). “Literal Perceptual Inference,” in Philosophy and Predictive Processing: 17 , eds T. Metzinger and W. Wiese (Frankfurt am Main: MIND Group), 1–19.

Kirchhoff, M., Parr, T., Palacios, E., Friston, K., and Kiverstein, J. (2018). The Markov blankets of life: autonomy, active inference and the free energy principle. J. R. Soc. Interface 15:20170792. doi: 10.1098/rsif.2017.0792

Klein, C. (2018). What do predictive coders want? Synthese 195, 2541–2557. doi: 10.1007/s11229-016-1250-6

Palmer, C. J., Seth, A. K., and Hohwy, J. (2015). The felt presence of other minds: predictive processing, counterfactual predictions, and mentalising in autism. Conscious Cogn . 36, 376–389. doi: 10.1016/j.concog.2015.04.007

Piekarski, M., and Wachowski, W. (2018). Artefacts as social things: design-based approach to normativity. Techné 22, 400–424. doi: 10.5840/techne2018121990

Schwartenbeck, P., FitzGerald, T., Dolan, R. J., and Friston, K. (2013). Exploration, novelty, surprise, and free energy minimization. Front. Psychol . 4:710. doi: 10.3389/fpsyg.2013.00710

Seth, A. K. (2015). “Inference to the best prediction,” in Open MIND, eds T. Metzinger and J. M. Windt (Frankfurt am Main: MIND Group).

Shams, L., Ma, W. J., and Beierholm, U. (2005). Sound-induced flash illusion as an optimal percept. Neuroreport 16, 1923–1927. doi: 10.1097/01.wnr.0000187634.68504.bb

Sims, A. (2017). “The Problems with Prediction - The Dark Room Problem and the Scope Dispute,” in Philosophy and Predictive Processing:23 , eds T. Metzinger and W. Wiese (Frankfurt am Main: MIND Group).

Keywords: predictive processing, normativity, active inference, uncertainty, mechanism, environment, content, causality

Citation: Piekarski M (2019) Normativity of Predictions: A New Research Perspective. Front. Psychol. 10:1710. doi: 10.3389/fpsyg.2019.01710

Received: 25 February 2019; Accepted: 09 July 2019; Published: 23 July 2019.

Copyright © 2019 Piekarski. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Michał Piekarski, m.a.piekarski@gmail.com

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.


Normative Theories of Rational Choice: Expected Utility

We must often make decisions under conditions of uncertainty. Pursuing a degree in biology may lead to lucrative employment, or to unemployment and crushing debt. A doctor’s appointment may result in the early detection and treatment of a disease, or it may be a waste of money. Expected utility theory is an account of how to choose rationally when you are not sure which outcome will result from your acts. Its basic slogan is: choose the act with the highest expected utility.

This article discusses expected utility theory as a normative theory—that is, a theory of how people should make decisions. In classical economics, expected utility theory is often used as a descriptive theory—that is, a theory of how people do make decisions—or as a predictive theory—that is, a theory that, while it may not accurately model the psychological mechanisms of decision-making, correctly predicts people’s choices. Expected utility theory makes faulty predictions about people’s decisions in many real-life choice situations (see Kahneman & Tversky 1982); however, this does not settle whether people should make decisions on the basis of expected utility considerations.

The expected utility of an act is a weighted average of the utilities of each of its possible outcomes, where the utility of an outcome measures the extent to which that outcome is preferred, or preferable, to the alternatives. The utility of each outcome is weighted according to the probability that the act will lead to that outcome. Section 1 fleshes out this basic definition of expected utility in more rigorous terms, and discusses its relationship to choice. Section 2 discusses two types of arguments for expected utility theory: representation theorems, and long-run statistical arguments. Section 3 considers objections to expected utility theory; section 4 discusses its applications in philosophy of religion, economics, ethics, and epistemology.

1. Defining Expected Utility

The concept of expected utility is best illustrated by example. Suppose I am planning a long walk, and need to decide whether to bring my umbrella. I would rather not tote the umbrella on a sunny day, but I would rather face rain with the umbrella than without it. There are two acts available to me: taking my umbrella, and leaving it at home. Which of these acts should I choose?

This informal problem description can be recast, slightly more formally, in terms of three sorts of entities. First, there are outcomes —objects of non-instrumental preferences. In the example, we might distinguish three outcomes: either I end up dry and unencumbered; I end up dry and encumbered by an unwieldy umbrella; or I end up wet. Second, there are states —things outside the decision-maker’s control which influence the outcome of the decision. In the example, there are two states: either it is raining, or it is not. Finally, there are acts —objects of the decision-maker’s instrumental preferences, and in some sense, things that she can do. In the example, there are two acts: I may either bring the umbrella; or leave it at home. Expected utility theory provides a way of ranking the acts according to how choiceworthy they are: the higher the expected utility, the better it is to choose the act. (It is therefore best to choose the act with the highest expected utility—or one of them, in the event that several acts are tied.)

Following general convention, I will make the following assumptions about the relationships between acts, states, and outcomes.

  • States, acts, and outcomes are propositions, i.e., sets of possibilities. There is a maximal set of possibilities, \(\Omega\), of which each state, act, or outcome is a subset.
  • The set of acts, the set of states, and the set of outcomes are all partitions on \(\Omega\). In other words, acts and states are individuated so that every possibility in \(\Omega\) is one where exactly one state obtains, the agent performs exactly one act, and exactly one outcome ensues.
  • Acts and states are logically independent, so that no state rules out the performance of any act.
  • I will assume for the moment that, given a state of the world, each act has exactly one possible outcome. (Section 1.1 briefly discusses how one might weaken this assumption.)

So the example of the umbrella can be depicted in the following matrix, where each column corresponds to a state of the world; each row corresponds to an act; and each entry corresponds to the outcome that results when the act is performed in the state of the world.

                 rain               no rain
take umbrella    encumbered, dry    encumbered, dry
leave at home    wet                free, dry

Having set up the basic framework, I can now rigorously define expected utility. The expected utility of an act \(A\) (for instance, taking my umbrella) depends on two features of the problem:

  • The value of each outcome, measured by a real number called a utility .
  • The probability of each outcome conditional on \(A\).

Given these two pieces of information, \(A\)’s expected utility is defined as:

\[EU(A) = \sum_{o \in O} P_{A}(o) \cdot U(o),\]

where \(O\) is the set of outcomes, \(P_{A}(o)\) is the probability of outcome \(o\) conditional on \(A\), and \(U(o)\) is the utility of \(o\).

The next two subsections will unpack the conditional probability function \(P_A\) and the utility function \(U\).
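To make the definition concrete, here is a minimal Python sketch of the expected-utility calculation. The function and variable names are illustrative, and the probabilities and utilities are the ones used in the umbrella example later in this entry:

```python
def expected_utility(outcome_probs, utility):
    """Weighted average of outcome utilities: EU(A) = sum over o of P_A(o) * U(o)."""
    return sum(p * utility[o] for o, p in outcome_probs.items())

# Umbrella example: P(rain) = 0.6, P(no rain) = 0.4.
U = {"encumbered, dry": 5, "wet": 0, "free, dry": 10}

# Each act is a distribution over outcomes, conditional on the act.
take = {"encumbered, dry": 1.0}          # dry either way, but encumbered
leave = {"wet": 0.6, "free, dry": 0.4}   # outcome depends on the weather

print(expected_utility(take, U))   # 5.0
print(expected_utility(leave, U))  # 4.0
```

Representing each act as a probability distribution over outcomes makes the expected utility exactly the probability-weighted sum the definition describes.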

1.1 Conditional Probabilities

The term \(P_{A}(o)\) represents the probability of \(o\) given \(A\)—roughly, how likely it is that outcome \(o\) will occur, on the supposition that the agent chooses act \(A\). (For the axioms of probability, see the entry on interpretations of probability.) To understand what this means, we must answer two questions. First, which interpretation of probability is appropriate? And second, what does it mean to assign a probability on the supposition that the agent chooses act \(A\)?

Expected utility theorists often interpret probability as measuring individual degree of belief, so that a proposition \(E\) is likely (for an agent) to the extent that that agent is confident of \(E\) (see, for instance, Ramsey 1926, Savage 1972, Jeffrey 1983). But nothing in the formalism of expected utility theory forces this interpretation on us. We could instead interpret probabilities as objective chances (as in von Neumann and Morgenstern 1944), or as the degrees of belief that are warranted by the evidence, if we thought these were a better guide to rational action. (See the entry on interpretations of probability for discussion of these and other options.)

What is it to have a probability on the supposition that the agent chooses \(A\)? Here, there are two basic types of answer, corresponding to evidential decision theory and causal decision theory.

According to evidential decision theory, endorsed by Jeffrey (1983), the relevant suppositional probability \(P_{A}(o)\) is the conditional probability \(P(o \mid A)\), defined as the ratio of two unconditional probabilities: \(P(A \amp o) / P(A)\).

Against Jeffrey’s definition of expected utility, Spohn (1977) and Levi (1991) object that a decision-maker should not assign probabilities to the very acts under deliberation: when freely deciding whether to perform an act \(A\), you shouldn’t take into account your beliefs about whether you will perform \(A\). If Spohn and Levi are right, then Jeffrey’s ratio is undefined (since its denominator is undefined).

Nozick (1969) raises another objection: Jeffrey’s definition gives strange results in the Newcomb Problem . A predictor hands you a closed box, containing either $0 or $1 million, and offers you an open box, containing an additional $1,000. You can either refuse the open box (“one-box”) or take the open box (“two-box”). But there’s a catch: the predictor has predicted your choice beforehand, and all her predictions are 90% accurate. In other words, the probability that you one-box, given that she predicts you one-box, is 90%, and the probability that you two-box, given that she predicts you two-box, is 90%. Finally, the contents of the closed box depend on the prediction: if the predictor thought you would two-box, she put nothing in the closed box, while if she thought you would one-box, she put $1 million in the closed box. The matrix for your decision looks like this:

           predicted one-boxing    predicted two-boxing
one-box    $1,000,000              $0
two-box    $1,001,000              $1,000

Two-boxing dominates one-boxing: in every state, two-boxing yields a better outcome. Yet on Jeffrey’s definition of conditional probability, one-boxing has a higher expected utility than two-boxing. There is a high conditional probability of finding $1 million in the closed box, given that you one-box, so one-boxing has a high expected utility. Likewise, there is a high conditional probability of finding nothing in the closed box, given that you two-box, so two-boxing has a low expected utility.
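The evidential calculation can be sketched in Python. This is an illustrative reconstruction, not the entry's own notation: it assumes utility is linear in dollars, and it reads the predictor's 90% reliability as a 0.9 conditional probability that the closed box contains $1 million given that you one-box:

```python
def jeffrey_eu(cond_probs, utility):
    """Evidential expected utility: EU(A) = sum over o of P(o | A) * U(o)."""
    return sum(p * utility(dollars) for dollars, p in cond_probs.items())

U = lambda dollars: dollars  # assume utility is linear in money (illustration only)

# P(outcome | act), treating the predictor as 90% reliable.
one_box = {1_000_000: 0.9, 0: 0.1}
two_box = {1_001_000: 0.1, 1_000: 0.9}

# One-boxing maximizes evidential expected utility (~900,000 vs ~101,000).
print(jeffrey_eu(one_box, U) > jeffrey_eu(two_box, U))  # True
```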

Causal decision theory is an alternative proposal that gets around these problems. It does not require (but still permits) acts to have probabilities, and it recommends two-boxing in the Newcomb problem.

Causal decision theory comes in many varieties, but I’ll consider a representative version proposed by Savage (1972), which calculates \(P_{A}(o)\) by summing the probabilities of states that, when combined with the act \(A\), lead to the outcome \(o\). Let \(f_{A,s}(o)\) be a function of outcomes, which maps \(o\) to 1 if \(o\) results from performing \(A\) in state \(s\), and maps \(o\) to 0 otherwise. Then

\[P_{A}(o) = \sum_{s} P(s) \cdot f_{A,s}(o).\]

On Savage’s proposal, two-boxing comes out with a higher expected utility than one-boxing. This result holds no matter which probabilities you assign to the states prior to your decision. Let \(x\) be the probability you assign to the state that the closed box contains $1 million. According to Savage, the expected utilities of one-boxing and two-boxing, respectively, are:

As long as the larger monetary amounts are assigned strictly larger utilities, the second sum (the utility of two-boxing) is guaranteed to be larger than the first (the utility of one-boxing).
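By contrast with the evidential formula, Savage's causal formula weights outcomes by the unconditional probabilities of the states. The sketch below (illustrative names; linear utility in dollars assumed) checks that two-boxing comes out ahead for any prior probability x of the $1 million state:

```python
def savage_eu(state_probs, outcome_of, utility):
    """Causal expected utility: sum over states s of P(s) * U(outcome of the act in s)."""
    return sum(p * utility(outcome_of[s]) for s, p in state_probs.items())

U = lambda dollars: dollars  # assume utility is linear in money (illustration only)

for x in (0.1, 0.5, 0.9):  # x = P(closed box contains $1M), fixed before the choice
    states = {"million": x, "empty": 1 - x}
    eu_one = savage_eu(states, {"million": 1_000_000, "empty": 0}, U)
    eu_two = savage_eu(states, {"million": 1_001_000, "empty": 1_000}, U)
    assert eu_two > eu_one  # two-boxing wins (by $1,000) whatever x is
```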

Savage assumes that each act and state are enough to uniquely determine an outcome. But there are cases where this assumption breaks down. Suppose you offer to sell me the following gamble: you will toss a coin; if the coin lands heads, I win $100; and if the coin lands tails, I lose $100. But I refuse the gamble, and the coin is never tossed. There is no outcome that would have resulted, had the coin been tossed—I might have won $100, and I might have lost $100.

We can generalize Savage’s proposal by letting \(f_{A,s}\) be a probability function that maps outcomes to real numbers in the \([0, 1]\) interval. Lewis (1981), Skyrms (1980), and Sobel (1994) equate \(f_{A,s}(o)\) with the objective chance that \(o\) would be the outcome if state \(s\) obtained and the agent chose action \(A\).

In some cases—most famously the Newcomb problem—the Jeffrey definition and the Savage definition of expected utility come apart. But whenever the following two conditions are satisfied, they agree.

  • Acts are probabilistically independent of states. In formal terms, for all acts \(A\) and states \(s\), \[ P(s) = P(s \mid A) = \frac{P(s \amp A)}{P(A)}. \] (This is the condition that is violated in the Newcomb problem.)
  • For all outcomes \(o\), acts \(A\), and states \(s\), \(f_{A,s}(o)\) is equal to the conditional probability of \(o\) given \(A\) and \(s\); in formal terms, \[ f_{A,s}(o) = P(o \mid A \amp s) = \frac{P(o \amp A \amp s)}{P(A \amp s)}.\] (The need for this condition arises when acts and states fail to uniquely determine an outcome; see Lewis 1981.)
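These agreement conditions can be spot-checked numerically. The sketch below uses hypothetical helper names and reuses the umbrella numbers, encoding outcomes directly by their utilities; it verifies that when acts are probabilistically independent of states and each act-state pair determines an outcome, the evidential and causal formulas coincide:

```python
def jeffrey_eu(act, P_state_given_act, outcome_of, U):
    # Evidential: weight each state by P(s | A).
    return sum(P_state_given_act[act][s] * U(outcome_of[(act, s)])
               for s in P_state_given_act[act])

def savage_eu(act, P_state, outcome_of, U):
    # Causal: weight each state by its unconditional probability P(s).
    return sum(P_state[s] * U(outcome_of[(act, s)]) for s in P_state)

P_state = {"rain": 0.6, "no rain": 0.4}
P_given = {"take": P_state, "leave": P_state}  # independence: P(s | A) = P(s)
outcome_of = {("take", "rain"): 5, ("take", "no rain"): 5,     # outcomes encoded
              ("leave", "rain"): 0, ("leave", "no rain"): 10}  # by their utilities
U = lambda u: u

for act in ("take", "leave"):
    assert jeffrey_eu(act, P_given, outcome_of, U) == savage_eu(act, P_state, outcome_of, U)
```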

1.2 Outcome Utilities

The term \(U(o)\) represents the utility of the outcome \(o\)—roughly, how valuable \(o\) is. Formally, \(U\) is a function that assigns a real number to each of the outcomes. (The units associated with \(U\) are typically called utiles, so that if \(U(o) = 2\), we say that \(o\) is worth 2 utiles.) The greater the utility, the more valuable the outcome.

What kind of value is measured in utiles? Utiles are typically not taken to be units of currency, like dollars, pounds, or yen. Bernoulli (1738) argued that money and other goods have diminishing marginal utility: as an agent gets richer, every successive dollar (or gold watch, or apple) is less valuable to her than the last. He gives the following example: It makes rational sense for a rich man, but not for a pauper, to pay 9,000 ducats in exchange for a lottery ticket that yields a 50% chance at 20,000 ducats and a 50% chance at nothing. Since the lottery gives the two men the same chance at each monetary prize, the prizes must have different values depending on whether the player is poor or rich.
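Bernoulli's point can be reproduced with a concrete diminishing-marginal-utility function. The sketch below uses logarithmic utility of final wealth (Bernoulli's own proposal); the specific wealth levels chosen for the pauper and the rich man are illustrative assumptions, not figures from the entry:

```python
from math import log

def eu_log(wealth_outcomes):
    """Expected log-utility of final wealth (diminishing marginal utility)."""
    return sum(p * log(w) for w, p in wealth_outcomes.items())

ticket, prize = 9_000, 20_000  # 50% chance at 20,000 ducats, 50% chance at nothing

for wealth in (10_000, 100_000):  # a pauper and a rich man (illustrative levels)
    keep = eu_log({wealth: 1.0})
    play = eu_log({wealth - ticket: 0.5, wealth - ticket + prize: 0.5})
    print(wealth, play > keep)
# 10000 False  -> buying the ticket lowers the pauper's expected utility
# 100000 True  -> the same gamble raises the rich man's expected utility
```

Because each successive ducat is worth less under a logarithmic utility, the same even-odds gamble is rational for the rich man but not for the pauper.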

Classic utilitarians such as Bentham (1789), Mill (1861), and Sidgwick (1907) interpreted utility as a measure of pleasure or happiness. For these authors, to say \(A\) has greater utility than \(B\) (for an agent or a group of agents) is to say that \(A\) results in more pleasure or happiness than \(B\) (for that agent or group of agents).

One objection to this interpretation of utility is that there may not be a single good (or indeed any good) which rationality requires us to seek. But if we understand “utility” broadly enough to include all potentially desirable ends—pleasure, knowledge, friendship, health and so on—it’s not clear that there is a unique correct way to make the tradeoffs between different goods so that each outcome receives a utility. There may be no good answer to the question of whether the life of an ascetic monk contains more or less good than the life of a happy libertine—but assigning utilities to these options forces us to compare them.

Contemporary decision theorists typically interpret utility as a measure of preference, so that to say that \(A\) has greater utility than \(B\) (for an agent) is simply to say that the agent prefers \(A\) to \(B\). It is crucial to this approach that preferences hold not just between outcomes (such as amounts of pleasure, or combinations of pleasure and knowledge), but also between uncertain prospects (such as a lottery that pays $1 million dollars if a particular coin lands heads, and results in an hour of painful electric shocks if the coin lands tails). Section 2 of this article addresses the formal relationship between preference and choice in detail.

Expected utility theory does not require that preferences be selfish or self-interested. Someone can prefer giving money to charity over spending the money on lavish dinners, or prefer sacrificing his own life over allowing his child to die. Sen (1977) suggests that each person’s psychology is best represented using three rankings: one representing the person’s narrow self-interest, a second representing the person’s self-interest construed more broadly to account for feelings of sympathy (e.g., suffering when watching another person suffer), and a third representing the person’s commitments, which may require her to act against her self-interest broadly construed.

Broome (1991, Ch. 6) interprets utilities as measuring comparisons of objective betterness and worseness, rather than personal preferences: to say that \(A\) has a greater utility than \(B\) is to say that \(A\) is objectively better than \(B\), or that a rational person would prefer \(A\) to \(B\). Just as there is nothing in the formalism of probability theory that requires us to use subjective rather than objective probabilities, so there is nothing in the formalism of expected utility theory that requires us to use subjective rather than objective values.

Those who interpret utilities in terms of personal preference face a special challenge: the so-called problem of interpersonal utility comparisons . When making decisions about how to distribute shared resources, we often want to know if our acts would make Alice better off than Bob—and if so, how much better off. But if utility is a measure of individual preference, there is no clear, meaningful way of making these comparisons. Alice’s utilities are constituted by Alice’s preferences, Bob’s utilities are constituted by Bob’s preferences, and there are no preferences spanning Alice and Bob. We can’t assume that Alice’s utility 10 is equivalent to Bob’s utility 10, any more than we can assume that getting an A grade in differential equations is equivalent to getting an A grade in basket weaving.

Now is a good time to consider which features of the utility function carry meaningful information. Comparisons are informative: if \(U(o_1) \gt U(o_2)\) (for a person), then \(o_1\) is better than (or preferred to) \(o_2\). But it is not only comparisons that are informative—the utility function must carry other information, if expected utility theory is to give meaningful results.

To see why, consider the umbrella example again. This time, I’ve filled in a probability for each state, and a utility for each outcome.

                 rain (P = 0.6)             no rain (P = 0.4)
take umbrella    encumbered, dry (U = 5)    encumbered, dry (U = 5)
leave at home    wet (U = 0)                free, dry (U = 10)

The expected utility of taking the umbrella is

\[EU(\text{take}) = 0.6 \times 5 + 0.4 \times 5 = 5,\]

while the expected utility of leaving the umbrella is

\[EU(\text{leave}) = 0.6 \times 0 + 0.4 \times 10 = 4.\]

Since \(EU(\text{take}) \gt EU(\text{leave})\), expected utility theory tells me that taking the umbrella is better than leaving it.

But now, suppose we change the utilities of the outcomes: instead of using \(U\), we use \(U'\).

                 rain (P = 0.6)              no rain (P = 0.4)
take umbrella    encumbered, dry (U' = 4)    encumbered, dry (U' = 4)
leave at home    wet (U' = 2)                free, dry (U' = 8)

The new expected utility of taking the umbrella is

\[EU'(\text{take}) = 0.6 \times 4 + 0.4 \times 4 = 4,\]

while the new expected utility of leaving the umbrella is

\[EU'(\text{leave}) = 0.6 \times 2 + 0.4 \times 8 = 4.4.\]

Since \(EU'(\text{take}) \lt EU'(\text{leave})\), expected utility theory tells me that leaving the umbrella is better than taking it.

The utility functions \(U\) and \(U'\) rank the outcomes in exactly the same way: free, dry is best; encumbered, dry ranks in the middle; and wet is worst. Yet expected utility theory gives different advice in the two versions of the problem. So there must be some substantive difference between preferences appropriately described by \(U\), and preferences appropriately described by \(U'\). Otherwise, expected utility theory is fickle, and liable to change its advice when fed different descriptions of the same problem.
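The fickleness worry can be demonstrated directly: U and U' order the outcomes identically, yet they reverse the ranking of the acts. A short Python check (the encoding of acts as outcome distributions is illustrative):

```python
def expected_utility(dist, utility):
    return sum(p * utility[o] for o, p in dist.items())

# Acts as distributions over outcomes (P(rain) = 0.6).
take = {"encumbered, dry": 1.0}
leave = {"wet": 0.6, "free, dry": 0.4}

U = {"encumbered, dry": 5, "wet": 0, "free, dry": 10}
U_prime = {"encumbered, dry": 4, "wet": 2, "free, dry": 8}  # same ordering of outcomes

print(expected_utility(take, U) > expected_utility(leave, U))              # True: take wins
print(expected_utility(take, U_prime) > expected_utility(leave, U_prime))  # False: leave wins
```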

When do two utility functions represent the same basic state of affairs? Measurement theory answers the question by characterizing the allowable transformations of a utility function—ways of changing it that leave all of its meaningful features intact. If we characterize the allowable transformations of a utility function, we have thereby specified which of its features are meaningful.

Defenders of expected utility theory typically require that utility be measured by a linear scale, where the allowable transformations are all and only the positive linear transformations, i.e., functions \(f\) of the form

\[f(U(o)) = x \cdot U(o) + y\]

for real numbers \(x \gt 0\) and \(y\).

Positive linear transformations of outcome utilities will never affect the verdicts of expected utility theory: if \(A\) has greater expected utility than \(B\) where utility is measured by function \(U\), then \(A\) will also have greater expected utility than \(B\) where utility is measured by any positive linear transformation of \(U\).
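That invariance claim is easy to verify numerically. The sketch below (illustrative utilities for three outcomes and two acts) checks that random positive linear transformations never change which act has the higher expected utility:

```python
import random

def expected_utility(dist, utility):
    return sum(p * utility(o) for o, p in dist.items())

U = {0: 0.0, 1: 5.0, 2: 10.0}   # arbitrary utilities on three outcomes
A = {0: 0.2, 1: 0.5, 2: 0.3}    # two acts, as distributions over outcomes
B = {0: 0.1, 1: 0.8, 2: 0.1}

random.seed(0)
base = expected_utility(A, lambda o: U[o]) > expected_utility(B, lambda o: U[o])
for _ in range(100):
    x, y = random.uniform(0.01, 10), random.uniform(-10, 10)
    f = lambda o: x * U[o] + y  # positive linear transformation of U (x > 0)
    assert (expected_utility(A, f) > expected_utility(B, f)) == base
```

Since \(x \gt 0\), the transformation multiplies every expected utility by the same positive factor and shifts it by the same constant, so the ordering of acts cannot change.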

2. Arguments for Expected Utility Theory

Why choose acts that maximize expected utility? One possible answer is that expected utility theory is rational bedrock—that means-end rationality essentially involves maximizing expected utility. For those who find this answer unsatisfying, however, there are two further sources of justification. First, there are long-run arguments, which rely on evidence that expected-utility maximization is a profitable policy in the long term. Second, there are arguments based on representation theorems, which suggest that certain rational constraints on preference entail that all rational agents maximize expected utility.

2.1 Long-Run Arguments

One reason for maximizing expected utility is that it makes for good policy in the long run. Feller (1968) gives a version of this argument. He relies on two mathematical facts about probabilities: the strong and weak laws of large numbers. Both these facts concern sequences of independent, identically distributed trials—the sort of setup that results from repeatedly betting the same way on a sequence of roulette spins or craps games. Both the weak and strong laws of large numbers say, roughly, that over the long run, the average amount of utility gained per trial is overwhelmingly likely to be close to the expected value of an individual trial.

The weak law of large numbers states that where each trial has an expected value of \(\mu\), for any arbitrarily small real numbers \(\epsilon \gt 0\) and \(\delta \gt 0\), there is some finite number of trials \(n\), such that for all \(m\) greater than or equal to \(n\), with probability at least \(1-\delta\), the gambler’s average gains for the first \(m\) trials will fall within \(\epsilon\) of \(\mu\). In other words, in a long run of similar gambles, the average gain per trial is highly likely to become arbitrarily close to the gamble’s expected value within a finite amount of time. So in the finite long run, the average value associated with a gamble is overwhelmingly likely to be close to its expected value.

The strong law of large numbers states that where each trial has an expected value of \(\mu\), with probability 1, for any arbitrarily small real number \(\epsilon \gt 0\), as the number of trials increases, the gambler’s average winnings per trial will fall within \(\epsilon\) of \(\mu\). In other words, as the number of repetitions of a gamble approaches infinity, the average gain per trial will become arbitrarily close to the gamble’s expected value with probability 1. So in the long run, the average value associated with a gamble is virtually certain to equal its expected value.
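Both laws can be illustrated with a quick Monte Carlo simulation. This is an illustration only, not part of Feller's argument, and the gamble's payoffs are made up:

```python
import random

random.seed(42)
# A repeatable gamble: win 10 utiles with probability 0.3, lose 2 with probability 0.7.
outcomes, probs = [10, -2], [0.3, 0.7]
mu = sum(o * p for o, p in zip(outcomes, probs))  # expected value per trial = 1.6

for n in (100, 10_000, 1_000_000):
    draws = random.choices(outcomes, weights=probs, k=n)
    print(n, sum(draws) / n)  # the average gain per trial tends toward mu = 1.6
```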

There are several objections to these long run arguments. First, many decisions cannot be repeated over indefinitely many similar trials. Decisions about which career to pursue, whom to marry, and where to live, for instance, are made at best a small finite number of times. Furthermore, where these decisions are made more than once, different trials involve different possible outcomes, with different probabilities. It is not clear why long-run considerations about repeated gambles should bear on these single-case choices.

Second, the argument relies on two independence assumptions, one or both of which may fail. One assumption holds that the probabilities of the different trials are independent. This is true of casino gambles, but not true of other choices where we wish to use decision theory—e.g., choices about medical treatment. My remaining sick after one course of antibiotics makes it more likely I will remain sick after the next course, since it increases the chance that antibiotic-resistant bacteria will spread through my body. The argument also requires that the utilities of different trials be independent, so that winning a prize on one trial makes the same contribution to the decision-maker’s overall utility no matter what she wins on other trials. But this assumption is violated in many real-world cases. Due to the diminishing marginal utility of money, winning $10 million on ten games of roulette is not worth ten times as much as winning $1 million on one game of roulette.

A third problem is that the strong and weak laws of large numbers are modally weak. Neither law entails that if a gamble were repeated indefinitely (under the appropriate assumptions), the average utility gain per trial would be close to the game’s expected utility. They establish only that the average utility gain per trial would with high probability be close to the game’s expected utility. But high probability—even probability 1—is not certainty. (Standard probability theory rejects Cournot’s Principle , which says events with low or zero probability will not happen. But see Shafer (2005) for a defense of Cournot’s Principle.) For any sequence of independent, identically distributed trials, it is possible for the average utility payoff per trial to diverge arbitrarily far from the expected utility of an individual trial.

2.2 Representation Theorems

A second type of argument for expected utility theory relies on so-called representation theorems. I follow Zynda’s (2000) formulation of this argument—slightly modified to reflect the role of utilities as well as probabilities. The argument has three premises:

The Rationality Condition. The axioms of expected utility theory are the axioms of rational preference.

Representability. If a person’s preferences obey the axioms of expected utility theory, then she can be represented as having degrees of belief that obey the laws of the probability calculus [and a utility function such that she prefers acts with higher expected utility].

The Reality Condition. If a person can be represented as having degrees of belief that obey the probability calculus [and a utility function such that she prefers acts with higher expected utility], then the person really has degrees of belief that obey the laws of the probability calculus [and really does prefer acts with higher expected utility].

These premises entail the following conclusion.

If a person [fails to prefer acts with higher expected utility], then that person violates at least one of the axioms of rational preference.

If the premises are true, the argument shows that there is something wrong with people whose preferences are at odds with expected utility theory—they violate the axioms of rational preference. Let us consider each of the premises in greater detail, beginning with the key premise, Representability.

A probability function and a utility function together represent a set of preferences just in case the following formula holds for all values of \(A\) and \(B\) in the domain of the preference relation:

\[A \succeq B \text{ if and only if } EU(A) \ge EU(B)\]

where \(A \succeq B\) means that \(A\) is weakly preferred to \(B\).

Mathematical proofs of Representability are called representation theorems. Section 2.2 surveys four of the most influential representation theorems, each of which relies on a different set of axioms.

No matter which set of axioms we use, the Rationality Condition is controversial. In some cases, preferences that seem rationally permissible—perhaps even rationally required—violate the axioms of expected utility theory. Section 3 discusses such cases in detail.

The Reality Condition is also controversial. Hampton (1994), Zynda (2000), and Meacham and Weisberg (2011) all point out that to be representable using a probability and utility function is not to have a probability and utility function. After all, an agent who can be represented as an expected utility maximizer with degrees of belief that obey the probability calculus can also be represented as someone who fails to maximize expected utility with degrees of belief that violate the probability calculus. Why think the expected utility representation is the right one?

There are several options. Perhaps the defender of representation theorems can stipulate that what it is to have particular degrees of belief and utilities is just to have the corresponding preferences. The main challenge for defenders of this response is to explain why representations in terms of expected utility are explanatorily useful, and why they are better than alternative representations. Or perhaps probabilities and utilities are good cleaned-up theoretical substitutes for our folk notions of belief and desire: precise scientific stand-ins for imprecise folk concepts. Meacham and Weisberg challenge this response, arguing that probabilities and utilities are poor stand-ins for our folk notions. A third possibility, suggested by Zynda, is that facts about degrees of belief are made true independently of the agent’s preferences, and provide a principled way to restrict the range of acceptable representations. The challenge for defenders of this type of response is to specify what these additional facts are.

We now turn to four influential representation theorems, which differ from each other in three philosophically significant ways.

First, different representation theorems disagree about the objects of preference and utility. Are they repeatable? Must they be wholly within the agent’s control?

Second, representation theorems differ in their treatment of probability. They disagree about which entities have probabilities, and about whether the same objects can have both probabilities and utilities.

Third, while every representation theorem proves that for a suitable preference ordering, there exist a probability function and a utility function representing the preference ordering, the theorems differ as to how unique this probability and utility function are. In other words, they differ as to which transformations of the probability and utility functions are allowable.

2.2.1 Ramsey

The idea of a representation theorem for expected utility dates back to Ramsey (1926). (His sketch of a representation theorem is subsequently filled in by Bradley (2004) and Elliott (2017).) Ramsey assumes that preferences are defined over a domain of gambles, which yield one prize on the condition that a proposition \(P\) is true, and a different prize on the condition that \(P\) is false. (Examples of gambles: you receive a onesie if you’re having a baby and a bottle of scotch otherwise; you receive twenty dollars if Bojack wins the Kentucky Derby and lose a dollar otherwise.)

Ramsey calls a proposition ethically neutral when “two possible worlds differing only in regard to [its truth] are always of equal value”. For an ethically neutral proposition, probability 1/2 can be defined in terms of preference: such a proposition has probability 1/2 just in case you are indifferent as to which side of it you bet on. (So if Bojack wins the Kentucky Derby is an ethically neutral proposition, it has probability 1/2 just in case you are indifferent between winning twenty dollars if it’s true and losing a dollar otherwise, and winning twenty dollars if it’s false and losing a dollar otherwise.)

By positing an ethically neutral proposition with probability 1/2, together with a rich space of prizes, Ramsey defines numerical utilities for prizes. (The rough idea is that if you are indifferent between receiving a middling prize \(m\) for certain, and a gamble that yields a better prize \(b\) if the ethically neutral proposition is true and a worse prize \(w\) if it is false, then the utility of \(m\) is halfway between the utilities of \(b\) and \(w\).) Using these numerical utilities, he then exploits the definition of expected utility to define probabilities for all other propositions.

The rough idea is to exploit the richness of the space of prizes, which ensures that for any gamble \(g\) that yields better prize \(b\) if \(E\) is true and worse prize \(w\) if \(E\) is false, the agent is indifferent between \(g\) and some middling prize \(m\). This means that \(EU(g) = EU(m)\). Using some algebra, plus the fact that \(EU(g) = P(E)U(b) + (1-P(E))U(w)\), Ramsey shows that

\[P(E) = \frac{U(m) - U(w)}{U(b) - U(w)}\]

2.2.2 Von Neumann and Morgenstern

Von Neumann and Morgenstern (1944) take preferences to be defined over a domain of lotteries. Some of these lotteries are constant, and yield a single prize with certainty. (Prizes might include a banana, a million dollars, a million dollars’ worth of debt, death, or a new car.) Lotteries can also have other lotteries as prizes, so that one can have a lottery with a 40% chance of yielding a banana, and a 60% chance of yielding a 50-50 gamble between a million dollars and death. The domain of lotteries is closed under a mixing operation, so that if \(L\) and \(L'\) are lotteries and \(x\) is a real number in the \([0, 1]\) interval, then there is a lottery \(x L + (1-x) L'\) that yields \(L\) with probability \(x\) and \(L'\) with probability \(1-x\). They show that every preference relation obeying certain axioms can be represented by the probabilities used to define the lotteries, together with a utility function which is unique up to positive linear transformation.
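The mixing operation and the resulting expected utilities can be illustrated with a short script. This is only a sketch: the prize names and utility numbers are arbitrary assumptions made up for the example, not values from the text.

```python
# A minimal sketch of von Neumann-Morgenstern lottery mixing. A compound
# lottery reduces to a simple lottery over prizes, whose expected utility
# is then computed directly. Prize names and utilities are assumed.

def mix(x, lottery_a, lottery_b):
    """The lottery x*A + (1-x)*B, as a dict from prizes to probabilities."""
    combined = {}
    for prize, p in lottery_a.items():
        combined[prize] = combined.get(prize, 0.0) + x * p
    for prize, p in lottery_b.items():
        combined[prize] = combined.get(prize, 0.0) + (1 - x) * p
    return combined

def expected_utility(lottery, utility):
    return sum(p * utility[prize] for prize, p in lottery.items())

# 40% chance of a banana, 60% chance of a 50-50 gamble between $1M and death:
inner = mix(0.5, {"million dollars": 1.0}, {"death": 1.0})
outer = mix(0.4, {"banana": 1.0}, inner)
# outer reduces to: banana 0.4, million dollars 0.3, death 0.3

u = {"banana": 1, "million dollars": 10, "death": -100}  # assumed utilities
eu = expected_utility(outer, u)   # 0.4*1 + 0.3*10 + 0.3*(-100)
```

The key point is that the compound lottery and its reduced simple form receive the same expected utility, which is what lets the theorem work with simple lotteries over prizes.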

2.2.3 Savage

Instead of taking probabilities for granted, as von Neumann and Morgenstern do, Savage (1972) defines them in terms of preferences over acts. Savage posits three separate domains. Probability attaches to events, which we can think of as disjunctions of states, while utility and intrinsic preference attach to outcomes. Expected utility and non-intrinsic preference attach to acts.

For Savage, acts, states, and outcomes must satisfy certain constraints. Acts must be wholly under the agent’s control (so publishing my paper in Mind is not an act, since it depends partly on the editor’s decision, which I do not control). Outcomes must have the same utility regardless of which state obtains (so "I win a fancy car" is not an outcome, since the utility of the fancy car will be greater in states where the person I most want to impress wishes I had a fancy car, and less in states where I lose my driver’s license). No state can rule out the performance of any act, and an act and a state together must determine an outcome with certainty. For each outcome \(o\), there is a constant act which yields \(o\) in every state. (Thus, if world peace is an outcome, there is an act that results in world peace, no matter what the state of the world.) Finally, he assumes that for any two acts \(A\) and \(B\) and any event \(E\), there is a mixed act \(A_E \amp B_{\sim E}\) that yields the same outcome as \(A\) if \(E\) is true, and the same outcome as \(B\) otherwise. (Thus, if world peace and the end of the world are both outcomes, then there is a mixed act that results in world peace if a certain coin lands heads, and the end of the world otherwise.)

Savage postulates a preference relation over acts, and gives axioms governing that preference relation. He then defines subjective probabilities, or degrees of belief, in terms of preferences. The key move is to define an “at least as likely as” relation between events; I paraphrase here.

Suppose \(A\) and \(B\) are constant acts such that \(A\) is preferred to \(B\). Then \(E\) is at least as likely as \(F\) just in case the agent either prefers \(A_E \amp B_{\sim E}\) (the act that yields \(A\) if \(E\) obtains, and \(B\) otherwise) to \(A_F \amp B_{\sim F}\) (the act that yields \(A\) if \(F\) obtains, and \(B\) otherwise), or else is indifferent between \(A_E \amp B_{\sim E}\) and \(A_F \amp B_{\sim F}\).

The thought behind the definition is that the agent considers \(E\) at least as likely as \(F\) just in case she would not rather bet on \(F\) than on \(E\).

Savage then gives axioms constraining rational preference, and shows that any set of preferences satisfying those axioms yields an “at least as likely” relation that can be uniquely represented by a probability function. In other words, there is one and only one probability function \(P\) such that for all \(E\) and \(F\), \(P(E) \ge P(F)\) if and only if \(E\) is at least as likely as \(F\). Every preference relation obeying Savage’s axioms is represented by this probability function \(P\), together with a utility function which is unique up to positive linear transformation.

Savage’s representation theorem gives strong results: starting with a preference ordering alone, we can find a single probability function, and a narrow class of utility functions, which represent that preference ordering. The downside, however, is that Savage has to build in implausibly strong assumptions about the domain of acts.

Luce and Suppes (1965) point out that Savage’s constant acts are implausible. (Recall that constant acts yield the same outcome, and the same amount of value, in every state.) Take some very good outcome—total bliss for everyone. Is there really a constant act that has this outcome in every possible state, including states where the human race is wiped out by a meteor? Savage’s reliance on a rich space of mixed acts is also problematic: he has to assume that for any two outcomes and any event, there is a mixed act that yields the first outcome if the event occurs, and the second outcome otherwise. Is there really an act that yields total bliss if everyone is killed by an antibiotic-resistant plague, and total misery otherwise? Luce and Krantz (1971) suggest ways of reformulating Savage’s representation theorem that weaken these assumptions, but Joyce (1999) argues that even on the weakened assumptions, the domain of acts remains implausibly rich.

2.2.4 Bolker and Jeffrey

Bolker (1966) proves a general representation theorem about mathematical expectations, which Jeffrey (1983) uses as the basis for a philosophical account of expected utility theory. Bolker’s theorem assumes a single domain of propositions, which are objects of preference, utility, and probability alike. Thus, the proposition that it will rain today has a utility, as well as a probability. Jeffrey interprets this utility as the proposition’s news value: a measure of how happy or disappointed I would be to learn that the proposition was true. By convention, he sets the value of the necessary proposition at 0; the necessary proposition is no news at all! Likewise, the proposition that I take my umbrella to work, which is an act, has a probability as well as a utility. Jeffrey interprets this to mean that I have degrees of belief about what I will do.

Bolker gives axioms constraining preference, and shows that any preferences satisfying his axioms can be represented by a probability measure \(P\) and a utility measure \(U\). However, Bolker’s axioms do not ensure that \(P\) is unique, or that \(U\) is unique up to positive linear transformation. Nor do they allow us to define comparative probability in terms of preference. Instead, where \(P\) and \(U\) jointly represent a preference ordering, Bolker shows that the pair \(\langle P, U \rangle\) is unique up to a fractional linear transformation.

In technical terms, where \(U\) is a utility function normalized so that \(U(\Omega) = 0\), \(inf\) is the greatest lower bound of the values assigned by \(U\), \(sup\) is the least upper bound of the values assigned by \(U\), and \(\lambda\) is a parameter falling between \(-1/inf\) and \(-1/sup\), the fractional linear transformation \(\langle P_{\lambda}, U_{\lambda} \rangle\) of \(\langle P, U \rangle\) corresponding to \(\lambda\) is given by:

\[P_{\lambda}(A) = P(A)(1 + \lambda U(A))\]
\[U_{\lambda}(A) = \frac{U(A)(1 + \lambda)}{1 + \lambda U(A)}\]

Notice that fractional linear transformations of a probability-utility pair can disagree with the original pair about which propositions are likelier than which others.

Joyce (1999) shows that with additional resources, Bolker’s theorem can be modified to pin down a unique \(P\), and a \(U\) that is unique up to positive linear transformation. We need only supplement the preference ordering with a primitive “more likely than” relation, governed by its own set of axioms, and linked to belief by several additional axioms. Joyce modifies Bolker’s result to show that given these additional axioms, the “more likely than” relation is represented by a unique \(P\), and the preference ordering is represented by \(P\) together with a utility function that is unique up to positive linear transformation.

2.2.5 Summary

The four representation theorems above can be summed up in the following table.

| | objects of preference | order of construction | allowable transformations of \(P\) | allowable transformations of \(U\) |
| --- | --- | --- | --- | --- |
| Ramsey | gambles | preference → utility → probability | identity | positive linear |
| von Neumann/Morgenstern | lotteries | (preference & probability) → utility | N/A | positive linear |
| Savage | acts | preference → probability → utility | identity | positive linear |
| Jeffrey/Bolker | propositions | preference → (probability & utility) | fractional linear (jointly with \(U\)) | fractional linear (jointly with \(P\)) |

Notice that the order of construction differs between theorems: Ramsey constructs a representation of probability using utility, while von Neumann and Morgenstern begin with probabilities and construct a representation of utility. Thus, although the arrows represent a mathematical relationship of representation, they cannot represent a metaphysical relationship of grounding. The Reality Condition needs to be justified independently of any representation theorem.

Suitably structured ordinal probabilities (the relations picked out by “at least as likely as”, “more likely than”, and “equally likely”) stand in one-to-one correspondence with the cardinal probability functions. Moreover, every preference ordering satisfying Savage’s axioms determines an ordinal probability relation that is represented by a unique cardinal probability function; this result does not hold for Jeffrey’s axioms.

Notice that it is often possible to follow the chains of construction in circles: from preference to ordinal probability, from ordinal probability to cardinal probability, from cardinal probability and preference to expected utility, and from expected utility back to preference. This circularity drives home the importance of independently justifying the Reality Condition: representation theorems cannot justify expected utility theory without additional assumptions.

3. Objections to Expected Utility Theory

Ought implies can, but is it humanly possible to maximize expected utility? March and Simon (1958) point out that in order to compute expected utilities, an agent needs a dauntingly complex understanding of the available acts, the possible outcomes, and the values of those outcomes, and that choosing the best act is much more demanding than choosing an act that is merely good enough. Similar points appear in Lindblom (1959), Feldman (2006), and Smith (2010).

McGee (1991) argues that maximizing expected utility is not mathematically possible even for an ideal computer with limitless memory. In order to maximize expected utility, we would have to accept any bet we were offered on the truths of arithmetic, and reject any bet we were offered on false sentences in the language of arithmetic. But arithmetic is undecidable, so no Turing machine can determine whether a given arithmetical sentence is true or false.

One response to these difficulties is the bounded rationality approach, which aims to replace expected utility theory with some more tractable rules. Another is to argue that the demands of expected utility theory are more tractable than they appear (Burch-Brown 2014; see also Greaves 2016), or that the relevant “ought implies can” principle is false (Srinivasan 2015).

A variety of authors have given examples in which expected utility theory seems to give the wrong prescriptions. Sections 3.2.1 and 3.2.2 discuss examples where rationality seems to permit preferences inconsistent with expected utility theory. These examples suggest that maximizing expected utility is not necessary for rationality. Section 3.2.3 discusses examples where expected utility theory permits preferences that seem irrational. These examples suggest that maximizing expected utility is not sufficient for rationality. Section 3.2.4 discusses an example where expected utility theory requires preferences that seem rationally forbidden—a challenge to both the necessity and the sufficiency of expected utility for rationality.

3.2.1 Counterexamples Involving Transitivity and Completeness

Expected utility theory implies that the structure of preferences mirrors the structure of the greater-than relation between real numbers. Thus, according to expected utility theory, preferences must be transitive : If \(A\) is preferred to \(B\) (so that \(U(A) \gt U(B)\)), and \(B\) is preferred to \(C\) (so that \(U(B) \gt U(C)\)), then \(A\) must be preferred to \(C\) (since it must be that \(U(A) \gt U(C)\)). Likewise, preferences must be complete : for any two options, either one must be preferred to the other, or the agent must be indifferent between them (since of their two utilities, either one must be greater or the two must be equal). But there are cases where rationality seems to permit (or perhaps even require) failures of transitivity and failures of completeness.

An example of preferences that are not transitive, but nonetheless seem rationally permissible, is Quinn’s puzzle of the self-torturer (1990). The self-torturer is hooked up to a machine with a dial with settings labeled 0 to 1,000, where setting 0 does nothing, and each successive setting delivers a slightly more powerful electric shock. Setting 0 is painless, while setting 1,000 causes excruciating agony, but the difference between any two adjacent settings is so small as to be imperceptible. The dial is fitted with a ratchet, so that it can be turned up but never down. Suppose that at each setting, the self-torturer is offered $10,000 to move up to the next, so that for tolerating setting \(n\), he receives a payoff of \(n \cdot \$10,000\). It is permissible for the self-torturer to prefer setting \(n+1\) to setting \(n\) for each \(n\) between 0 and 999 (since the difference in pain is imperceptible, while the difference in monetary payoffs is significant), but not to prefer setting 1,000 to setting 0 (since the pain of setting 1,000 may be so unbearable that no amount of money will make up for it).
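The structure of the puzzle can be sketched numerically. All the numbers below are assumptions invented for illustration, not values from Quinn’s paper; the point is only that imperceptible per-step pain plus significant per-step money yields pairwise preferences that cannot be chained.

```python
# Illustrative model of the self-torturer. Each step up the dial pays $10,000
# and adds a pain increment too small to perceive, but 1,000 increments add
# up to unbearable agony. All numeric values are assumptions.

PAIN_PER_STEP = 2      # actual pain per setting, in utiles (assumed)
THRESHOLD = 5          # pain differences below this are imperceptible (assumed)
MONEY_PER_STEP = 1     # utility of each $10,000 payment, in utiles (assumed)

def prefers_moving_up(n):
    """Pairwise choice at setting n: the pain difference between n and n+1
    is below the perceptual threshold, so only the money registers."""
    perceived_pain = 0 if PAIN_PER_STEP < THRESHOLD else PAIN_PER_STEP
    return MONEY_PER_STEP > perceived_pain

# At every setting, moving up one notch looks strictly better...
assert all(prefers_moving_up(n) for n in range(1000))

# ...yet comparing setting 1,000 with setting 0 directly, the accumulated
# pain (2,000 utiles) swamps the accumulated money (1,000 utiles):
prefers_1000_to_0 = 1000 * MONEY_PER_STEP > 1000 * PAIN_PER_STEP
assert not prefers_1000_to_0
```

No single utility assignment over the settings can reproduce both the pairwise verdicts and the endpoint verdict, which is exactly the failure of transitivity at issue.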

It also seems rationally permissible to have incomplete preferences. For some pairs of actions, an agent may have no considered view about which she prefers. Consider Jane, an electrician who has never given much thought to becoming a professional singer or a professional astronaut. (Perhaps both of these options are infeasible, or perhaps she considers both of them much worse than her steady job as an electrician). It is false that Jane prefers becoming a singer to becoming an astronaut, and it is false that she prefers becoming an astronaut to becoming a singer. But it is also false that she is indifferent between becoming a singer and becoming an astronaut. She prefers becoming a singer and receiving a $100 bonus to becoming a singer, and if she were indifferent between becoming a singer and becoming an astronaut, she would be rationally compelled to prefer being a singer and receiving a $100 bonus to becoming an astronaut.

There is one key difference between the two examples considered above. Jane’s preferences can be extended, by adding new preferences without removing any of the ones she has, in a way that lets us represent her as an expected utility maximizer. On the other hand, there is no way of extending the self-torturer’s preferences so that he can be represented as an expected utility maximizer. Some of his preferences would have to be altered. One popular response to incomplete preferences is to claim that, while rational preferences need not satisfy the axioms of a given representation theorem (see section 2.2), it must be possible to extend them so that they satisfy the axioms. From this weaker requirement on preferences—that they be extendible to a preference ordering that satisfies the relevant axioms—one can prove the existence halves of the relevant representation theorems. However, one can no longer establish that each preference ordering has a representation which is unique up to allowable transformations.

No such response is available in the case of the self-torturer, whose preferences cannot be extended to satisfy the axioms of expected utility theory. See the entry on preferences for a more extended discussion of the self-torturer case.

3.2.2 Counterexamples Involving Independence

Allais (1953) and Ellsberg (1961) propose examples of preferences that cannot be represented by an expected utility function, but that nonetheless seem rational. Both examples involve violations of Savage’s Independence axiom:

Independence. Suppose that \(A\) and \(A^*\) are two acts that produce the same outcomes in the event that \(E\) is false. Then, for any act \(B\): \(A\) is preferred to \(A^*\) if and only if \(A_E \amp B_{\sim E}\) is preferred to \(A^*_E \amp B_{\sim E}\), and the agent is indifferent between \(A\) and \(A^*\) if and only if she is indifferent between \(A_E \amp B_{\sim E}\) and \(A^*_E \amp B_{\sim E}\).

In other words, if two acts have the same consequences whenever \(E\) is false, then the agent’s preferences between those two acts should depend only on their consequences when \(E\) is true. On Savage’s definition of expected utility, expected utility theory entails Independence. And on Jeffrey’s definition, expected utility theory entails Independence in the presence of the assumption that the states are probabilistically independent of the acts.

The first counterexample, the Allais Paradox, involves two separate decision problems in which a ticket with a number between 1 and 100 is drawn at random. In the first problem, the agent must choose between these two lotteries:

  • Lottery \(A\)
  • • $100 million with certainty
  • Lottery \(B\)
  • • $500 million if one of tickets 1–10 is drawn
  • • $100 million if one of tickets 12–100 is drawn
  • • Nothing if ticket 11 is drawn

In the second decision problem, the agent must choose between these two lotteries:

  • Lottery \(C\)
  • • $100 million if one of tickets 1–11 is drawn
  • • Nothing otherwise
  • Lottery \(D\)
  • • $500 million if one of tickets 1–10 is drawn
  • • Nothing otherwise

It seems reasonable to prefer \(A\) (which offers a sure $100 million) to \(B\) (where the added 10% chance at $500 million is more than offset by the risk of getting nothing). It also seems reasonable to prefer \(D\) (a 10% chance at a $500 million prize) to \(C\) (a slightly larger 11% chance at a much smaller $100 million prize). But together, these preferences (call them the Allais preferences) violate Independence. Lotteries \(A\) and \(B\) yield the same $100 million prize for tickets 12–100. They can be converted into lotteries \(C\) and \(D\) by replacing this $100 million prize with $0.

Because they violate Independence, the Allais preferences are incompatible with expected utility theory. This incompatibility does not require any assumptions about the relative utilities of the $0, the $100 million, and the $500 million. Where $500 million has utility \(x\), $100 million has utility \(y\), and $0 has utility \(z\), the expected utilities of the lotteries are as follows.

\[EU(A) = y\]
\[EU(B) = 0.10x + 0.89y + 0.01z\]
\[EU(C) = 0.11y + 0.89z\]
\[EU(D) = 0.10x + 0.90z\]

It is easy to see that the condition under which \(EU(A) \gt EU(B)\) is exactly the same as the condition under which \(EU(C) \gt EU(D)\): both inequalities obtain just in case \(0.11y \gt 0.10x + 0.01z\).

The Ellsberg Paradox also involves two decision problems that generate a violation of Independence. In each of them, a ball is drawn from an urn containing 30 red balls, and 60 balls that are either white or yellow in unknown proportions. In the first decision problem, the agent must choose between the following lotteries:

  • Lottery \(R\)
  • • Win $100 if a red ball is drawn
  • • Lose $100 otherwise
  • Lottery \(W\)
  • • Win $100 if a white ball is drawn
  • • Lose $100 otherwise

In the second decision problem, the agent must choose between the following lotteries:

  • Lottery \(RY\)
  • • Win $100 if a red or yellow ball is drawn
  • • Lose $100 otherwise
  • Lottery \(WY\)
  • • Win $100 if a white or yellow ball is drawn
  • • Lose $100 otherwise

It seems reasonable to prefer \(R\) to \(W\), but at the same time prefer \(WY\) to \(RY\). (Call this combination of preferences the Ellsberg preferences.) Like the Allais preferences, the Ellsberg preferences violate Independence. Lotteries \(R\) and \(W\) yield a $100 loss if a yellow ball is drawn; they can be converted to lotteries \(RY\) and \(WY\) simply by replacing this $100 loss with a sure $100 gain.

Because they violate Independence, the Ellsberg preferences are incompatible with expected utility theory. Again, this incompatibility does not require any assumptions about the relative utilities of winning $100 and losing $100. Nor do we need any assumptions about where between 0 and 2/3 the probability \(P(W)\) of drawing a white ball falls. Where winning $100 has utility \(w\) and losing $100 has utility \(l\),

\[EU(R) = \tfrac{1}{3}w + \tfrac{2}{3}l\]
\[EU(W) = P(W)w + (1 - P(W))l\]
\[EU(RY) = (1 - P(W))w + P(W)l\]
\[EU(WY) = \tfrac{2}{3}w + \tfrac{1}{3}l\]

It is easy to see that the condition under which \(EU(R) \gt EU(W)\) is exactly the same as the condition under which \(EU(RY) \gt EU(WY)\): both inequalities obtain just in case \(1/3\,w + P(W)l \gt 1/3\,l + P(W)w\).
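A companion check for the Ellsberg case, in the same spirit as the Allais check. It assumes each lottery loses $100 whenever it does not win (as in the descriptions above); the candidate probabilities and utility pairs are arbitrary samples.

```python
# Whatever probability p in [0, 2/3] the agent assigns to drawing a white
# ball, EU(R) - EU(W) equals EU(RY) - EU(WY). So preferring R to W while
# preferring WY to RY cannot be represented by any expected utility function.

def eu_R(p, w, l):  return (1/3)*w + (2/3)*l        # red wins (prob 1/3)
def eu_W(p, w, l):  return p*w + (1 - p)*l          # white wins (prob p)
def eu_RY(p, w, l): return (1 - p)*w + p*l          # red or yellow wins
def eu_WY(p, w, l): return (2/3)*w + (1/3)*l        # white or yellow wins

for p in [0.0, 0.2, 1/3, 0.5, 2/3]:                 # sample values of P(white)
    for (w, l) in [(1, -1), (10, -3)]:              # sample utility pairs
        diff_first = eu_R(p, w, l) - eu_W(p, w, l)
        diff_second = eu_RY(p, w, l) - eu_WY(p, w, l)
        assert abs(diff_first - diff_second) < 1e-9
```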

There are three notable responses to the Allais and Ellsberg paradoxes. First, one might follow Savage (1972, 101 ff.) and Raiffa (1968, 80–86), and defend expected utility theory on the grounds that the Allais and Ellsberg preferences are irrational.

Second, one might follow Buchak (2013) and claim that the Allais and Ellsberg preferences are rationally permissible, so that expected utility theory fails as a normative theory of rationality. Buchak develops a more permissive theory of rationality, with an extra parameter representing the decision-maker’s attitude toward risk. This risk parameter interacts with the utilities of outcomes and their conditional probabilities on acts to determine the values of acts. One setting of the risk parameter yields expected utility theory as a special case, but other, “risk-averse” settings rationalize the Allais preferences.

Third, one might follow Loomes and Sugden (1986), Weirich (1986), and Pope (1995) and argue that the outcomes in the Allais and Ellsberg paradoxes can be re-described to accommodate the Allais and Ellsberg preferences. The alleged conflict between the Allais and Ellsberg preferences on the one hand, and expected utility theory on the other, was based on the assumption that a given sum of money has the same utility no matter how it is obtained. Some authors challenge this assumption. Loomes and Sugden suggest that in addition to monetary amounts, the outcomes of the gambles include feelings of disappointment (or elation) at getting less (or more) than expected. Pope distinguishes “post-outcome” feelings of elation or disappointment from “pre-outcome” feelings of excitement, fear, boredom, or safety, and points out that both may affect outcome utilities. Weirich suggests that the value of a monetary sum depends partly on the risks that went into obtaining it, irrespective of the gambler’s feelings, so that (for instance) $100 million as the result of a sure bet is worth more than $100 million from a gamble that might have paid nothing.

Broome (1991, Ch. 5) raises a worry about this re-description solution: if any preferences can be justified by re-describing the space of outcomes, then the axioms of expected utility theory are rendered devoid of content. Broome rebuts this objection by suggesting an additional constraint on preference: if \(A\) is preferred to \(B\), then \(A\) and \(B\) must differ in some way that justifies preferring one to the other. An expected utility theorist can then count the Allais and Ellsberg preferences as rational if, and only if, there is a non-monetary difference that justifies placing outcomes of equal monetary value at different spots in one’s preference ordering.

3.2.3 Counterexamples Involving Probability 0 Events

Above, we’ve seen purported examples of rational preferences that violate expected utility theory. There are also purported examples of irrational preferences that satisfy expected utility theory.

On a typical understanding of expected utility theory, when two acts are tied for having the highest expected utility, agents are required to be indifferent between them. Skyrms (1980, p. 74) points out that this view lets us derive strange conclusions about events with probability 0. For instance, suppose you are about to throw a point-sized dart at a round dartboard. Classical probability theory countenances situations in which the dart has probability 0 of hitting any particular point. You offer me the following lousy deal: if the dart hits the board at its exact center, then you will charge me $100; otherwise, no money will change hands. My decision problem can be captured with the following matrix:

| | dart hits center (\(P=0\)) | dart misses center (\(P=1\)) |
| --- | --- | --- |
| accept deal | \(-100\) | \(0\) |
| refuse deal | \(0\) | \(0\) |

Expected utility theory says that it is permissible for me to accept the deal—accepting has expected utility of 0. (This is so on both the Jeffrey definition and the Savage definition, if we assume that how the dart lands is probabilistically independent of how you bet.) But common sense says it is not permissible for me to accept the deal. Refusing weakly dominates accepting: it yields a better outcome in some states, and a worse outcome in no state.
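The tie, and the weak dominance that breaks it, can be laid out in a few lines. This is a minimal sketch, treating dollars as utiles as in the matrix.

```python
# The dartboard deal: accepting and refusing are tied in expected utility,
# yet refusing weakly dominates accepting. Dollars are treated as utiles.

P_CENTER = 0.0   # classical probability of hitting the exact center point

eu_accept = P_CENTER * (-100) + (1 - P_CENTER) * 0
eu_refuse = P_CENTER * 0 + (1 - P_CENTER) * 0
assert eu_accept == eu_refuse == 0   # expected utility cannot break the tie

# Weak dominance: refusing is at least as good in every state,
# and strictly better in some state.
outcomes = {"center hit": (-100, 0), "center missed": (0, 0)}  # (accept, refuse)
assert all(refuse >= accept for accept, refuse in outcomes.values())
assert any(refuse > accept for accept, refuse in outcomes.values())
```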

Skyrms suggests augmenting the laws of classical probability with an extra requirement that only impossibilities are assigned probability 0. Easwaran (2014) argues that we should instead reject the view that expected utility theory commands indifference between acts with equal expected utility. On this view, expected utility theory is not a complete theory of rationality: when two acts have the same expected utility, it does not tell us which to prefer, and we can use non-expected-utility considerations like weak dominance as tiebreakers.

3.2.4 Counterexamples Involving Unbounded Utility

A utility function \(U\) is bounded above if there is a limit to how good things can be according to \(U\); more formally, if there is some real number \(sup\) such that for every \(A\) in \(U\)’s domain, \(U(A) \le sup\). Likewise, \(U\) is bounded below if there is a limit to how bad things can be according to \(U\); more formally, if there is some real number \(inf\) such that for every \(A\) in \(U\)’s domain, \(U(A) \ge inf\). Expected utility theory can run into trouble when utility functions are unbounded above, below, or both.

One problematic example is the St. Petersburg game, originally published by Bernoulli. Suppose that a coin is tossed until it lands tails for the first time. If it lands tails on the first toss, you win $2; if it lands tails on the second toss, you win $4; if it lands tails on the third toss, you win $8; and in general, if it first lands tails on the \(n\)th toss, you win $\(2^n\). Assuming each dollar is worth one utile, the expected value of the St. Petersburg game is

\[
\sum_{n=1}^{\infty} \frac{1}{2^n} \cdot 2^n = 1 + 1 + 1 + \cdots
\]

It turns out that this sum diverges; the St. Petersburg game has infinite expected utility. Thus, according to expected utility theory, you should prefer the opportunity to play the St. Petersburg game to any finite sum of money, no matter how large. Furthermore, since an infinite expected utility multiplied by any nonzero chance is still infinite, anything that has a positive probability of yielding the St. Petersburg game also has infinite expected utility. Thus, according to expected utility theory, you should prefer any chance at playing the St. Petersburg game, however slim, to any finite sum of money, however large.
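A quick numerical check, as a sketch keeping the text's assumption that each dollar is worth one utile: truncating the game at \(n\) tosses gives an expected utility of exactly \(n\), since each toss contributes \((1/2^k)\cdot 2^k = 1\) utile, so the partial sums grow without bound.

```python
# Expected utility of the St. Petersburg game truncated at n tosses.
# P(first tails on toss k) = (1/2)^k and the payoff is 2^k utiles,
# so each term of the sum is exactly 1.
def truncated_expected_utility(n):
    return sum((0.5 ** k) * (2 ** k) for k in range(1, n + 1))

print(truncated_expected_utility(10))  # 10.0
print(truncated_expected_utility(50))  # 50.0 -- no finite bound as n grows
```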

Nover and Hájek (2004) argue that in addition to the St. Petersburg game, which has infinite expected utility, there are other infinitary games whose expected utilities are undefined, even though rationality mandates certain preferences among them.

One response to these problematic infinitary games is to argue that the decision problems themselves are ill-posed (Jeffrey 1983, 154); another is to adopt a modified version of expected utility theory that agrees with the standard version in ordinary cases but yields intuitively reasonable verdicts about the infinitary games (Thalos and Richardson 2013; Fine 2008; Colyvan 2006, 2008; Easwaran 2008).

4. Applications

In the 1940s and 50s, expected utility theory gained currency in the US for its potential to provide a mechanism that would explain the behavior of macro-economic variables. As it became apparent that expected utility theory did not accurately predict the behaviors of real people, its proponents instead advanced the view that it might serve instead as a theory of how rational people should respond to uncertainty (see Herfeld 2017).

Expected utility theory has a variety of applications in public policy. In welfare economics, Harsanyi (1953) reasons from expected utility theory to the claim that the most socially just arrangement is the one that maximizes total welfare distributed across a society. The theory of expected utility also has more direct applications. Howard (1980) introduces the concept of a micromort, or a one-in-a-million chance of death, and uses expected utility calculations to gauge which mortality risks are acceptable. In health policy, quality-adjusted life years, or QALYs, are measures of the expected utilities of different health interventions used to guide health policy (see Weinstein et al. 2009). McAskill (2015) uses expected utility theory to address the central question of effective altruism: “How can I do the most good?” (Utilities in these applications are most naturally interpreted as measuring something like happiness or wellbeing, rather than subjective preference satisfaction for an individual agent.)

Another area where expected utility theory finds applications is in insurance sales. Like casinos, insurance companies take on calculated risks with the aim of long-term financial gain, and must take into account the chance of going broke in the short run.

Utilitarians, along with their descendants, contemporary consequentialists, hold that the rightness or wrongness of an act is determined by the moral goodness or badness of its consequences. Some consequentialists, such as Railton (1984), interpret this to mean that we ought to do whatever will in fact have the best consequences. But it is difficult—perhaps impossible—to know the long-term consequences of our acts (Lenman 2000, Howard-Snyder 1997). In light of this observation, Jackson (1991) argues that the right act is the one with the greatest expected moral value, not the one that will in fact yield the best consequences.

As Jackson notes, the expected moral value of an act depends on which probability function we work with. Jackson argues that, while every probability function is associated with an “ought”, the “ought” that matters most to action is the one associated with the decision-maker’s degrees of belief at the time of action. Other authors claim priority for other “oughts”: Mason (2013) favors the probability function that is most reasonable for the agent to adopt in response to her evidence, given her epistemic limitations, while Oddie and Menzies (1992) favor the objective chance function as a measure of objective rightness. (They appeal to a more complicated probability function to define a notion of “subjective rightness” for decisionmakers who are ignorant of the objective chances.)

Still others (Smart 1973, Timmons 2002) argue that even if we ought to do whatever will have the best consequences, expected utility theory can play the role of a decision procedure when we are uncertain what consequences our acts will have. Feldman (2006) objects that expected utility calculations are horribly impractical. In most real-life decisions, the steps required to compute expected utilities are beyond our ken: listing the possible outcomes of our acts, assigning each outcome a utility and a conditional probability given each act, and performing the arithmetic necessary for expected utility calculations.

The expected-utility-maximizing version of consequentialism is not strictly speaking a theory of rational choice. It is a theory of moral choice, but whether rationality requires us to do what is morally best is up for debate.

Expected utility theory can be used to address practical questions in epistemology. One such question is when to accept a hypothesis. In typical cases, the evidence is logically compatible with multiple hypotheses, including hypotheses to which it lends little inductive support. Furthermore, scientists do not typically accept only those hypotheses that are most probable given their data. When is a hypothesis likely enough to deserve acceptance?

Bayesians, such as Maher (1993), suggest that this decision be made on expected utility grounds. Whether to accept a hypothesis is a decision problem, with acceptance and rejection as acts. It can be captured by the following decision matrix:

|  | the hypothesis is true | the hypothesis is false |
| --- | --- | --- |
| accept | correctly accept | erroneously accept |
| reject | erroneously reject | correctly reject |

On Savage’s definition, the expected utility of accepting the hypothesis is determined by the probability of the hypothesis, together with the utilities of each of the four outcomes. (We can expect Jeffrey’s definition to agree with Savage’s on the plausible assumption that, given the evidence in our possession, the hypothesis is probabilistically independent of whether we accept or reject it.) Here, the utilities can be understood as purely epistemic values, since it is epistemically valuable to believe interesting truths, and to reject falsehoods.
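One natural way to spell out the Savage-style expected utilities here, writing \(P(H)\) for the probability of the hypothesis:

\[
EU(\mathrm{accept}) = P(H)\,U(\mathrm{correctly~accept}) + (1-P(H))\,U(\mathrm{erroneously~accept}),
\]
\[
EU(\mathrm{reject}) = P(H)\,U(\mathrm{erroneously~reject}) + (1-P(H))\,U(\mathrm{correctly~reject}).
\]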

Critics of the Bayesian approach, such as Mayo (1996), object that scientific hypotheses cannot sensibly be given probabilities. Mayo argues that in order to assign a useful probability to an event, we need statistical evidence about the frequencies of similar events. But scientific hypotheses are either true once and for all, or false once and for all—there is no population of worlds like ours from which we can meaningfully draw statistics. Nor can we use subjective probabilities for scientific purposes, since this would be unacceptably arbitrary. Therefore, the expected utilities of acceptance and rejection are undefined, and we ought to use the methods of traditional statistics, which rely on comparing the probabilities of our evidence conditional on each of the hypotheses.

Expected utility theory also provides guidance about when to gather evidence. Good (1967) argues on expected utility grounds that it is always rational to gather evidence before acting, provided that the evidence is free of cost. The act with the highest expected utility after the extra evidence is in will always be at least as good as the act with the highest expected utility beforehand.
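Good's argument can be illustrated with a toy example (all numbers below are invented for illustration): compare the expected utility of choosing now with the expected utility of first observing a cost-free piece of evidence \(E\) and then choosing.

```python
# Toy illustration of Good's point: with cost-free evidence E, choosing after
# looking is at least as good in expectation as choosing now.
# Two states S1, S2; act A pays 10 utiles in S1, act B pays 10 utiles in S2.
prior_s1 = 0.5
p_e_given_s1, p_e_given_s2 = 0.8, 0.3  # hypothetical likelihoods of observing E

def best_eu(p_s1):
    # Expected utility of the better act given credence p_s1 in S1:
    # A yields 10 * p_s1, B yields 10 * (1 - p_s1).
    return max(10 * p_s1, 10 * (1 - p_s1))

# Acting immediately: both acts have expected utility 5.
eu_act_now = best_eu(prior_s1)

# Looking first: update on E or not-E by Bayes' theorem, then pick the best act.
p_e = prior_s1 * p_e_given_s1 + (1 - prior_s1) * p_e_given_s2
p_s1_given_e = prior_s1 * p_e_given_s1 / p_e
p_s1_given_not_e = prior_s1 * (1 - p_e_given_s1) / (1 - p_e)
eu_look_first = p_e * best_eu(p_s1_given_e) + (1 - p_e) * best_eu(p_s1_given_not_e)

print(eu_act_now, eu_look_first)  # looking first yields ~7.5, versus 5.0
```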

In epistemic decision theory, expected utilities are used to assess belief states as rational or irrational. If we think of belief formation as a mental act, facts about the contents of the agent’s beliefs as events, and closeness to truth as a desirable feature of outcomes, then we can use expected utility theory to evaluate degrees of belief in terms of their expected closeness to truth. The entry on epistemic utility arguments for probabilism includes an overview of expected utility arguments for a variety of epistemic norms, including conditionalization and the Principal Principle.
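As a toy illustration, using the Brier score (a standard inaccuracy measure, though not named in this entry): an agent with credence \(p\) in a proposition expects her own credence to be the most accurate value she could report.

```python
# Expected Brier inaccuracy of reporting credence x, from the standpoint of
# an agent whose credence in the proposition is p: inaccuracy is (1-x)^2 if
# the proposition is true and x^2 if it is false.
def expected_brier(p, x):
    return p * (1 - x) ** 2 + (1 - p) * x ** 2

p = 0.7
grid = [i / 100 for i in range(101)]
best = min(grid, key=lambda x: expected_brier(p, x))
print(best)  # 0.7 -- the agent's own credence minimizes her expected inaccuracy
```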

Kaplan (1968) argues that expected utility considerations can be used to fix a standard of proof in legal trials. A jury deciding whether to acquit or convict faces the following decision problem:

|  | the defendant is guilty | the defendant is innocent |
| --- | --- | --- |
| convict | true conviction | false conviction |
| acquit | false acquittal | true acquittal |

Kaplan shows that \(EU(\mathrm{convict}) > EU(\mathrm{acquit})\) whenever

\[
P(\mathrm{guilty}) > \frac{U(\mathrm{true~acquittal}) - U(\mathrm{false~conviction})}{\bigl(U(\mathrm{true~conviction}) - U(\mathrm{false~acquittal})\bigr) + \bigl(U(\mathrm{true~acquittal}) - U(\mathrm{false~conviction})\bigr)}.
\]

Qualitatively, this means that the standard of proof rises as the disutility of convicting an innocent person \((U(\mathrm{true~acquittal})-U(\mathrm{false~conviction}))\) increases, or as the disutility of acquitting a guilty person \((U(\mathrm{true~conviction})-U(\mathrm{false~acquittal}))\) decreases.
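The threshold falls out of comparing \(EU(\mathrm{convict}) = P\,U(\mathrm{tc}) + (1-P)\,U(\mathrm{fc})\) with \(EU(\mathrm{acquit}) = P\,U(\mathrm{fa}) + (1-P)\,U(\mathrm{ta})\), where \(P\) is the probability of guilt. A minimal sketch, with utility numbers invented for illustration:

```python
# Threshold probability of guilt above which EU(convict) > EU(acquit).
# Solving P*(u_tc - u_fa) > (1 - P)*(u_ta - u_fc) for P gives b / (a + b).
def conviction_threshold(u_tc, u_fc, u_fa, u_ta):
    a = u_tc - u_fa  # disutility of acquitting a guilty defendant
    b = u_ta - u_fc  # disutility of convicting an innocent defendant
    return b / (a + b)

# With symmetric stakes the threshold is 0.5; making a false conviction
# much worse (lowering u_fc) raises the standard of proof.
print(conviction_threshold(u_tc=1, u_fc=0, u_fa=0, u_ta=1))   # 0.5
print(conviction_threshold(u_tc=1, u_fc=-9, u_fa=0, u_ta=1))  # 0.909...
```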

Critics of this decision-theoretic approach, such as Laudan (2006), argue that it is difficult or impossible to bridge the gap between the evidence admissible in court and the real probability of the defendant’s guilt. The probability of guilt depends on three factors: the distribution of apparent guilt among the genuinely guilty, the distribution of apparent guilt among the genuinely innocent, and the ratio of genuinely guilty to genuinely innocent defendants who go to trial (see Bell 1987). Obstacles to calculating any of these factors will block the inference from a judge or jury’s perception of apparent guilt to a true probability of guilt.

  • Allais M., 1953, “Le Comportement de l’Homme Rationnel devant le Risque: Critique des Postulats et Axiomes de l’École Americaine”, Econometrica , 21: 503–546.
  • Bell, R., 1987, “Decision Theory and Due Process: A Critique of the Supreme Court’s Lawmaking for Burdens of Proof”, Journal of Criminal Law and Criminology , 78: 557-585.
  • Bentham, J., 1961. An Introduction to the Principles of Morals and Legislation, Garden City: Doubleday. Originally published in 1789.
  • Bernoulli, D., 1738, “Specimen theoriae novae de mensura sortis”, Commentarii Academiae Scientiarum Imperialis Petropolitanae 5. Translated by Louise Somer and reprinted as “Exposition of a New Theory on the Measurement of Risk” 1954, Econometrica , 22: 23–36.
  • Bolker, E., 1966, “Functions Resembling Quotients of Measures”, Transactions of the American Mathematical Society , 2: 292–312.
  • Bradley, R., 2004, “Ramsey’s representation theorem”, Dialectica , 58: 483–497.
  • Broome, J., 1991, Weighing Goods: Equality, Uncertainty and Time , Oxford: Blackwell, doi:10.1002/9781119451266
  • Burch-Brown, J.M., 2014, “Clues for Consequentialists”, Utilitas , 26: 105-119.
  • Buchak, L., 2013, Risk and Rationality , Oxford: Oxford University Press.
  • Colyvan, M., 2006, “No Expectations”, Mind , 116: 695–702.
  • Colyvan, M., 2008, “Relative Expectation Theory”, Journal of Philosophy , 105: 37–44.
  • Easwaran, K., 2014, “Regularity and Hyperreal Credences”, The Philosophical Review , 123: 1–41.
  • Easwaran, K., 2008, “Strong and Weak Expectations”, Mind , 117: 633–641.
  • Elliott, E., 2017, “Ramsey without Ethical Neutrality: A New Representation Theorem”, Mind , 126: 1-51.
  • Ellsberg, D., 1961, “Risk, Ambiguity, and the Savage Axioms”, Quarterly Journal of Economics , 75: 643–669.
  • Feldman, F. 2006, “Actual utility, the objection from impracticality, and the move to expected utility”, Philosophical Studies , 129 : 49–79.
  • Fine, T., 2008, “Evaluating the Pasadena, Altadena, and St Petersburg Gambles”, Mind , 117: 613–632.
  • Good, I.J., 1967, “On the Principle of Total Evidence”, The British Journal for the Philosophy of Science , 17: 319–321
  • Greaves, H. 2016, “Cluelessness”, Proceedings of the Aristotelian Society , 116: 311-339.
  • Hampton, J., 1994, “The Failure of Expected-Utility Theory as a Theory of Reason”, Economics and Philosophy , 10: 195–242.
  • Harsanyi, J.C., 1953, “Cardinal utility in welfare economics and in the theory of risk-taking”, Journal of Political Economy , 61: 434–435.
  • Herfeld, C., 2017, “From Theories of Human Behavior to Rules of Rational Choice: Tracing a Normative Turn at the Cowles Commission, 1943–1954”, History of Political Economy , 50: 1–48.
  • Howard, R.A., 1980, “On Making Life and Death Decisions”, in R.C. Schwing and W.A. Albers, Societal Risk Assessment: How Safe is Safe Enough? , New York: Plenum Press.
  • Howard-Snyder, F., 1997, “The Rejection of Objective Consequentialism”, Utilitas , 9: 241–248.
  • Jackson, F., 1991, “Decision-theoretic consequentialism and the nearest and dearest objection”, Ethics , 101: 461–482.
  • Jeffrey, R., 1983, The Logic of Decision , 2nd edition, Chicago: University of Chicago Press.
  • Jevons, W.S., 1866, “A General Mathematical Theory of Political Economy”, Journal of the Royal Statistical Society , 29: 282–287.
  • Joyce, J., 1999, The Foundations of Causal Decision Theory , Cambridge: Cambridge University Press.
  • Kahneman, D. & Tversky A., 1982, Judgment Under Uncertainty: Heuristics and Biases , New York: Cambridge University Press.
  • Kaplan, J., 1968, “Decision Theory and the Factfinding Process”, Stanford Law Review , 20: 1065-1092.
  • Kolmogorov, A. N., 1933, Grundbegriffe der Wahrscheinlichkeitrechnung, Ergebnisse Der Mathematik ; translated as Foundations of Probability , New York: Chelsea Publishing Company, 1950.
  • Laudan, L., 2006, Truth, Error, and Criminal Law , Cambridge: Cambridge University Press.
  • Lenman, J., 2000. “Consequentialism and cluelessness”, Philosophy and Public Affairs , 29(4): 342–370.
  • Lewis, D., 1981, “Causal Decision Theory”, Australasian Journal of Philosophy , 59: 5–30.
  • Levi, I., 1991, “Consequentialism and Sequential Choice”, in M. Bacharach and S. Hurley (eds.), Foundations of Decision Theory , Oxford: Basil Blackwell Ltd, 92–12.
  • Lindblom, C.E., 1959, “The Science of ‘Muddling Through’”, Public Administration Review , 19: 79–88.
  • Loomes, G. and Sugden, R., 1986, “Disappointment and Dynamic Consistency in Choice Under Uncertainty”, The Review of Economic Studies , 53(2): 271–282.
  • Maher, P., 1993, Betting on Theories , Cambridge: Cambridge University Press.
  • March, J.G. and Simon, H., 1958, Organizations , New York: Wiley.
  • Mason, E., 2013, “Objectivism and Prospectivism About Rightness”, Journal of Ethics and Social Philosophy , 7: 1–21.
  • Mayo, D., 1996, Error and the Growth of Experimental Knowledge , Chicago: University of Chicago Press.
  • McAskill, W., 2015, Doing Good Better , New York: Gotham Books.
  • McGee, V., 1991, “We Turing Machines Aren’t Expected-Utility Maximizers (Even Ideally)”, Philosophical Studies , 64: 115-123.
  • Meacham, C. and Weisberg, J., 2011, “Representation Theorems and the Foundations of Decision Theory”, Australasian Journal of Philosophy , 89: 641–663.
  • Menger, K., 1871, Grundsätze der Volkswirtschaftslehre , translated by James Dingwall and Bert F. Hoselitz as Principles of Economics , New York: New York University Press, 1976; reprinted online , Ludwig von Mises Institute, 2007.
  • Mill, J. S., 1861. Utilitarianism. Edited with an introduction by Roger Crisp. New York: Oxford University Press, 1998.
  • von Neumann, J., and Morgenstern, O., 1944, Theory of Games and Economic Behavior , Princeton: Princeton University Press.
  • Nover, H. & Hájek, A., 2004, “Vexing expectations”, Mind , 113: 237–249.
  • Nozick, R., 1969, “Newcomb’s Problem and Two Principles of Choice,” in Nicholas Rescher (ed.), Essays in Honor of Carl G. Hempel , Dordrecht: Reidel, 114–115.
  • Oliver, A., 2003, “A quantitative and qualitative test of the Allais paradox using health outcomes”, Journal of Economic Psychology , 24: 35–48.
  • Pope, R., 1995, “Towards a More Precise Decision Framework: A Separation of the Negative Utility of Chance from Diminishing Marginal Utility and the Preference for Safety”, Theory and Decision , 39: 241–265.
  • Raiffa, H., 1968, Decision analysis: Introductory lectures on choices under uncertainty , Reading, MA: Addison-Wesley.
  • Ramsey, F. P., 1926, “Truth and Probability”, in Foundations of Mathematics and other Essays, R. B. Braithwaite (ed.), London: Kegan, Paul, Trench, Trubner, & Co., 1931, 156–198; reprinted in Studies in Subjective Probability , H. E. Kyburg, Jr. and H. E. Smokler (eds.), 2nd edition, New York: R. E. Krieger Publishing Company, 1980, 23–52; reprinted in Philosophical Papers , D. H. Mellor (ed.), Cambridge: Cambridge University Press, 1990.
  • Savage, L.J., 1972, The Foundations of Statistics , 2nd edition, New York: Dover Publications, Inc.
  • Sen, A., 1977, “Rational Fools: A Critique of the Behavioral Foundations of Economic Theory”, Philosophy and Public Affairs , 6: 317–344.
  • Shafer, G., 2007, “From Cournot’s principle to market efficiency”, in Augustin Cournot: Modelling Economics , Jean-Philippe Touffut (ed.), Cheltenham: Edward Elgar, 55–95.
  • Sidgwick, H., 1907. The Methods of Ethics, Seventh Edition. London: Macmillan; first edition, 1874.
  • Simon, H., 1956, “A Behavioral Model of Rational Choice”, The Quarterly Journal of Economics , 69: 99–118.
  • Skyrms, B., 1980. Causal Necessity: A Pragmatic Investigation of the Necessity of Laws , New Haven, CT: Yale University Press.
  • Smith, H.M., 2010, “Subjective Rightness”, Social Philosophy and Policy , 27: 64–110.
  • Sobel, J.H., 1994, Taking Chances: Essays on Rational Choice , Cambridge: Cambridge University Press.
  • Spohn, W., 1977, “Where Luce and Krantz do really generalize Savage’s decision model”, Erkenntnis , 11: 113–134.
  • Srinivasan, A., 2015, “Normativity Without Cartesian Privilege”, Philosophical Issues , 25: 273–299.
  • Suppes, P., 2002, Representation and Invariance of Scientific Structures , Stanford: CSLI Publications.
  • Thalos, M. and Richardson, O., 2013, “Capitalization in the St. Petersburg game: Why statistical distributions matter”, Politics, Philosophy & Economics , 13: 292-313.
  • Weinstein, M.C., Torrence, G., and McGuire, A., 2009 “QALYs: the basics”, Value in Health , 12: S5–S9.
  • Weirich, P., 1986, “Expected Utility and Risk”, British Journal for the Philosophy of Science , 37: 419–442.
  • Zynda, L., 2000, “Representation Theorems and Realism about Degrees of Belief”, Philosophy of Science , 67: 45–69.
  • Decisions, Games, and Rational Choice , materials for a course taught in Spring 2008 by Robert Stalnaker, MIT OpenCourseWare.
  • Microeconomic Theory III , materials for a course taught in Spring 2010 by Muhamet Yildiz, MIT OpenCourseWare.
  • Choice Under Uncertainty , class lecture notes by Jonathan Levin.
  • Expected Utility Theory , by Philippe Mongin, entry for The Handbook of Economic Methodology.
  • The Origins of Expected Utility Theory , essay by Yvan Lengwiler.

decision theory | decision theory: causal | Pascal’s wager | preferences | probability, interpretations of | Ramsey, Frank: and intergenerational welfare economics | rational choice, normative: rivals to expected utility | risk

Copyright © 2023 by R. A. Briggs <formal.epistemology@gmail.com>



J Korean Med Sci, 34(45); 2019 Nov 25


Scientific Hypotheses: Writing, Promoting, and Predicting Implications

Armen Yuri Gasparyan

1 Departments of Rheumatology and Research and Development, Dudley Group NHS Foundation Trust (Teaching Trust of the University of Birmingham, UK), Russells Hall Hospital, Dudley, West Midlands, UK.

Lilit Ayvazyan

2 Department of Medical Chemistry, Yerevan State Medical University, Yerevan, Armenia.

Ulzhan Mukanova

3 Department of Surgical Disciplines, South Kazakhstan Medical Academy, Shymkent, Kazakhstan.

Marlen Yessirkepov

4 Department of Biology and Biochemistry, South Kazakhstan Medical Academy, Shymkent, Kazakhstan.

George D. Kitas

5 Arthritis Research UK Epidemiology Unit, University of Manchester, Manchester, UK.

Scientific hypotheses are essential for progress in rapidly developing academic disciplines. Proposing new ideas and hypotheses requires a thorough analysis of evidence-based data and a prediction of the implications. One of the main concerns relates to the ethical implications of the generated hypotheses. Authors may need to outline the potential benefits and limitations of their suggestions and target widely visible publication outlets to ignite discussion by experts and start testing the hypotheses. Few publication outlets currently welcome hypotheses and unconventional ideas, which may open the gates to criticism and conservative remarks. A few scholarly journals guide authors on how to structure hypotheses. Reflecting on general and specific issues around the subject matter is often recommended for drafting a well-structured hypothesis article. An analysis of influential hypotheses presented in this article, particularly Strachan's hygiene hypothesis with its global implications in the field of immunology and allergy, points to the need for properly interpreting and testing new suggestions. Envisaging the ethical implications of hypotheses should be considered by both authors and journal editors during the writing and publishing process.

INTRODUCTION

We live in an era of digitization that is radically changing scientific research, reporting, and publishing strategies. Researchers all over the world are overwhelmed with processing large volumes of information and searching through numerous online platforms, all of which make scholarly analysis and synthesis increasingly complex.

Current research activities are diversifying to combine scientific observations with analysis of facts recorded by scholars from various professional backgrounds. 1 Citation analyses and networking on social media are also becoming essential for shaping research and publishing strategies globally. 2 Learning specifics of increasingly interdisciplinary research studies and acquiring information facilitation skills aid researchers in formulating innovative ideas and predicting developments in interrelated scientific fields.

Arguably, researchers are currently offered more opportunities than in the past for generating new ideas by performing their routine laboratory activities, observing individual cases and unusual developments, and critically analyzing published scientific facts. What they need at the start of their research is to formulate a scientific hypothesis that revisits conventional theories, real-world processes, and related evidence to propose new studies and test ideas in an ethical way. 3 Such a hypothesis can be of most benefit if published in an ethical journal with wide visibility and exposure to relevant online databases and promotion platforms.

Although hypotheses are crucially important for scientific progress, only a few highly skilled researchers formulate and eventually publish their innovative ideas per se . Understandably, in an increasingly competitive research environment, most authors prefer to prioritize their ideas by discussing and testing them in their own laboratories or clinical departments, and publishing research reports afterwards. However, there are instances when simple observations and single-center research studies cannot explain and test new groundbreaking ideas. Formulating hypothesis articles first and calling for multicenter and interdisciplinary research can be a solution in such instances, potentially launching influential scientific directions, if not academic disciplines.

The aim of this article is to overview the importance and implications of infrequently published scientific hypotheses that may open new avenues of thinking and research.

Despite the seemingly established views on innovative ideas and hypotheses as essential research tools, no structured definition exists to tag the term and systematically track related articles. In 1973, the Medical Subject Heading (MeSH) of the U.S. National Library of Medicine introduced “Research Design” as a structured keyword that referred to the importance of collecting data and properly testing hypotheses, and indirectly linked the term to ethics, methods and standards, among many other subheadings.

One of the experts in the field defines “hypothesis” as a well-argued analysis of available evidence to provide a realistic (scientific) explanation of existing facts, fill gaps in public understanding of sophisticated processes, and propose a new theory or a test. 4 A hypothesis can be proven wrong partially or entirely. However, even such an erroneous hypothesis may influence progress in science by initiating professional debates that help generate more realistic ideas. The main ethical requirement for hypothesis authors is to be honest about the limitations of their suggestions. 5

EXAMPLES OF INFLUENTIAL SCIENTIFIC HYPOTHESES

Daily routine in a research laboratory may lead to groundbreaking discoveries, provided the daily accounts are comprehensively analyzed and reproduced by peers. The discovery of penicillin by Sir Alexander Fleming (1928) is a prime example of such discoveries, introducing therapies to treat staphylococcal and streptococcal infections and to modulate blood coagulation. 6 , 7 Penicillin gained worldwide recognition owing to the inventor's seminal works published in highly prestigious and widely visible British journals, effective ‘real-world’ antibiotic therapy of pneumonia and wounds during World War II, and euphoric media coverage. 8 In 1945, Fleming, Florey, and Chain received a much-deserved Nobel Prize in Physiology or Medicine for the discovery, which led to mass production of the wonder drug in the U.S. and ‘real-world’ practice that tested the use of penicillin. What remained globally unnoticed is that Zinaida Yermolyeva, the outstanding Soviet microbiologist, created a Soviet penicillin that turned out to be more effective than the Anglo-American penicillin and entered mass production in 1943, the year that marked the turning of the tide of the Great Patriotic War. 9 One reason Yermolyeva's discovery went widely unnoticed is that her works were published exclusively in local Russian (Soviet) journals.

The past decades have been marked by an unprecedented growth of multicenter and global research studies involving hundreds and thousands of human subjects. This trend is shaped by an increasing number of reports on clinical trials and large cohort studies that create a strong evidence base for practice recommendations. Mega-studies may help generate and test large-scale hypotheses aiming to solve health issues globally. Properly designed epidemiological studies, for example, may introduce clarity to the hygiene hypothesis originally proposed by David Strachan in 1989. 10 Strachan studied the epidemiology of hay fever in a cohort of 17,414 British children and concluded that declining family size and improved personal hygiene had reduced the chances of cross infections in families, resulting in epidemics of atopic disease in post-industrial Britain. Over the past three decades, several related hypotheses have been proposed to expand the potential role of symbiotic microorganisms and parasites in the development of human physiological immune responses early in life and protection from allergic and autoimmune diseases later on. 11 , 12 Given the popularity and scientific importance of the hygiene hypothesis, it was introduced as a MeSH term in 2012. 13

Hypotheses can also be proposed based on an analysis of recorded historic events that resulted in mass migrations and the spread of certain genetic diseases. As a prime example, familial Mediterranean fever (FMF), the prototype periodic fever syndrome, is believed to have spread from Mesopotamia to the Mediterranean region and all over Europe due to migrations and religious persecutions millennia ago. 14 Genetic mutations producing mild clinical forms of FMF are hypothesized to have emerged and persisted in the Mediterranean region as protective factors against more serious infectious diseases, particularly tuberculosis, historically common in that part of the world. 15 The speculation over the advantages of carrying the MEditerranean FeVer (MEFV) gene is further strengthened by recorded low mortality rates from tuberculosis among FMF patients of different nationalities living in Tunisia in the first half of the 20th century. 16

Diagnostic hypotheses shedding light on peculiarities of diseases throughout the history of mankind can be formulated using artefacts, particularly historic paintings. 17 Such paintings may reveal joint deformities and disfigurements due to rheumatic diseases in individual subjects. A series of paintings with similar signs of pathological conditions, interpreted in a historic context, may uncover the mysteries of epidemics of certain diseases; this is the case with Rubens's paintings, which depict signs of rheumatic hands and have led some doctors to believe that rheumatoid arthritis was common in Europe in the 16th and 17th centuries. 18

WRITING SCIENTIFIC HYPOTHESES

Only a few journals provide author instructions that specifically guide how to structure and format submissions categorized as hypotheses. One example is presented by Med Hypotheses , the flagship journal in its field, with more than four decades of publishing and influencing hypothesis authors globally. However, such guidance is not based on widely discussed, implemented, and approved reporting standards, which are becoming mandatory for all scholarly journals.

Generating new ideas and scientific hypotheses is a sophisticated task, since not all researchers and authors are skilled in planning, conducting, and interpreting various research studies. Some experience with formulating focused research questions and strong working hypotheses for original research studies is certainly helpful for advancing critical appraisal skills. However, aspiring authors of scientific hypotheses may need something different, more related to discerning scientific facts, pooling homogeneous data from primary research works, and synthesizing new information in a systematic way by analyzing similar sets of articles. To some extent, this activity is reminiscent of writing narrative and systematic reviews. As with reviews, scientific hypotheses need to be formulated on the basis of comprehensive search strategies that retrieve all available studies on the topics of interest, and then synthesize new information while selectively referring to the most relevant items. One of the main differences between scientific hypothesis and review articles relates to the volume of supportive literature sources ( Table 1 ). In fact, a hypothesis is usually formulated by referring to a few scientific facts or compelling evidence derived from a handful of literature sources. 19 By contrast, reviews require analyses of a large number of published documents retrieved from several well-organized and evidence-based databases in accordance with predefined search strategies. 20 , 21 , 22

Table 1

| Characteristics | Hypothesis | Narrative review | Systematic review |
|---|---|---|---|
| Authors and contributors | Any researcher with interest in the topic | Usually seasoned authors with vast experience in the subject | Any researcher with interest in the topic; information facilitators as contributors |
| Registration | Not required | Not required | Registration of the protocol with the PROSPERO registry is required to avoid redundancies |
| Reporting standards | Not available | Not available | Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) standard |
| Search strategy | Searches through credible databases to retrieve items supporting and opposing the innovative ideas | Searches through multidisciplinary and specialist databases to comprehensively cover the subject | Strict search strategy through evidence-based databases to retrieve certain types of articles (e.g., reports on trials and cohort studies), with inclusion and exclusion criteria and flowcharts of searches and selection of the required articles |
| Structure | Sections to cover general and specific knowledge on the topic, research design to test the hypothesis, and its ethical implications | Sections are chosen by the authors, depending on the topic | Introduction, Methods, Results and Discussion (IMRAD) |
| Search tools for analyses | Not available | Not available | Population, Intervention, Comparison, Outcome (Study Design) (PICO, PICOS) |
| References | Limited number | Extensive list | Limited number |
| Target journals | Handful of hypothesis journals | Numerous | Numerous |
| Publication ethics issues | Unethical statements and ideas in substandard journals | 'Copy-and-paste' writing in some reviews | Redundancy of some nonregistered systematic reviews |
| Citation impact | Low (with some exceptions) | High | Moderate |

The format of hypotheses, especially the implications part, may vary widely across disciplines. Clinicians may limit their suggestions to the clinical manifestations of diseases, outcomes, and management strategies. Basic and laboratory scientists analysing genetic, molecular, and biochemical mechanisms may need to look beyond the confines of their narrow fields and predict the social and population-based implications of the proposed ideas. 23

Advanced writing skills are essential for presenting an interesting theoretical article that appeals to a global readership. Merely listing opposing facts and ideas, without proper interpretation and analysis, may put off experienced readers. The essence of a great hypothesis is the story behind the scientific facts and evidence-based data.

ETHICAL IMPLICATIONS

The authors of hypotheses substantiate their arguments by referring to, and discerning rational points from, published articles that may have been overlooked by others. Their arguments may contradict established theories and practices and pose global ethical issues, particularly when more or less efficient medical technologies and public health interventions are devalued. Ethical issues may arise primarily because of careless references to low-priority articles, inadequate and apparently unethical methodologies, and concealed reporting of negative results. 24, 25

Misinterpretation and misunderstanding of published ideas and scientific hypotheses may complicate the issue further. For example, Alexander Fleming, whose innovative idea of using penicillin to kill susceptible bacteria saved millions of lives, warned of the consequences of uncontrolled prescription of the drug. The issue of antibiotic resistance emerged within the first ten years of penicillin use on a global scale, as overprescription reduced the efficacy of antibiotic therapies, with undesirable consequences for millions. 26

The misunderstanding of the hygiene hypothesis, which primarily aimed to shed light on the role of the microbiome in allergic and autoimmune diseases, resulted in a decline of public confidence in hygiene, with dire societal implications, forcing some experts to abandon the original idea. 27, 28 Although that hypothesis is unrelated to the issue of vaccinations, public misunderstanding has contributed to a decline in vaccination at a time of an upsurge of old and new infections.

A number of ethical issues were posed by the denial of the viral (human immunodeficiency virus; HIV) hypothesis of acquired immune deficiency syndrome (AIDS) by Peter Duesberg, who reviewed the links of illicit recreational drugs and antiretroviral therapies with AIDS and refuted the etiological role of HIV. 29 That controversial hypothesis was rejected by several journals but was eventually published without external peer review in Med Hypotheses in 2010. The publication itself raised concerns about the unconventional editorial policy of the journal, causing major perturbations and leading to more scrutinized publishing policies at journals processing hypotheses.

WHERE TO PUBLISH HYPOTHESES

Although scientific authors are currently well informed and equipped with search tools to draft evidence-based hypotheses, there are still few quality publication outlets calling for such articles. Journal editors may be hesitant to publish articles that do not adhere to any research reporting guidelines and that open the gates to harsh criticism of unconventional and untested ideas. Occasionally, editors opting for open-access publishing and upgrading their ethics regulations launch a section to selectively publish scientific hypotheses attractive to experienced readers. 30 However, the absence of approved standards for this article type, particularly the lack of any mandate to outline potential ethical implications, may lead to the publication of potentially harmful ideas in an attractive format.

Simultaneously publishing multiple or alternative hypotheses, to balance reader views and feedback, is a potential solution for mainstream scholarly journals. 31 However, that option alone is hardly applicable to emerging journals with unconventional quality checks and peer review, which accumulate papers that have been rejected multiple times by established journals.

A large group of experts view hypotheses with improbable and controversial ideas as publishable after formal editorial (in-house) checks, to preserve the authors' genuine ideas and avoid conservative amendments imposed by external peer reviewers. 32 That approach may be acceptable for established publishers with large teams of experienced editors. However, the same approach can lead to dire consequences if employed by nonselective start-up, open-access journals that process all types of articles and primarily accept those with paid publication fees. 33 In fact, hardly testable pseudoscientific ideas disputing Newton's and Einstein's seminal works, or denying climate change, have already found their niche in substandard electronic journals with soft or nonexistent peer review. 34

CITATIONS AND SOCIAL MEDIA ATTENTION

The available preliminary evidence points to the attractiveness of hypothesis articles for readers, particularly those from research-intensive countries, who actively download related documents. 35 However, citations of such articles are disproportionately low: only a small proportion (13%) of the top-downloaded hypotheses in the prestigious Med Hypotheses receive on average 5 citations per article within a two-year window. 36

With the exception of a few historic papers, the vast majority of hypotheses attract a relatively small number of citations in the long term. 36 Plausible explanations are that these articles often contain only one or a few citable points, and that the research studies suggested to test hypotheses are rarely conducted and reported, limiting the chances of citing and crediting the authors of genuine research ideas.

A snapshot analysis of the citation activity of hypothesis articles may reveal the interest of the global scientific community in their implications across various disciplines and countries. As a prime example, Strachan's hygiene hypothesis, published in 1989, 10 is still attracting numerous citations on Scopus, the largest bibliographic database. As of August 28, 2019, the number of linked citations in the database is 3,201. Of the citing articles, 160 are cited at least 160 times each (h-index of this research topic = 160). The first three citations were recorded in 1992, followed by a rapid annual increase in citation activity and a peak of 212 citations in 2015 (Fig. 1).

The top 5 sources of the citations are Clin Exp Allergy (n = 136), J Allergy Clin Immunol (n = 119), Allergy (n = 81), Pediatr Allergy Immunol (n = 69), and PLOS One (n = 44). The top 5 citing authors are the leading experts in pediatrics and allergology Erika von Mutius (Munich, Germany; number of publications citing the index article = 30), Erika Isolauri (Turku, Finland; n = 27), Patrick G. Holt (Subiaco, Australia; n = 25), David P. Strachan (London, UK; n = 23), and Bengt Björksten (Stockholm, Sweden; n = 22). The U.S. is the leading country in terms of citation activity, with 809 related documents, followed by the UK (n = 494), Germany (n = 314), Australia (n = 211), and the Netherlands (n = 177). The largest proportion of citing documents are articles (n = 1,726, 54%), followed by reviews (n = 950, 29.7%) and book chapters (n = 213, 6.7%). The main subject areas of the citing items are medicine (n = 2,581, 51.7%), immunology and microbiology (n = 1,179, 23.6%), and biochemistry, genetics and molecular biology (n = 415, 8.3%).

[Fig. 1. Annual citation counts of Strachan's hygiene hypothesis on Scopus (image: jkms-34-e300-g001.jpg)]
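The topic-level h-index used above (160 articles each cited at least 160 times) can be computed from a list of per-article citation counts. A minimal sketch in Python; the sample counts are illustrative, not Scopus data:

```python
def h_index(citations):
    """Return the largest h such that h items each have at least h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the rank-th most-cited item still has >= rank citations
        else:
            break
    return h

# A topic with articles cited 10, 5, 3, and 1 times has h-index 3:
# three articles are each cited at least 3 times.
print(h_index([10, 5, 3, 1]))  # → 3
```

The same function applied to the full citation-count list of a topic would reproduce the value reported for the hygiene hypothesis: 160 items with at least 160 citations each yield h = 160.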

Interestingly, a recent analysis of 111 publications related to Strachan's hygiene hypothesis (stating that a lack of exposure to infections in early life increases the risk of rhinitis) revealed a selection bias among 5,551 citations on Web of Science. 37 Articles supportive of the hypothesis were cited more than nonsupportive ones (odds ratio adjusted for study design, 2.2; 95% confidence interval, 1.6–3.1). A similar conclusion, pointing to a citation bias distorting the bibliometrics of hypotheses, was reached by an earlier analysis of a citation network linked to the idea that β-amyloid, which is involved in the pathogenesis of Alzheimer disease, is produced by the skeletal muscle of patients with inclusion body myositis. 38 The results of both studies are in line with the notion that 'positive' citations are more frequent in biomedicine than 'negative' ones, and that citations of articles with proven hypotheses are disproportionately common. 39
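As a rough illustration of the odds-ratio statistic reported above, a crude (unadjusted) odds ratio with a Wald-type confidence interval can be sketched as follows. The 2×2 counts here are hypothetical, and the published estimate (2.2; 1.6–3.1) was additionally adjusted for study design, which this sketch does not attempt:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio for a 2x2 table [[a, b], [c, d]] with a Wald 95% CI.

    Rows: supportive vs. nonsupportive articles; columns: cited vs. not cited.
    """
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) from the cell counts.
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 120 supportive articles cited vs. 30 not;
# 80 nonsupportive cited vs. 60 not.
print(odds_ratio_ci(120, 30, 80, 60))
```

An interval whose lower bound stays above 1, as in the published result, indicates that supportive articles were cited significantly more often than nonsupportive ones.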

Social media channels play an increasingly active role in the generation and evaluation of scientific hypotheses. Publicly discussing research questions on news aggregation platforms such as Reddit may shape hypotheses on health-related issues of global importance, such as obesity. 40 By analyzing Twitter comments, researchers may reveal both potentially valuable ideas and the unfounded claims that surround groundbreaking research. 41 Social media activities, however, are unevenly distributed across research topics, journals, and countries, and they are not always objective professional reflections of breakthroughs in science. 2, 42

Scientific hypotheses are essential for progress in science and advances in healthcare. Innovative ideas should be based on a critical overview of related scientific facts and evidence-based data that are often overlooked by others. To generate realistic hypothetical theories, authors should comprehensively analyze the literature and suggest relevant and ethically sound designs for future studies. They should also consider their hypotheses in the context of the research and publication ethics norms of their target journals. Journal editors aiming to diversify their portfolios by maintaining or introducing a hypotheses section are in a position to upgrade the guidelines for such articles by pointing to general and specific analyses of the subject, preferred study designs for testing hypotheses, and ethical implications. The last point depends closely on the specifics of each hypothesis. For example, an editorial recommendation to outline the benefits and risks of a new laboratory test or therapy may result in a more balanced article and minimize the associated risks afterwards.

Not all scientific hypotheses have immediate positive effects. Some, if not most, are never tested in properly designed research studies and never cited in credible and indexed publication outlets. Hypotheses in specialized scientific fields, particularly those hardly understandable to nonexperts, lose their attractiveness for an increasingly interdisciplinary audience. The authors' honest analysis of the benefits and limitations of their hypotheses, and the concerted efforts of all stakeholders in science communication to initiate public discussion on widely visible platforms and social media, may reveal both the rational points and the caveats of new ideas.

Disclosure: The authors have no potential conflicts of interest to disclose.

Author Contributions:

  • Conceptualization: Gasparyan AY, Yessirkepov M, Kitas GD.
  • Methodology: Gasparyan AY, Mukanova U, Ayvazyan L.
  • Writing - original draft: Gasparyan AY, Ayvazyan L, Yessirkepov M.
  • Writing - review & editing: Gasparyan AY, Yessirkepov M, Mukanova U, Kitas GD.
