
Inductive Approach (Inductive Reasoning)

The inductive approach, also known as inductive reasoning, starts with observations, and theories are proposed towards the end of the research process as a result of those observations [1]. Inductive research “involves the search for pattern from observation and the development of explanations – theories – for those patterns through series of hypotheses” [2]. In inductive studies, no theories or hypotheses apply at the beginning of the research, and the researcher is free to alter the direction of the study after the research process has commenced.

It is important to stress that the inductive approach does not imply disregarding theories when formulating research questions and objectives. This approach aims to generate meanings from the data set collected in order to identify patterns and relationships and build a theory; however, the inductive approach does not prevent the researcher from using existing theory to formulate the research question to be explored [3]. Inductive reasoning is based on learning from experience: patterns, resemblances and regularities in experience (premises) are observed in order to reach conclusions (or to generate theory).

Application of Inductive Approach (Inductive Reasoning) in Business Research

Inductive reasoning begins with detailed observations of the world and moves towards more abstract generalisations and ideas [4]. When following an inductive approach, a researcher begins with a topic and tends to develop empirical generalisations and identify preliminary relationships as the research progresses. No hypotheses are formulated at the initial stages of the research, and the researcher is not sure about the type and nature of the research findings until the study is completed.

As illustrated in the figure below, “inductive reasoning is often referred to as a ‘bottom-up’ approach to knowing, in which the researcher uses observations to build an abstraction or to describe a picture of the phenomenon that is being studied” [5].

Inductive approach (inductive reasoning)

Here is an example:

My nephew borrowed $100 last June, but he did not pay it back by September as he had promised (PREMISE). Then he assured me that he would pay it back by Christmas, but he didn’t (PREMISE). He also failed to keep his promise to pay back in March (PREMISE). I reckon I have to face the facts: my nephew is never going to pay me back (CONCLUSION).

Generally, the application of the inductive approach is associated with qualitative methods of data collection and data analysis, whereas the deductive approach is perceived to be related to quantitative methods. The following table illustrates such a classification from a broad perspective:

 
Deduction: objectivity, causation, pre-specified, outcome-oriented, numerical estimation, statistical inference

Induction: subjectivity, meaning, open-ended, process-oriented, narrative description, constant comparison

However, this classification is not absolute, and in some instances the inductive approach can be adopted to conduct quantitative research as well. The following table illustrates patterns of data analysis according to the type of research and the research approach:

 
Inductive approach: grounded theory (qualitative), exploratory data analysis (quantitative)
Deductive approach: qualitative comparative analysis (qualitative), structural equation modeling (quantitative)

When writing a dissertation in business studies it is compulsory to specify the approach you are adopting. It is good practice to include a table comparing the inductive and deductive approaches similar to the one below [6] and to discuss the impact of your choice of the inductive approach on the selection of primary data collection methods and the research process.

“Top-Down” (deductive): predicting changes; validating theoretical constructs; focus on “mean” behaviour; testing assumptions and hypotheses; constructing the most likely future; single (one landscape, one resolution); multiple (deterministic); single (homogenous preferences); single (core aggregation scale); high to low (one likely future); low (group or partial attributes).

“Bottom-Up” (inductive): understanding dynamics; robustness; emergence; resilience; focus on individual behaviour; constructing alternative futures; multiple (multiple landscapes, one resolution); multiple (stochastic); multiple (heterogeneous preferences); single or multiple (one or more aggregation scales); low to high (many likely futures); high (individual or group attributes).

My e-book, The Ultimate Guide to Writing a Dissertation in Business Studies: a step by step assistance, contains discussions of theory and application of research approaches. The e-book also explains all stages of the research process, starting from the selection of the research area to writing personal reflection. Important elements of dissertations such as research philosophy, research design, methods of data collection, data analysis and sampling are explained in this e-book in simple words.

John Dudovskiy


[1] Goddard, W. & Melville, S. (2004) “Research Methodology: An Introduction” 2nd edition, Blackwell Publishing

[2] Bernard, H.R. (2011) “Research Methods in Anthropology” 5th edition, AltaMira Press, p.7

[3] Saunders, M., Lewis, P. & Thornhill, A. (2012) “Research Methods for Business Students” 6th edition, Pearson Education Limited

[4] Neuman, W.L. (2003) “Social Research Methods: Qualitative and Quantitative Approaches” Allyn and Bacon

[5] Lodico, M.G., Spaulding, D.T. & Voegtle, K.H. (2010) “Methods in Educational Research: From Theory to Practice” John Wiley & Sons, p.10

[6] Source: Alexandiris, K.T. (2006) “Exploring Complex Dynamics in Multi Agent-Based Intelligent Systems” ProQuest


Inductive reasoning and analysis

If you conduct research inductively, you derive a theory from your observations. A quantitative study can follow up an inductive analysis to substantiate an observation and generalize your theory to a population.

Inductive reasoning is an analytical approach that involves proposing a broader theory about the research topic based on the data that you use in your study. Inductive reasoning is a bottom-up approach where researchers construct knowledge and propose new theory that emerges from the data.


Inductive and deductive reasoning go hand in hand to allow researchers to develop a theoretical understanding of the human and social world. Let's look more closely at the concept of inductive reasoning and how it applies to research and to ATLAS.ti.

When people make specific observations about a particular phenomenon and draw conclusions based solely on the substance of those observations, they engage in a form of reasoning called inductive logic. Those conclusions can serve their working theory until other specific observations challenge or contradict their understanding.

They must then further develop their understanding into a more nuanced and coherent conclusion that accommodates their broadened observations of the world. Ultimately, the inductive method aims to construct a theory that explains relationships among the studied concepts or phenomena.

Inductive reasoning examples

Inductive reasoning becomes easier to understand as a bottom-up approach to logic. To take an example from everyday life, if one were to see a cat, notice that it has a tail, and come across other creatures that have tails, then they can reach a generalized conclusion through inductive inference based on their observations: all animals with a tail are cats.

Obviously, this does not mean that the proposed theory is the end of the inductive reasoning process. They can find a dog with a tail, but they would be hard-pressed to call it a cat.

As a result, the theory they have developed from previous experience could provide a better explanation. That person would have to conduct new observations of cats and dogs to make a further inductive inference: cats and dogs have tails, but cats have sharper claws. The cycle of inductive reasoning can thus continue indefinitely to identify patterns and develop more robust theories.

Another famous example is that of the black swan. You can inductively conclude that all swans are white if you have only observed white swans so far.

This theory must be thrown out when you encounter a black swan. Then you need to revise your theory to account for the new observation.


The role of inductive reasoning in research is not always readily apparent if you only look at experimental research as a means for developing theory. Experimental research depends on deductive reasoning to confirm or dispute an existing theory, while inductive reasoning is most associated with observations and interviews.

Observation and inductive logic are most appropriate in research inquiries where the existing theory is not sufficiently developed or developed at all, requiring researchers to develop an inductive explanation about the phenomenon they are studying.

Especially in social science research, it's impossible to come to a necessarily final conclusion to the inductive reasoning process. Knowledge is always in constant development thanks to research.

Objective of inductive reasoning

The objectives of the inductive approach are to build theories from a set of data that allow researchers to make a general statement about a phenomenon while also opening up new lines of inquiry for future research.

It is also important to note that inductive research need not exist independent of existing theory. The research process always calls for connections to the existing literature to organize and generate knowledge. The main principle in applying inductive reasoning to your research is that the inferences you establish come from the data you analyze.

Is inductive analysis qualitative or quantitative?

Inductive reasoning is often associated with qualitative research, where the objective is to examine contexts, processes, or meanings that are not easily quantifiable. Quantitative analysis, on the other hand, tends to rely on deductive reasoning to test existing theories and to suggest when established knowledge requires further development.

That said, inductive reasoning skills can be used with quantitative methods to form hypotheses based on the data. The important premise of an inductive approach is that propositions and theories are generated from the patterns of a phenomenon in a particular body of data.

Frequencies and themes

Patterns that occur in abundance across observations or interviews may be useful in developing theory. In addition, qualitative researchers may also identify patterns or themes that appear only once or twice but that shed important light on the phenomenon under study.

ATLAS.ti, for example, has tools such as the Word Cloud to count the frequency of words. If you use a transcript of a speech, you can employ the Word Cloud tool and apply inductive reasoning to make a logical conclusion about a speaker's speech patterns based on the words they use most often.
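To make the idea concrete outside of any particular software, here is a minimal Python sketch of the same counting step: tally word frequencies in a transcript and inspect the most common terms. The sample text and stopword list are invented for illustration; this is not ATLAS.ti's Word Cloud implementation.

```python
from collections import Counter
import re

# Hypothetical transcript excerpt; in practice this would be loaded from a file.
transcript = """We need change, real change, and we need it now.
Change does not wait, and neither should we."""

# Tokenize into lowercase words and drop a few common function words.
stopwords = {"we", "and", "it", "does", "not", "should", "the", "a"}
words = re.findall(r"[a-z']+", transcript.lower())
frequencies = Counter(w for w in words if w not in stopwords)

# The most frequent terms hint at patterns worth coding for, e.g. "change".
print(frequencies.most_common(5))
```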


Deductive and inductive research are contrasting but complementary approaches used in scientific work. To clarify the difference, deductive approaches examine theoretical inferences from the top to bottom, while inductive methods aim to generate theoretical inferences from the bottom up. In other words, deductive reasoning works with current facts, while inductive reasoning seeks to create a new set of facts.

Looking at cats and dogs

To return to the example about cats and dogs, an example of a deductive inference would be one that uses an existing conclusion that all cats have tails and sharp claws. As a result, if someone finds an animal with a tail and sharp claws, they can employ deductive reasoning based on the above conclusion to call that animal a cat. Naturally, the more refined the theories employed, the more a researcher can rely on deductive reasoning.

The two approaches are not mutually exclusive and can be combined in the same scientific study. You can, for instance, build a code system starting with some deductively derived concepts, which you enrich throughout the analysis process with codes that you develop from the data inductively. In this sense, inductive and deductive reasoning both contribute to the analysis of your research.

You can use the Code Manager in ATLAS.ti to differentiate between the two sets of codes to organize inductive and deductive approaches in the same project. Colors and code groups can help you distinguish between the different kinds of codes you use to conduct your analysis.

For more complex research projects, smart codes can also facilitate the organization of your research by identifying segments of data that meet a certain set of criteria based on your codes.



The research process can often be divided into data collection and data analysis. In qualitative research, coding is typically the intermediary step that facilitates analysis, moving you forward in developing conclusions and explaining them using theories.

Data collection

Inductive reasoning can be applied to most methods of data collection. That said, qualitative research methods that call for observations or interactions with research participants allow the researcher to employ inductive reasoning during data collection.

Imagine an interview project to determine the effects of social media usage. In initial interviews with people, the researcher may notice that many respondents mention physical effects like eye strain or lack of sleep. When the researcher believes there is a connection, they may adjust the questions they ask respondents to find more evidence of this causal relationship.

Similarly, with observations, a researcher employs inductive reasoning when they notice something that occurs frequently. For example, they might notice that people using smartphones in public tend to get in more accidents (e.g., bumping into others or tripping over objects). As a result, they can adjust their observations by going to crowded places where it is more likely people using smartphones might suffer more accidents.

An inductive reasoning approach to qualitative data analysis requires looking at your project to identify key segments of data that will ultimately serve as the premises for your development of theory. The theory can be further developed after identifying patterns and adjusting the focus to look for more evidence of or exceptions to those patterns.


In ATLAS.ti, the process for employing an inductive approach starts with looking at your data. What patterns seem apparent? What shows up in the data? What instances of data appear most relevant to your research inquiry?

Give each pattern a short but descriptive label that forms one of your codes. Codes are short because they help summarize large segments for quick understanding or to categorize discrete segments in separate areas of your research project.

These codes can be created directly in the Code Manager, or you may find it easier to create codes while reading the data. As you read through your project, you can create new codes and then apply them to segments of data that are called quotations. Quotations given the same code can be said to be related to each other by the same broader pattern, thus establishing connections between different data segments with the same code.

As an example of this relationship, imagine you are coding a set of documents that contain people's schedules in everyday life. These schedules might mention activities such as "tennis practice," "doctor's appointment," and "movie night with partner." Looking at these schedules, you might want to apply codes such as "fun activities" and "important tasks" to these items to get a sense of how often each category of activity occurs in people's everyday routines.
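A plain-Python sketch of this coding-and-tallying logic is shown below. The schedule items, the two codes, and the keyword rules that assign them are illustrative assumptions rather than a prescribed coding frame; in practice codes are usually assigned by reading the data, not by keyword matching.

```python
from collections import Counter

# Hypothetical schedule items (the "quotations") drawn from participants' diaries.
schedule_items = [
    "tennis practice",
    "doctor's appointment",
    "movie night with partner",
    "quarterly tax filing",
    "picnic with friends",
]

# Illustrative coding rules mapping keywords to codes.
coding_rules = {
    "fun activities": ["tennis", "movie", "picnic"],
    "important tasks": ["doctor", "tax"],
}

# Apply codes to each quotation; one quotation may receive several codes.
coded = {item: [code for code, keywords in coding_rules.items()
                if any(k in item for k in keywords)]
         for item in schedule_items}

# Tally how often each category of activity occurs across the schedules.
code_counts = Counter(code for codes in coded.values() for code in codes)
print(coded)
print(code_counts)  # e.g. Counter({'fun activities': 3, 'important tasks': 2})
```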

Auto-coding

Coding your data can be a time-consuming process, but it is required when applying inductive reasoning to your research data. Traditionally, researchers code one document, or source of data, at a time.

In ATLAS.ti, tools like the Text Search function can quicken the coding process by allowing researchers to search for a specific word or phrase in their project and code segments containing their desired search term. If a particular code can be represented by a certain word or phrase, the Text Search tool can allow you to organize the relevant data in one place for quick and easy coding. You can use the Word Cloud to inductively identify specific words or phrases and then code for these using the Text Search tool.
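The general search-then-code idea can be sketched in a few lines of Python, independent of any particular tool. The interview excerpts, the search term, and the code label below are hypothetical; the function only illustrates retrieving segments that contain a term and attaching a code to them, and is not ATLAS.ti's Text Search API.

```python
def code_segments_by_search(documents, search_term, code):
    """Return (document id, sentence, code) for every sentence containing the term."""
    coded_segments = []
    for doc_id, text in documents.items():
        for sentence in text.split("."):
            if search_term.lower() in sentence.lower():
                coded_segments.append((doc_id, sentence.strip(), code))
    return coded_segments

# Hypothetical interview excerpts.
documents = {
    "interview_01": "My eyes hurt after scrolling all night. I barely sleep.",
    "interview_02": "I feel fine. Sometimes my eyes get tired from the screen.",
}

# Every sentence mentioning "eyes" gets the (assumed) code "physical effects".
print(code_segments_by_search(documents, "eyes", "physical effects"))
```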


The Text Search function also works with deductive reasoning, particularly when existing theories can be associated with particular words or phrases you can look for in your project. Whatever the approach, ATLAS.ti can help you save time in coding your research.

Further data analysis

Once your data has been coded, you can look at the Code Manager to examine which codes have been used the most. This will aid the inductive reasoning process by identifying what occurs the most often in your data.

Not only can you apply inductive reasoning through the occurrence of codes, but also the co-occurrence of codes as well. Keep in mind that quotations can contain multiple codes and that quotations with different codes can overlap.

When text is associated with more than one code, those codes co-occur with each other. Researchers can use that co-occurrence to infer relationships between different phenomena.

ATLAS.ti has a tool called Code Co-Occurrence Analysis, where you can examine codes generated through inductive reasoning and identify potential relationships between those codes. The Code Co-Occurrence table lists the frequencies for different pairs of codes that you specify in ATLAS.ti.
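The underlying counting logic is simple: for every coded segment, look at each pair of codes attached to it and increment that pair's count. The Python sketch below illustrates this on made-up quotations; it is not the Code Co-Occurrence tool itself.

```python
from collections import Counter
from itertools import combinations

# Hypothetical quotations, each tagged with one or more codes.
quotation_codes = [
    {"eye strain", "late-night use"},
    {"lack of sleep", "late-night use"},
    {"eye strain", "late-night use", "work stress"},
    {"work stress"},
]

# Count each unordered pair of codes that appears on the same quotation.
co_occurrence = Counter()
for codes in quotation_codes:
    for pair in combinations(sorted(codes), 2):
        co_occurrence[pair] += 1

# Frequent pairs suggest relationships worth theorizing about.
for pair, count in co_occurrence.most_common():
    print(pair, count)
```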

Drawing conclusions

Codes based on inductive reasoning are often brought together into a theory or framework. You can look at both frequently occurring codes as well as codes that appear even only once or twice to build premises for your theory. What is most important is that the different parts of your theory fit together in a coherent manner and explain the phenomenon under study. Building conclusions relies on first drawing tentative conclusions and then verifying these conclusions in the data. You might adjust your conclusions as you find different examples or disconfirming evidence. This iterative process contributes to building a meaningful theory or framework.

Frequencies of code co-occurrence represent potential relationships between codes that are potentially useful to theoretical development. The frequency counts for codes and code co-occurrences can all be exported into Microsoft Excel using ATLAS.ti's export functions. By exporting these counts into a spreadsheet, researchers can then run further statistical analysis on their project. More complex statistical analyses can also be conducted by exporting the entire ATLAS.ti project and importing it into a statistical analysis software, such as SPSS or R.
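As a generic illustration of that export step (not ATLAS.ti's own export function), the following sketch writes a small, hypothetical set of co-occurrence counts to a CSV file that Excel, SPSS, or R can open.

```python
import csv

# Example counts as produced by a co-occurrence tally (hypothetical values).
co_occurrence = {("eye strain", "late-night use"): 2,
                 ("lack of sleep", "late-night use"): 1}

# Write one row per code pair so the counts can be analyzed elsewhere.
with open("code_cooccurrence.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["code_a", "code_b", "count"])
    for (code_a, code_b), count in co_occurrence.items():
        writer.writerow([code_a, code_b, count])
```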

A more holistic research inquiry can start with inductive research methods but should look at different approaches to research in order to fully understand a particular concept or phenomenon. You may want to collect data for deductive research to apply your theories developed through inductive reasoning to new information, or you may look at abductive reasoning to look at your object of inquiry in an entirely new way. Synthesizing your research with a quantitative approach may also be useful if you are looking to identify any statistical generalization in your research inquiry. Whatever your research, however, you can benefit from addressing your research questions from multiple angles.

Abductive reasoning

While the use of deductive versus inductive approaches in research is often discussed, abductive reasoning is the third type of reasoning that also warrants some attention.

Abductive reasoning can be seen as sitting between inductive and deductive forms of reasoning. Abduction involves developing an argument based on the information available in your data and then verifying or further elaborating on these inductive findings by referring to existing theories. Iterating between data and literature thus informs abductive analysis.

Employing quantitative research

Theories built on inductive reasoning can be followed up by quantitative research to confirm the research through statistical generalizations. Generally, any research that employs deductive reasoning can be used to support inductive inferences. Still, quantitative research at scale is useful in confirming the applicability of theory across large populations or multiple contexts.

Regardless of the reasoning or methodology employed, all good research has the capability of generating, strengthening, and extending theory when it incorporates sound, transparent analysis. ATLAS.ti can facilitate the analytical process of research by making the coding process faster and more intuitive so that researchers can spend more time critically reflecting on their analysis and developing theory.


Part 2: Conceptualizing your research project

8. Reasoning and causality

Chapter outline.

  • Inductive and deductive reasoning (15 minute read)
  • Nomothetic causal relationships (17 minute read)
  • Idiographic causal relationships (12 minute read)
  • Mixed methods research (8 minute read)

Content warning: examples in this chapter include references to sexual harassment, domestic violence, gender-based violence, the child welfare system, substance use disorders, neonatal abstinence syndrome, child abuse, racism, and sexism.

8.1 Inductive and deductive reasoning

Learning objectives.

Learners will be able to…

  • Describe inductive and deductive reasoning and provide examples of each
  • Identify how inductive and deductive reasoning are complementary

Congratulations! You survived the chapter on theories and paradigms. My experience has been that many students have a difficult time thinking about theories and paradigms because they perceive them as “intangible” and thereby hard to connect to social work research. I even had one student who said she got frustrated just reading the word “philosophy.”

Rest assured, you do not need to become a theorist or philosopher to be an effective social worker or researcher. However, you should have a good sense of what theory or theories will be relevant to your project, as well as how this theory, along with your working question, fits within the three broad research paradigms we reviewed. If you don’t have a good idea about those at this point, it may be a good opportunity to pause and read more about the theories related to your topic area.

Theories structure and inform social work research. The converse is also true: research can structure and inform theory. The reciprocal relationship between theory and research often becomes evident to students when they consider the relationships between theory and research in inductive and deductive approaches to research. In both cases, theory is crucial. But the relationship between theory and research differs for each approach.

While inductive and deductive approaches to research are quite different, they can also be complementary. Let’s start by looking at each one and how they differ from one another. Then we’ll move on to thinking about how they complement one another.

Inductive reasoning

A researcher using inductive reasoning begins by collecting data that is relevant to their topic of interest. Once a substantial amount of data have been collected, the researcher will then step back from data collection to get a bird’s eye view of their data. At this stage, the researcher looks for patterns in the data, working to develop a theory that could explain those patterns. Thus, when researchers take an inductive approach, they start with a particular set of observations and move to a more general set of propositions about those experiences. In other words, they move from data to theory, or from the specific to the general. Figure 8.1 outlines the steps involved with an inductive approach to research.

Figure 8.1: A researcher moving from a more particular focus on data to a more general focus on theory by looking for patterns

There are many good examples of inductive research, but we’ll look at just a few here. One fascinating study in which the researchers took an inductive approach is Katherine Allen, Christine Kaestle, and Abbie Goldberg’s (2011) [1] study of how boys and young men learn about menstruation. To understand this process, Allen and her colleagues analyzed the written narratives of 23 young cisgender men in which the men described how they learned about menstruation, what they thought of it when they first learned about it, and what they think of it now. By looking for patterns across all 23 cisgender men’s narratives, the researchers were able to develop a general theory of how boys and young men learn about this aspect of girls’ and women’s biology. They conclude that sisters play an important role in boys’ early understanding of menstruation, that menstruation makes boys feel somewhat separated from girls, and that as they enter young adulthood and form romantic relationships, young men develop more mature attitudes about menstruation. Note how this study began with the data—men’s narratives of learning about menstruation—and worked to develop a theory.

In another inductive study, Kristin Ferguson and colleagues (Ferguson, Kim, & McCoy, 2011) [2] analyzed empirical data to better understand how to meet the needs of young people who are homeless. The authors analyzed focus group data from 20 youth at a homeless shelter. From these data they developed a set of recommendations for those interested in applied interventions that serve homeless youth. The researchers also developed hypotheses for others who might wish to conduct further investigation of the topic. Though Ferguson and her colleagues did not test their hypotheses, their study ends where most deductive investigations begin: with a theory and a hypothesis derived from that theory. Section 8.4 discusses the use of mixed methods research as a way for researchers to test hypotheses created in a previous component of the same research project.

You will notice from both of these examples that inductive reasoning is most commonly found in studies using qualitative methods, such as focus groups and interviews. Because inductive reasoning involves the creation of a new theory, researchers need very nuanced data on how the key concepts in their working question operate in the real world. Qualitative data is often drawn from lengthy interactions and observations with the individuals and phenomena under examination. For this reason, inductive reasoning is most often associated with qualitative methods, though it is used in both quantitative and qualitative research.

Deductive reasoning

If inductive reasoning is about creating theories from raw data, deductive reasoning is about testing theories using data. Researchers using deductive reasoning take the steps described earlier for inductive research and reverse their order. They start with a compelling social theory, create a hypothesis about how the world should work, collect raw data, and analyze whether their hypothesis was confirmed or not. That is, deductive approaches move from a more general level (theory) to a more specific (data); whereas inductive approaches move from the specific (data) to general (theory).

A deductive approach to research is the one that people typically associate with scientific investigation. Students in English-dominant countries who are confused by inductive vs. deductive research can rest part of the blame on Sir Arthur Conan Doyle, creator of the Sherlock Holmes character. As Craig Vasey points out in his breezy introduction-to-logic book chapter, Sherlock Holmes more often used inductive rather than deductive reasoning (despite claiming to use the powers of deduction to solve crimes). By noticing subtle details in how people act, behave, and dress, Holmes finds patterns that others miss. Using those patterns, he creates a theory of how the crime occurred, dramatically revealed to the authorities just in time to arrest the suspect. Indeed, it is these flashes of insight into the patterns of data that make Holmes such a keen inductive reasoner. In social work practice, rather than detective work, inductive reasoning is supported by the intuitions and practice wisdom of social workers, just as Holmes’ reasoning is sharpened by his experience as a detective.

So, if deductive reasoning isn’t Sherlock Holmes’ observation and pattern-finding, how does it work? It starts with what you have already done in Chapters 3 and 4, reading and evaluating what others have done to study your topic. It continued with Chapter 5, discovering what theories already try to explain how the concepts in your working question operate in the real world. Tapping into this foundation of knowledge on their topic, the researcher studies what others have done, reads existing theories of whatever phenomenon they are studying, and then tests hypotheses that emerge from those theories. Figure 8.2 outlines the steps involved with a deductive approach to research.

Figure 8.2: Moving from general to specific using deductive reasoning

While not all researchers follow a deductive approach, many do. We’ll now take a look at a couple excellent recent examples of deductive research. 

In a study of US law enforcement responses to hate crimes, Ryan King and colleagues (King, Messner, & Baller, 2009) [3] hypothesized that law enforcement’s response would be less vigorous in areas of the country that had a stronger history of racial violence. The authors developed their hypothesis from prior research and theories on the topic. They tested the hypothesis by analyzing data on states’ lynching histories and hate crime responses. Overall, the authors found support for their hypothesis and illustrated an important application of critical race theory.

In another recent deductive study, Melissa Milkie and Catharine Warner (2011) [4] studied the effects of different classroom environments on first graders’ mental health. Based on prior research and theory, Milkie and Warner hypothesized that negative classroom features, such as a lack of basic supplies and heat, would be associated with emotional and behavioral problems in children. One might associate this research with Maslow’s hierarchy of needs or systems theory. The researchers found support for their hypothesis, demonstrating that policymakers should be paying more attention to the mental health outcomes of children’s school experiences, just as they track academic outcomes (American Sociological Association, 2011). [5]

Complementary approaches

While inductive and deductive approaches to research seem quite different, they can actually be rather complementary. In some cases, researchers will plan for their study to include multiple components, one inductive and the other deductive. In other cases, a researcher might begin a study with the plan to conduct either inductive or deductive research, but then discovers along the way that the other approach is needed to help illuminate findings. Here is an example of each such case.

Dr. Amy Blackstone (n.d.), author of Principles of sociological inquiry: Qualitative and quantitative methods , relates a story about her mixed methods research on sexual harassment.

We began the study knowing that we would like to take both a deductive and an inductive approach in our work. We therefore administered a quantitative survey, the responses to which we could analyze in order to test hypotheses, and also conducted qualitative interviews with a number of the survey participants. The survey data were well suited to a deductive approach; we could analyze those data to test hypotheses that were generated based on theories of harassment. The interview data were well suited to an inductive approach; we looked for patterns across the interviews and then tried to make sense of those patterns by theorizing about them. For one paper (Uggen & Blackstone, 2004) [6] , we began with a prominent feminist theory of the sexual harassment of adult women and developed a set of hypotheses outlining how we expected the theory to apply in the case of younger women’s and men’s harassment experiences. We then tested our hypotheses by analyzing the survey data. In general, we found support for the theory that posited that the current gender system, in which heteronormative men wield the most power in the workplace, explained workplace sexual harassment—not just of adult women but of younger women and men as well. In a more recent paper (Blackstone, Houle, & Uggen, 2006), [7] we did not hypothesize about what we might find but instead inductively analyzed interview data, looking for patterns that might tell us something about how or whether workers’ perceptions of harassment change as they age and gain workplace experience. From this analysis, we determined that workers’ perceptions of harassment did indeed shift as they gained experience and that their later definitions of harassment were more stringent than those they held during adolescence. Overall, our desire to understand young workers’ harassment experiences fully—in terms of their objective workplace experiences, their perceptions of those experiences, and their stories of their experiences—led us to adopt both deductive and inductive approaches in the work. (Blackstone, n.d., p. 21) [8]

Researchers may not always set out to employ both approaches in their work but sometimes find that their use of one approach leads them to the other. One such example is described eloquently in Russell Schutt’s Investigating the Social World (2006). [9] As Schutt describes, researchers Sherman and Berk (1984) [10] conducted an experiment to test two competing theories of the effects of punishment on deterring deviance (in this case, domestic violence). Specifically, Sherman and Berk hypothesized that deterrence theory (see Williams, 2005 [11] for more information on that theory) would provide a better explanation of the effects of arresting accused batterers than labeling theory. Deterrence theory predicts that arresting an accused spouse batterer will reduce future incidents of violence. Conversely, labeling theory predicts that arresting accused spouse batterers will increase future incidents (see Policastro & Payne, 2013 [12] for more information on that theory). Figure 8.3 summarizes the two competing theories and the hypotheses Sherman and Berk set out to test.

Figure 8.3: Deterrence theory predicts arrests lead to lower violence while labeling theory predicts higher violence

Results from these follow-up studies were mixed. In some cases, arrest deterred future incidents of violence. In other cases, it did not. This left the researchers with new data that they needed to explain. The researchers therefore took an inductive approach in an effort to make sense of their latest empirical observations. The new studies revealed that arrest seemed to have a deterrent effect for those who were married and employed, but that it led to increased offenses for those who were unmarried and unemployed. Researchers thus turned to control theory, which posits that having some stake in conformity through the social ties provided by marriage and employment deters future offending, as the better explanation (see Davis et al., 2000 [14] for more information on this theory).

Predictions of control theory on incidents of domestic violence

What the original Sherman and Berk study, along with the follow-up studies, show us is that we might start with a deductive approach to research, but then, if confronted by new data we must make sense of, we may move to an inductive approach. We will expand on these possibilities in section 8.4 when we discuss mixed methods research.

Ethical and critical considerations

Deductive and inductive reasoning, just like other components of the research process, come with ethical and cultural considerations for researchers. Specifically, deductive research is limited by existing theory. Because scientific inquiry has been shaped by oppressive forces such as sexism, racism, and colonialism, what is considered theory is largely based in Western, white-male-dominant culture. Thus, researchers doing deductive research may artificially limit themselves to ideas that were derived from this context. Non-Western researchers, international social workers, and practitioners working with non-dominant groups may find deductive reasoning of limited help if theories do not adequately describe other cultures.

While these flaws in deductive research may make inductive reasoning seem more appealing, on closer inspection you’ll find similar issues apply. A researcher using inductive reasoning applies their intuition and lived experience when analyzing participant data. They will take note of particular themes, conceptualize their definition, and frame the project using their unique psychology. Since everyone’s internal world is shaped by their cultural and environmental context, inductive reasoning conducted by Western researchers may unintentionally reinforce lines of inquiry that derive from cultural oppression.

Inductive reasoning is also shaped by those invited to provide the data to be analyzed. For example, I recently worked with a student who wanted to understand the impact of child welfare supervision on children born dependent on opiates and methamphetamine. Due to the potential harm that could come from interviewing families and children who are in foster care or under child welfare supervision, the researcher decided to use inductive reasoning and to only interview child welfare workers.

Talking to practitioners is a good idea for feasibility, as they are less vulnerable than clients. However, any theory that emerges out of these observations will be substantially limited, as it would be devoid of the perspectives of parents, children, and other community members who could provide a more comprehensive picture of the impact of child welfare involvement on children. Notice that each of these groups has less power than child welfare workers in the service relationship. Attending to which groups were used to inform the creation of a theory and the power of those groups is an important critical consideration for social work researchers.

As you can see, when researchers apply theory to research they must wrestle with the history and hierarchy around knowledge creation in that area. In deductive studies, the researcher is positioned as the expert, similar to the positivist paradigm presented in Chapter 5. We’ve discussed a few of the limitations on the knowledge of researchers in this subsection, but the position of the “researcher as expert” is inherently problematic. However, it should also not be taken to an extreme. A researcher who approaches inductive inquiry as a naïve learner is also inherently problematic. Just as competence in social work practice requires a baseline of knowledge prior to entering practice, so does competence in social work research. Because a truly naïve intellectual position is impossible—we all have preexisting ways we view the world and are not fully aware of how they may impact our thoughts—researchers should be well-read in the topic area of their research study but humble enough to know that there is always much more to learn.

Key Takeaways

  • Inductive reasoning begins with a set of empirical observations, seeking patterns in those observations, and then theorizing about those patterns.
  • Deductive reasoning begins with a theory, developing hypotheses from that theory, and then collecting and analyzing data to test the truth of those hypotheses.
  • Inductive and deductive reasoning can be employed together for a more complete understanding of the research topic.
  • Though researchers don’t always set out to use both inductive and deductive reasoning in their work, they sometimes find that new questions arise in the course of an investigation that can best be answered by employing both approaches.
Exercises

  • Identify one theory and how it helps you understand your topic and working question.

I encourage you to find a specific theory from your topic area, rather than relying only on the broad theoretical perspectives like systems theory or the strengths perspective. Those broad theoretical perspectives are okay…but I promise that searching for theories about your topic will help you conceptualize and design your research project.

  • Using the theory you identified, describe what you expect the answer to be to your working question.

8.2 Nomothetic causal explanations

Learning objectives.

Learners will be able to…

  • Define and provide an example of idiographic causal relationships
  • Describe the role of causality in quantitative research as compared to qualitative research
  • Identify, define, and describe each of the main criteria for nomothetic causal relationships
  • Describe the difference between and provide examples of independent, dependent, and control variables
  • Define hypothesis, state a clear hypothesis, and discuss the respective roles of quantitative and qualitative research when it comes to hypotheses

Causality  refers to the idea that one event, behavior, or belief will result in the occurrence of another, subsequent event, behavior, or belief. In other words, it is about cause and effect. It seems simple, but you may be surprised to learn there is more than one way to explain how one thing causes another. How can that be? How could there be many ways to understand causality?

Think back to our discussion in Section 5.3 on paradigms [insert chapter link plus link to section 1.2]. You’ll remember the positivist paradigm as the one that believes in objectivity. Positivists look for causal explanations that are universally true for everyone, everywhere  because they seek objective truth. Interpretivists, on the other hand, look for causal explanations that are true for individuals or groups in a specific time and place because they seek subjective truths. Remember that for interpretivists, there is not one singular truth that is true for everyone, but many truths created and shared by others.

“Are you trying to generalize or nah?”

One of my favorite classroom moments occurred in the early days of my teaching career. Students were providing peer feedback on their working questions. I overheard one group who was helping someone rephrase their research question. A student asked, “Are you trying to generalize or nah?” Teaching is full of fun moments like that one. Answering that one question can help you understand how to conceptualize and design your research project.

Nomothetic causal explanations are incredibly powerful. They allow scientists to make predictions about what will happen in the future, with a certain margin of error. Moreover, they allow scientists to generalize —that is, make claims about a large population based on a smaller sample of people or items. Generalizing is important. We clearly do not have time to ask everyone their opinion on a topic or test a new intervention on every person. We need a type of causal explanation that helps us predict and estimate truth in all situations.
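As a rough illustration of what "with a certain margin of error" means when generalizing from a sample, the sketch below computes a normal-approximation 95% confidence interval for a proportion; the survey numbers are made up.

```python
import math

# Hypothetical survey: 312 of 500 sampled respondents support a policy.
n, successes = 500, 312
p_hat = successes / n

# Normal-approximation 95% confidence interval: p ± 1.96 * sqrt(p(1-p)/n).
margin_of_error = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
print(f"Estimated support: {p_hat:.3f} ± {margin_of_error:.3f}")
# We generalize from the sample to the population within this margin of error.
```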

Generally, nomothetic causal relationships work best for explanatory research projects [INSERT SECTION LINK]. They also tend to use quantitative research: by boiling things down to numbers, one can use the universal language of mathematics to use statistics to explore those relationships. On the other hand, descriptive and exploratory projects often fit better with idiographic causality. These projects do not usually try to generalize, but instead investigate what is true for individuals, small groups, or communities at a specific point in time. You will learn about this type of causality in the next section. Here, we will assume you have an explanatory working question. For example, you may want to know about the risk and protective factors for a specific diagnosis or how a specific therapy impacts client outcomes.

What do nomothetic causal explanations look like?

Nomothetic causal explanations express relationships between variables. The term variable has a scientific definition; this one is from Gillespie & Wagner (2018): “a logical grouping of attributes that can be observed and measured and is expected to vary from person to person in a population” (p. 9). [15] More practically, variables are the key concepts in your working question. You know, the things you plan to observe when you actually do your research project, conduct your surveys, complete your interviews, etc. These things have two key properties. First, they vary, as in they do not remain constant. “Age” varies by number. “Gender” varies by category. But they both vary. Second, they have attributes. So the variable “health professions” has attributes or categories, such as social worker, nurse, counselor, etc.

It’s also worth reviewing what is not a variable. Well, things that don’t change (or vary) aren’t variables. If you planned to do a study on how gender impacts earnings but your study only contained women, that concept would not vary. Instead, it would be a constant. Another common mistake I see in students’ explanatory questions is mistaking an attribute for a variable. “Men” is not a variable. “Gender” is a variable. “Virginia” is not a variable. The variable is the “state or territory” in which someone or something is physically located.

When one variable causes another, we have what researchers call independent and dependent variables. For example, in a study investigating the impact of spanking on aggressive behavior, spanking would be the independent variable and aggressive behavior would be the dependent variable. An independent variable is the cause, and a dependent variable is the effect. Why are they called that? Dependent variables depend on independent variables. If all of that gets confusing, just remember the graphical relationship in Figure 8.5.

Figure 8.5: The letters IV on the left side with an arrow pointing to the letters DV on the right

Write out your working question, as it exists now. As we said previously in the subsection, we assume you have an explanatory research question for learning this section.

  • Write out a diagram similar to Figure 8.5.
  • Put your independent variable on the left and the dependent variable on the right.
  • Can your variables vary?
  • Do they have different attributes or categories that vary from person to person?
  • How does the theory you identified in section 8.1 help you understand this causal relationship?

If the theory you’ve identified isn’t much help to you or seems unrelated, it’s a good indication that you need to read more literature about the theories related to your topic.

For some students, your working question may not be specific enough to list an independent or dependent variable clearly. You may have “risk factors” in place of an independent variable, for example. Or “effects” as a dependent variable. If that applies to your research question, get specific for a minute even if you have to revise this later. Think about which specific risk factors or effects you are interested in. Consider a few options for your independent and dependent variable and create diagrams similar to Figure 8.5.

Finally, you are likely to revisit your working question so you may have to come back to this exercise to clarify the causal relationship you want to investigate.

For a ten-cent word like “nomothetic,” these causal relationships should look pretty basic to you. They should look like “x causes y.” Indeed, you may be looking at your causal explanation and thinking, “wow, there are so many other things I’m missing in here.” In fact, maybe my dependent variable sometimes causes changes in my independent variable! For example, a working question asking about poverty and education might ask how poverty makes it more difficult to graduate college or how high college debt impacts income inequality after graduation. Nomothetic causal relationships are slices of reality. They boil things down to two (or often more) key variables and assert a one-way causal explanation between them. This is by design, as they are trying to generalize across all people to all situations. The more complicated, circular, and often contradictory causal explanations are idiographic, which we will cover in the next section of this chapter.

Developing a hypothesis

A hypothesis is a statement describing a researcher’s expectation regarding what they anticipate finding. Hypotheses in quantitative research express a nomothetic causal relationship that the researcher expects to test and find to be true or false. A hypothesis is written to describe the expected relationship between the independent and dependent variables. In other words, write the answer to your working question using your variables. That’s your hypothesis! Make sure you haven’t introduced new variables into your hypothesis that are not in your research question. If you have, write out your hypothesis as in Figure 8.5.

A good hypothesis should be testable using social science research methods. That is, you can use a social science research project (like a survey or experiment) to test whether it is true or not. A good hypothesis is also specific about the relationship it explores. For example, a student project that hypothesizes, “families involved with child welfare agencies will benefit from Early Intervention programs,” is not specific about what benefits it plans to investigate. For this student, I advised her to take a look at the empirical literature and theory about Early Intervention and see what outcomes are associated with these programs. This way, she could more clearly state the dependent variable in her hypothesis, perhaps looking at reunification, attachment, or developmental milestone achievement in children and families under child welfare supervision.

Your hypothesis should be an informed prediction based on a theory or model of the social world. For example, you may hypothesize that treating mental health clients with warmth and positive regard is likely to help them achieve their therapeutic goals. That hypothesis would be based on the humanistic practice models of Carl Rogers. Using previous theories to generate hypotheses is an example of deductive research. If Rogers’ theory of unconditional positive regard is accurate, a study comparing clinicians who used it versus those who did not would show more favorable treatment outcomes for clients receiving unconditional positive regard.

Let’s consider a couple of examples. In research on sexual harassment (Uggen & Blackstone, 2004), [16] one might hypothesize, based on feminist theories of sexual harassment, that more females than males will experience specific sexually harassing behaviors. What is the causal relationship being predicted here? Which is the independent and which is the dependent variable? In this case, researchers hypothesized that a person’s sex (independent variable) would predict their likelihood to experience sexual harassment (dependent variable).

Hypothesis describing a causal relationship between sex and sexual harassment

Sometimes researchers will hypothesize that a relationship will take a specific direction. As a result, an increase or decrease in one area might be said to cause an increase or decrease in another. For example, you might choose to study the relationship between age and support for legalization of marijuana. Perhaps you’ve taken a sociology class and, based on the theories you’ve read, you hypothesize that age is negatively related to support for marijuana legalization. [17] What have you just hypothesized?

You have hypothesized that as people get older, the likelihood of their supporting marijuana legalization decreases. Thus, as age (your independent variable) moves in one direction (up), support for marijuana legalization (your dependent variable) moves in another direction (down). So, a direct relationship (or positive correlation) involves two variables going in the same direction, and an inverse relationship (or negative correlation) involves two variables going in opposite directions. If writing hypotheses feels tricky, it is sometimes helpful to draw them out and depict each of the two hypotheses we have just discussed.

Figure 8.7: As age increases, support for marijuana legalization decreases
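One common way to check the direction of such a hypothesized relationship against data is a correlation coefficient: a negative value is consistent with the inverse relationship hypothesized here. The Python sketch below uses small fabricated numbers purely to show the computation, not real survey data.

```python
import numpy as np

# Hypothetical data: respondent ages and support for legalization (0-10 scale).
age = np.array([18, 25, 33, 41, 50, 62, 70])
support = np.array([9, 8, 7, 6, 5, 3, 2])

# Pearson correlation: a value near -1 means support falls as age rises.
r = np.corrcoef(age, support)[0, 1]
print(f"correlation between age and support: {r:.2f}")
```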

It’s important to note that once a study starts, it is unethical to change your hypothesis to match the data you find. For example, what happens if you conduct a study to test the hypothesis from Figure 8.7 on support for marijuana legalization, but you find no relationship between age and support for legalization? It means that your hypothesis was incorrect, but that’s still valuable information. It would challenge what the existing literature says on your topic, demonstrating that more research needs to be done to figure out the factors that impact support for marijuana legalization. Don’t be embarrassed by negative results, and definitely don’t change your hypothesis to make it appear correct all along!

Criteria for establishing a nomothetic causal relationship

Let’s say you conduct your study and you find evidence that supports your hypothesis, as age increases, support for marijuana legalization decreases. Success! Causal explanation complete, right? Not quite.

You’ve only established one of the criteria for causality. The criteria for causality must include all of the following: covariation, plausibility, temporality, and nonspuriousness. In our example from Figure 8.7, we have established only one criterion: covariation. When variables covary, they vary together. Both age and support for marijuana legalization vary in our study. Our sample contains people of varying ages and varying levels of support for marijuana legalization. If, for example, we only included 16-year-olds in our study, age would be a constant, not a variable.

Just because there might be some correlation between two variables does not mean that a causal relationship between the two is really plausible. Plausibility means that in order to make the claim that one event, behavior, or belief causes another, the claim has to make sense. It makes sense that people from previous generations would have different attitudes towards marijuana than younger generations. People who grew up in the time of Reefer Madness or the hippies may hold different views than those raised in an era of legalized medicinal and recreational use of marijuana. Plausibility is of course helped by basing your causal explanation in existing theoretical and empirical findings.

Once we’ve established that there is a plausible relationship between the two variables, we also need to establish whether the cause occurred before the effect, the criterion of temporality . A person’s age is a quality that appears long before any opinions on drug policy, so temporally the cause comes before the effect. It wouldn’t make any sense to say that support for marijuana legalization makes a person’s age increase. Even if you could predict someone’s age based on their support for marijuana legalization, you couldn’t say someone’s age was caused by their support for legalization of marijuana.

Finally, scientists must establish nonspuriousness. A spurious relationship is one in which an association between two variables appears to be causal but can in fact be explained by some third variable. This third variable is often called a confound or confounding variable because it clouds and confuses the relationship between your independent and dependent variable, making it difficult to discern what the true causal relationship actually is.

A comic making a joke about correlation and causation

Continuing with our example, we could point to the fact that older adults are less likely to have used marijuana recreationally. Maybe it is actually recreational use of marijuana that leads people to be more open to legalization, not their age. In this case, our confounding variable would be recreational marijuana use. Perhaps the relationship between age and attitudes towards legalization is a spurious relationship that is accounted for by previous use. This is also referred to as the third variable problem , where a seemingly true causal relationship is actually caused by a third variable not in the hypothesis. In this example, the relationship between age and support for legalization could be more about having tried marijuana than the age of the person.

Quantitative researchers are sensitive to the effects of potentially spurious relationships. As a result, they will often measure these third variables in their study, so they can control for their effects in their statistical analysis. These are called  control variables , and they refer to potentially confounding variables whose effects are controlled for mathematically in the data analysis process. Control variables can be a bit confusing, and we will discuss them more in Chapter 10, but think about it as an argument between you, the researcher, and a critic.

Researcher: “The older a person is, the less likely they are to support marijuana legalization.” Critic: “Actually, it’s more about whether a person has used marijuana before. That is what truly determines whether someone supports marijuana legalization.” Researcher: “Well, I measured previous marijuana use in my study and mathematically controlled for its effects in my analysis. Age explains most of the variation in attitudes towards marijuana legalization.”
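
In practice, "mathematically controlled for" usually means adding the potential confound as another predictor in a regression model. The sketch below is a hedged illustration with simulated data; statsmodels is just one common tool for this, and the variable names (age, prior_use, support) are hypothetical rather than drawn from an actual study.

```python
# Minimal sketch: estimate the age effect with and without a control variable.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 500
age = rng.integers(18, 80, size=n)
# hypothetical data-generating process: younger respondents are more likely to have tried marijuana
prior_use = (rng.random(n) < np.clip(1.1 - age / 80, 0.05, 0.95)).astype(int)
support = 90 - 0.4 * age + 8 * prior_use + rng.normal(0, 10, size=n)

df = pd.DataFrame({"age": age, "prior_use": prior_use, "support": support})

m1 = smf.ols("support ~ age", data=df).fit()              # bivariate model
m2 = smf.ols("support ~ age + prior_use", data=df).fit()  # adds the control variable

print(m1.params["age"], m2.params["age"])
```

Comparing the age coefficient across the two models shows whether age still explains variation in support once previous marijuana use is held constant.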

Let’s consider a few additional, real-world examples of spuriousness. Did you know, for example, that high rates of ice cream sales have been shown to cause drowning? Of course, that’s not really true, but there is a positive relationship between the two. In this case, the third variable that causes both high ice cream sales and increased deaths by drowning is time of year, as the summer season sees increases in both (Babbie, 2010). [18]

Here’s another good one: it is true that as the salaries of Presbyterian ministers in Massachusetts rise, so too does the price of rum in Havana, Cuba. Well, duh, you might be saying to yourself. Everyone knows how much ministers in Massachusetts love their rum, right? Not so fast. Both salaries and rum prices have increased, true, but so has the price of just about everything else (Huff & Geis, 1993). [19]

Finally, research shows that the more firefighters present at a fire, the more damage is done at the scene. What this statement leaves out, of course, is that as the size of a fire increases so too does the amount of damage caused as does the number of firefighters called on to help (Frankfort-Nachmias & Leon-Guerrero, 2011). [20] In each of these examples, it is the presence of a confounding variable that explains the apparent relationship between the two original variables.
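
The same logic can be shown with a quick simulation. In the hedged sketch below (all numbers invented for illustration), warm weather drives both ice cream sales and drownings, so the two correlate strongly until temperature is statistically removed from each series.

```python
# Minimal sketch: a confounding variable (temperature) manufactures a correlation
# between two outcomes that do not cause each other.
import numpy as np

rng = np.random.default_rng(0)
months = 120
# temperature follows a yearly cycle over ten years of monthly data
temperature = 15 + 10 * np.sin(np.linspace(0, 2 * np.pi * 10, months)) + rng.normal(0, 2, months)
ice_cream_sales = 50 + 3 * temperature + rng.normal(0, 5, months)   # driven by heat
drownings = 2 + 0.3 * temperature + rng.normal(0, 1, months)        # also driven by heat

# The raw correlation looks impressive...
print(np.corrcoef(ice_cream_sales, drownings)[0, 1])

# ...but shrinks toward zero once temperature is regressed out of both series.
resid_ice = ice_cream_sales - np.poly1d(np.polyfit(temperature, ice_cream_sales, 1))(temperature)
resid_drown = drownings - np.poly1d(np.polyfit(temperature, drownings, 1))(temperature)
print(np.corrcoef(resid_ice, resid_drown)[0, 1])
```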

In sum, the following criteria must be met for a nomothetic causal relationship:

  • The two variables must vary together.
  • The relationship must be plausible.
  • The cause must precede the effect in time.
  • The relationship must be nonspurious (not due to a confounding variable).

The hypothetico-deductive method

The primary way that researchers in the positivist paradigm use theories is sometimes called the hypothetico-deductive method (although this term is much more likely to be used by philosophers of science than by scientists themselves). Researchers choose an existing theory. Then, they make a prediction about some new phenomenon that should be observed if the theory is correct. Again, this prediction is called a hypothesis. The researchers then conduct an empirical study to test the hypothesis. Finally, they reevaluate the theory in light of the new results and revise it if necessary.

This process is usually conceptualized as a cycle because the researchers can then derive a new hypothesis from the revised theory, conduct a new empirical study to test the hypothesis, and so on. As Figure 8.8 shows, this approach meshes nicely with the process of conducting a research project; together, they form a more detailed model of "theoretically motivated" or "theory-driven" research.

Keep in mind the hypothetico-deductive method is only one way of using social theory to inform social science research. It starts with describing one or more existing theories, deriving a hypothesis from one of those theories, testing your hypothesis in a new study, and finally reevaluating the theory based on the results of your data analyses. This format works well when there is an existing theory that addresses the research question, especially if the resulting hypothesis is surprising or conflicts with a hypothesis derived from a different theory.

But what if your research question is more interpretive? What if it is less about theory-testing and more about theory-building? This is what our next chapter covers: the process of inductively deriving theory from people’s stories and experiences. This process looks different than the one depicted in Figure 8.8. It still starts with your research question and answering that question by conducting a research study. But instead of testing a hypothesis you created based on a theory, you will create a theory of your own that explains the data you collected. This format works well for qualitative research questions and for research questions that existing theories do not address.

  • In positivist and quantitative studies, the goal is often to understand the more general causes of some phenomenon rather than the idiosyncrasies of one particular instance, as in an idiographic causal relationship.
  • Nomothetic causal explanations focus on objectivity, prediction, and generalization.
  • Criteria for nomothetic causal relationships require that the variables covary, that the relationship be plausible and nonspurious, and that the cause precede the effect in time.
  • In a nomothetic causal relationship, the independent variable causes changes in the dependent variable.
  • Hypotheses are statements, drawn from theory, which describe a researcher’s expectation about a relationship between two or more variables.
  • Write out your working question and hypothesis.
  • Defend your hypothesis in a short paragraph, using arguments based on the theory you identified in section 8.1.
  • Review the criteria for a nomothetic causal relationship. Critique your short paragraph about your hypothesis using these criteria.
  • Are there potentially confounding variables, issues with time order, or other problems you can identify in your reasoning?

8.3 Idiographic causal relationships

  • Define and provide an example of an idiographic causal explanation
  • Differentiate between idiographic and nomothetic causal relationships
  • Link idiographic and nomothetic causal relationships with the process of theory building and theory testing
  • Describe how idiographic and nomothetic causal explanations can be complementary

We began the previous section with a definition of causality, or the idea that “one event, behavior, or belief will result in the occurrence of another, subsequent event, behavior, or belief.” Then, we described one kind of causality: a simple cause-and-effect relationship supported by existing theory and research on the topic, also known as a nomothetic causal relationship. But what if there is not a lot of literature on your topic? What if your question is more exploratory than explanatory? Then, you need a different kind of causal explanation, one that accounts for the complexity of human interactions.

How can we build causal relationships if we are just describing or exploring a topic? Recall the definitions of exploratory research , descriptive research , and explanatory research from Chapter 2. Wouldn’t we need to do explanatory research to build any kind of causal explanation? Explanatory research attempts to establish nomothetic causal relationships: an independent variable is demonstrated to cause change in a dependent variable. Exploratory and descriptive qualitative research contains some causal relationships, but they are actually descriptions of the causal relationships established by the study participants.

What do idiographic causal explanations look like?

An idiographic causal relationship tries to identify the many, interrelated causes that account for the phenomenon the researcher is investigating. So, if idiographic causal explanations do not look like Figure 8.5, 8.6, or 8.7, what do they look like? Instead of saying “x causes y,” your participants will describe their experiences with “x,” which they will tell you was caused and influenced by a variety of other factors, as interpreted through their unique perspective, time, and environment. As we stated before, idiographic causal explanations are messy. Your job as a social science researcher is to accurately describe the patterns in what your participants tell you.

Let’s think about this using an example. If I asked you why you decided to become a social worker, what might you say? For me, I would say that I wanted to be a mental health clinician since I was in high school. I was interested in how people thought, and I was privileged enough to have psychology courses at my local high school. I thought I wanted to be a psychologist, but at my second internship in my undergraduate program, my supervisors advised me to become a social worker because the license provided greater authority for independent practice and flexibility for career change. Once I found out social workers were like psychologists who also raised trouble about social justice, I was hooked.

That’s not a simple explanation at all! But it’s definitely a causal explanation. It is my individual, subjective truth of a complex process. If we were to ask multiple social workers the same question, we might find out that many social workers begin their careers based on factors like personal experience with a disability or social injustice, positive experiences with social workers, or a desire to help others. No one factor is the “most important factor,” like with nomothetic causal relationships. Instead, a complex web of factors, contingent on context, emerge when you interpret what people tell you about their lives.

Understanding “why?”

In creating an idiographic explanation, you are still asking “why?” But the answer is going to be more complex. Those complexities are described in Table 8.1 as well as this short video comparing nomothetic and idiographic relationships .

Table 8.1: Comparing nomothetic and idiographic causal relationships
(In each row, the nomothetic characteristic is listed first and the idiographic characteristic second.)
  • Paradigm: Positivist vs. interpretivist
  • Purpose of research: Prediction and generalization vs. understanding and particularity
  • Reasoning: Deductive vs. inductive
  • Type of research: Explanatory vs. exploratory or descriptive
  • Research methods: Quantitative vs. qualitative
  • Causality: Simple cause and effect vs. complex, context-dependent, sometimes circular or contradictory
  • Role of theory: Theory testing vs. theory building

Remember our question from the last section, “Are you trying to generalize or nah?” If you answered nah (or no, like a normal person), you are trying to establish an idiographic causal explanation. The purpose of that explanation isn’t to predict the future or generalize to larger populations, but to describe the here-and-now as it is experienced by individuals within small groups and communities. Idiographic explanations are focused less on what is generally experienced by all people and more on the particularities of what specific individuals in a unique time and place experience.

Researchers seeking idiographic causal relationships are not trying to generalize or predict, so they have no need to reduce phenomena to mathematics. In fact, only examining things that can be counted can rob a causal relationship of its meaning and context. Instead, the goal of idiographic causal relationships is understanding, rather than prediction. Idiographic causal relationships are formed by interpreting people’s stories and experiences. Usually, these are expressed through words. Not all qualitative studies use word data, as some can use interpretations of visual or performance art. However, the vast majority of qualitative studies do use word data, like the transcripts from interviews and focus groups or documents like journal entries or meeting notes. Your participants are the experts on their lives—much like in social work practice—and as in practice, people’s experiences are embedded in their cultural, historical, and environmental context.

Idiographic causal explanations are powerful because they can describe the complicated and interconnected nature of human life. Nomothetic causal explanations, by comparison, are simplistic. Think about if someone asked you why you wanted to be a social worker. Your story might include a couple of vignettes from your education and early employment. It might include personal experience with the social welfare system or family traditions. Maybe you decided on a whim to enroll in a social work course during your graduate program. The impact of each of these events on your career is unique to you.

Idiographic causal explanations are concerned with individual stories, their idiosyncrasies, and the patterns that emerge when you collect and analyze multiple people’s stories. This is the inductive reasoning we discussed at the beginning of this chapter. Often, idiographic causal explanations begin by collecting a lot of qualitative data, whether through interviews, focus groups, or looking at available documents or cultural artifacts. Next, the researcher looks for patterns in the data and arrives at a tentative theory for how the key ideas in people’s stories are causally related.

Unlike nomothetic causal relationships, there are no formal criteria (e.g., covariation) for establishing causality in idiographic causal relationships. In fact, some criteria like temporality and nonspuriousness may be violated. For example, if an adolescent client says, “It’s hard for me to tell whether my depression began before my drinking, but both got worse when I was expelled from my first high school,” they are recognizing that it may not be so simple that one thing causes another. Sometimes, there is a reciprocal relationship where one variable (depression) impacts another (alcohol abuse), which then feeds back into the first variable (depression) and into other variables as well (school). Other criteria, such as covariation and plausibility, still make sense, as the relationships you highlight as part of your idiographic causal explanation should still be plausible and their elements should vary together.

Theory building and theory testing

As we learned in the previous section, nomothetic causal explanations are created by researchers applying deductive reasoning to their topic and creating hypotheses using social science theories. Much of what we think of as social science is based on this hypothetico-deductive method, but this leaves out the other half of the equation. Where do theories come from? Are they all just revisions of one another? How do any new ideas enter social science?

Through inductive reasoning and idiographic causal explanations!

Let’s consider a social work example. If you plan to study domestic and sexual violence, you will likely encounter the Power and Control Wheel, also known as the Duluth Model (Figure 8.9). The wheel is a model designed to depict the process of domestic violence. The wheel was developed based on qualitative focus groups conducted by sexual and domestic violence advocates in Duluth, MN. This video explains more about the Duluth Model of domestic abuse.

Figure 8.9: The Power and Control Wheel, indicating the tactics abusers use to maintain power and control

The Power and Control Wheel is an example of what an idiographic causal relationship looks like. By contrast, look back at the previous section’s Figures 8.5, 8.6, and 8.7 on nomothetic causal relationships between independent and dependent variables. See how much more complex idiographic causal explanations are?! They are complex, but not difficult to understand. At the center of domestic abuse is power and control, and while not every abuser would say that is what they were doing, that is the understanding of the survivors who informed this theoretical model. That power and control is maintained through a variety of abusive tactics, from social isolation to use of privilege to avoid consequences.

What about the role of hypotheses in idiographic causal explanations? In nomothetic causal explanations, researchers create hypotheses using existing theory and then test them for accuracy. Hypotheses in idiographic causality are much more tentative and are probably best considered as “hunches” about what the researcher thinks might be true. Importantly, they might indicate the researcher’s prior knowledge and biases before the project begins, but the goal of idiographic research is to let your participants guide you rather than existing social work knowledge. Continuing with our Duluth Model example, advocates likely had some tentative hypotheses about what was important in a relationship with domestic violence. After all, they had worked with this population for years prior to the creation of the model. However, it was the stories of the participants in these focus groups that led to the Power and Control Wheel explanation for domestic abuse.

As qualitative inquiry unfolds, hypotheses and hunches are likely to emerge and shift as researchers learn from what their participants share. Because the participants are the experts in idiographic causal relationships, a researcher should be open to emerging topics and shift their research questions and hypotheses accordingly. This is in contrast to hypotheses in quantitative research, which remain constant throughout the study and are shown to be true or false.

Over time, as more qualitative studies are done and patterns emerge across different studies and locations, more sophisticated theories emerge that explain phenomena across multiple contexts. Once a theory is developed from qualitative studies, a quantitative researcher can seek to test that theory. For example, a quantitative researcher may hypothesize that men who hold traditional gender roles are more likely to engage in domestic violence. That would make sense based on the Power and Control Wheel model, as the category of “using male privilege” speaks to this relationship. In this way, qualitatively-derived theory can inspire a hypothesis for a quantitative research project, as we will explore in the next section.

If idiographic and nomothetic still seem like obscure philosophy terms, let’s consider another example. Imagine you are working for a community-based non-profit agency serving people with disabilities. You are putting together a report to lobby the state government for additional funding for community support programs. As part of that lobbying, you are likely to rely on both nomothetic and idiographic causal relationships.

If you looked at nomothetic causal relationships, you might learn how previous studies have shown that, in general, community-based programs like yours are linked with better health and employment outcomes for people with disabilities. Nomothetic causal explanations seek to establish that community-based programs are better for everyone with disabilities, including people in your community.

If you looked at idiographic causal explanations, you would use stories and experiences of people in community-based programs. These individual stories are full of detail about the lived experience of being in a community-based program. You might use one story from a client in your lobbying campaign, so policymakers can understand the lived experience of what it’s like to be a person with a disability in this program. For example, a client who said “I feel at home when I’m at this agency because they treat me like a family member,” or “this is the agency that helped me get my first paycheck,” can communicate richer, more complex causal relationships.

Neither kind of causal explanation is better than the other. A decision to seek idiographic causal explanations means that you will attempt to explain or describe your phenomenon exhaustively, attending to cultural context and subjective interpretations. A decision to seek nomothetic causal explanations, on the other hand, means that you will try to explain what is true for everyone and predict what will be true in the future. In short, idiographic explanations have greater depth, and nomothetic explanations have greater breadth.

Most importantly, social workers understand the value of both approaches to understanding the social world. A social worker helping a client with substance abuse issues seeks idiographic explanations when they ask about that client’s life story, investigate their unique physical environment, or explore their family relationships. At the same time, a social worker also uses nomothetic explanations to guide their interventions. Nomothetic explanations may help guide them to minimize risk factors and maximize protective factors or to use an evidence-based therapy, relying on knowledge about what, in general, helps people with substance abuse issues.

So, which approach speaks to you? Are you interested in learning about (a) a few people’s experiences in a great deal of depth, or (b) a lot of people’s experiences more superficially, while also hoping your findings can be generalized to a greater number of people? The answer to this question will drive your research question and project. These approaches provide different types of information and both types are valuable.

  • Idiographic causal explanations focus on subjectivity, context, and meaning.
  • Idiographic causal explanations are best suited to exploratory research questions and qualitative methods.
  • Idiographic causal explanations are used to create new theories in social science.
  • Explore the literature on the theory you identified in section 8.1.
  • Read about the origins of your theory. Who developed it and from what data?
  • See if you can find a figure like Figure 8.9 in an article or book chapter that depicts the key concepts in your theory and how those concepts are related to one another causally. Write out a short statement on the causal relationships contained in the figure.

8.4 Mixed methods research

  • Define sequence and emphasis and describe how they work in mixed methods research
  • List five reasons why researchers use mixed methods

As we discussed in the previous sections, while we contrast idiographic vs. nomothetic causality or inductive vs. deductive reasoning, the truth is that researchers combine both of these approaches when they conduct research. While these processes can occur in any kind of study, mixed methods research is an excellent example of how researchers use both approaches to logic and reasoning to improve understanding of a given topic.

So far in this textbook, we have talked about quantitative and qualitative methods as an either/or choice: you can choose quantitative methods or qualitative methods. However, researchers often use both quantitative and qualitative methods within their research projects. This is called mixed methods research.

For example, I recently completed a study with administrators of state-level services for people with intellectual and developmental disabilities. They implemented a program called self-direction, which allows people with disabilities greater self-determination over their supports. In this study, my research partners and I used a mixed methods approach to describe the implementation of self-direction across the United States. We distributed a short, electronic questionnaire and conducted phone interviews with program administrators. While we could have just sent out a questionnaire that asked states to provide basic information on their program (size, qualifications, services offered, etc.), that would not provide us much information about some of the issues administrators faced during program implementation. Similarly, we could have interviewed program administrators without the questionnaire, but then we wouldn’t know enough about the programs to ask good questions. Instead, we chose to use both qualitative and quantitative methods to provide a more comprehensive picture of program implementation. 

Sequence and emphasis

There are many different mixed methods designs, each with its own strengths and limitations (see Creswell & Clark, 2017 [21] for a more thorough introduction). However, a more simplified synthesis of mixed methods approaches is provided by Engel and Schutt (2016) [22] using two key terms. Sequence refers to the order in which each method is used: researchers can use both methods at the same time (concurrently) or one after the other (sequentially).

For our study of self-direction, we used a sequential design by sending out a questionnaire first, conducting some analysis, and then conducting the interviews. We used the quantitative questionnaire to gather basic information about the programs before we began the interviews, so our questions were specific to the features of each program. If we wanted to use a concurrent design for some reason, we could have asked quantitative questions during the interview. However, we felt this would waste the administrators’ time looking up information and would break up the rhythm of the interviews.

The other key term in mixed methods research is emphasis, the priority given to the quantitative or qualitative component of the study. In our mixed methods study, the qualitative data was the most important data. The quantitative data was mainly used to provide background information for the qualitative interviews, and our study write-up focused mostly on the qualitative information. Thus, qualitative methods were prioritized in our study. Of course, many other studies emphasize quantitative methods over qualitative methods. In these studies, qualitative data is used mainly to provide context for the quantitative findings.

For example, demonstrating quantitatively that a particular therapy works is important. By adding a qualitative component, researchers could find out how the participants experienced the intervention, how clients understood the therapy’s effects, and the meaning the therapy had on their lives. This data would add depth and context to the findings and allow researchers to improve the therapeutic technique in the future.

A similar practice is when researchers use qualitative methods to solicit feedback on a quantitative scale or measure. The experiences of individuals allow researchers to refine the measure before they conduct the quantitative component of their study. Finally, it is possible that researchers are equally interested in qualitative and quantitative information. In studies of equal emphasis , researchers consider both methods as the focus of the research project.

Why researchers use mixed methods

Mixed methods research is more than just sticking an open-ended question at the end of a quantitative survey. Mixed methods researchers use mixed methods for both pragmatic and synergistic reasons. That is, they use both methods because doing so makes sense with their research questions and because combining the two approaches yields more complete answers than either could provide alone.

Mixed methods also allow you to use both inductive and deductive reasoning. As we’ve discussed, qualitative research follows inductive logic, moving from data to empirical generalizations or theory. In a mixed methods study, a researcher could use the results from a qualitative component to create a theory that could be tested in a subsequent quantitative component. The quantitative component would use deductive logic, using the theory derived from qualitative data to create and test a hypothesis. In this way, mixed methods use the strengths of both inductive and deductive reasoning: quantitative methods allow the researcher to test existing ideas, while qualitative methods allow the researcher to create new ideas.

With these two concepts in mind, we can start to see why researchers use mixed methods in the real world. I mentioned previously that our research project used a sequential design because we wanted to use our quantitative data to shape what qualitative questions we asked our participants. Mixed methods are often used this way, to generate ideas with one method that are then studied with another. For example, researchers could begin a mixed methods project by using qualitative methods to interview or conduct a focus group with participants. Based on their responses, the researchers could then formulate a survey to give out to a larger group of people to see how common the themes from the focus groups were. This is the inverse of what we did in our project, which was to use a quantitative survey to inform a more detailed qualitative interview.

In addition to providing information for subsequent investigation, using both quantitative and qualitative information provides additional context for the data. For example, in our questionnaire for the study on self-direction, we asked participants to list what services people could purchase. The qualitative data followed up on that answer by asking whether the administrators had added or taken away any services, how they decided that these services would be covered and not others, and what problems arose around providing these services. With that information, we could analyze what services were offered, why they were offered, and how administrators made those decisions. In this way, we learned the lived experience of program administrators, not just the basic information about their programs.

Finally, another purpose of mixed methods research is to corroborate data from both quantitative and qualitative sources. Ideally, your qualitative and quantitative results should support each other. For example, if interviews with participants showed a relationship between two concepts, that relationship should also be present in the quantitative data you collected. Differences between quantitative and qualitative data require an explanation. Perhaps there are outliers or extreme cases that pushed your data in one direction, for example.

In summary, these are a few of the many reasons researchers use mixed methods. They are summarized below:

  • Triangulation, or convergence on the same phenomenon to improve validity
  • Complementarity, which aims to get at related but different facets of a phenomenon
  • Development, or the use of results from one phase of a study to develop another phase
  • Initiation, or the intentional analysis of inconsistent qualitative and quantitative findings to derive new insights
  • Expansion, or using multiple components to extend the scope of a study (Burnett, 2012, p. 77). [23]

A word of caution

The use of mixed methods has many advantages. However, undergraduate researchers should approach mixed methods with caution. Conducting a mixed methods study may mean doubling or even tripling your work. You must conceptualize a component using quantitative methods, another using qualitative methods, and think about how they fit together. This may mean creating a questionnaire, then writing an interview guide, and thinking through how the data on each measure relate to one another—more work than using one quantitative or qualitative method alone. Similarly, in sequential studies, the researcher must collect and analyze data from one component and then conceptualize and conduct the second component. This may also impact how long a project may take. Before beginning a mixed methods project, you should have a clear vision for what the project will entail and how each methodology will contribute to that vision. Always remember that you should make your project feasible enough for you to conduct with the time, money, and other resources you have at your disposal right now.

  • Mixed methods studies vary in sequence and emphasis.
  • Mixed methods allow the researcher to corroborate findings, provide context, follow up on ideas, and use the strengths of each method.
  • Look at the literature on your topic, and see if you can find a study that uses mixed methods.
  • Describe the sequence and emphasis that the researchers place on the quantitative and qualitative components.
  • Identify why the researchers used mixed methods and how the project would have been different had researchers used only one (qualitative or quantitative) component.

Media Attributions

  • Inductive reasoning © Blackstone, A. is licensed under a CC BY-NC-SA (Attribution NonCommercial ShareAlike) license
  • Figure 6.2: Deductive reasoning © Blackstone, A. is licensed under a CC BY-NC-SA (Attribution NonCommercial ShareAlike) license
  • 40 © Blackstone, A. is licensed under a CC BY-NC-SA (Attribution NonCommercial ShareAlike) license
  • 41 © Blackstone, A. is licensed under a CC BY-NC-SA (Attribution NonCommercial ShareAlike) license
  • 46 © DeCarlo, M. is licensed under a CC BY-NC-SA (Attribution NonCommercial ShareAlike) license
  • Hypothesis describing the expected relationship between sex and sexual harassment © Blackstone, A. is licensed under a CC BY-NC-SA (Attribution NonCommercial ShareAlike) license
  • Directional hypothesis © Blackstone, A. is licensed under a CC BY-NC-SA (Attribution NonCommercial ShareAlike) license
  • correlation © Munroe, R. is licensed under a CC BY-NC (Attribution NonCommercial) license
  • 4.4 © Rajiv S. Jhangiani, I-Chant A. Chiang, Carrie Cuttler, & Dana C. Leighton is licensed under a CC BY-NC-SA (Attribution NonCommercial ShareAlike) license
  • Power_and_control_wheel © Carole Henson is licensed under a CC BY-SA (Attribution ShareAlike) license
  • Allen, K. R., Kaestle, C. E., & Goldberg, A. E. (2011). More than just a punctuation mark: How boys and young men learn about menstruation.  Journal of Family Issues, 32 , 129–156. ↵
  • Ferguson, K. M., Kim, M. A., & McCoy, S. (2011). Enhancing empowerment and leadership among homeless youth in agency and community settings: A grounded theory approach.  Child and Adolescent Social Work Journal, 28 , 1–22 ↵
  • King, R. D., Messner, S. F., & Baller, R. D. (2009). Contemporary hate crimes, law enforcement, and the legacy of racial violence.  American   Sociological Review, 74 , 291–315. ↵
  • Milkie, M. A., & Warner, C. H. (2011). Classroom learning environments and the mental health of first grade children. Journal of Health and   Social Behavior, 52 , 4–22. ↵
  • The American Sociological Association wrote a press release on Milkie and Warner’s findings: American Sociological Association. (2011). Study: Negative classroom environment adversely affects children’s mental health. Retrieved from: https://www.sciencedaily.com/releases/2011/03/110309073717.htm ↵
  • Uggen, C., & Blackstone, A. (2004). Sexual harassment as a gendered expression of power.  American  Sociological Review, 69 , 64–92. ↵
  • Blackstone, A., Houle, J., & Uggen, C. “At the time I thought it was great”: Age, experience, and workers’ perceptions of sexual harassment. Presented at the 2006 meetings of the American Sociological Association. ↵
  • Blackstone, A. (2012). Inductive or deductive? Two different approaches.  Principles of sociological inquiry: Qualitative and quantitative methods. Saylor Foundation. ↵
  • Schutt, R. K. (2006).  Investigating the social world: The process and practice of research . Thousand Oaks, CA: Pine Forge Press. ↵
  • Sherman, L. W., & Berk, R. A. (1984). The specific deterrent effects of arrest for domestic assault. American Sociological Review, 49 , 261–272. ↵
  • Williams, K. R. (2005). Arrest and intimate partner violence: Toward a more complete application of deterrence theory.  Aggression and Violent Behavior ,  10 (6), 660-679. ↵
  • Policastro, C., & Payne, B. K. (2013). The blameworthy victim: Domestic violence myths and the criminalization of victimhood.  Journal of Aggression, Maltreatment & Trauma ,  22 (4), 329-347. ↵
  • Berk, R., Campbell, A., Klap, R., & Western, B. (1992). The deterrent effect of arrest in incidents of domestic violence: A Bayesian analysis of four field experiments. American Sociological Review, 57, 698–708; Pate, A., & Hamilton, E. (1992). Formal and informal deterrents to domestic violence: The Dade county spouse assault experiment. American Sociological Review, 57, 691–697; Sherman, L., & Smith, D. (1992). Crime, punishment, and stake in conformity: Legal and informal control of domestic violence. American Sociological Review, 57, 680–690. ↵
  • Taylor, B. G., Davis, R. C., & Maxwell, C. D. (2001). The effects of a group batterer treatment program: A randomized experiment in Brooklyn. Justice Quarterly, 18(1), 171-201. ↵
  • Wagner III, W. E., & Gillespie, B. J. (2018).  Using and interpreting statistics in the social, behavioral, and health sciences . SAGE Publications. ↵
  • In fact, there are empirical data that support this hypothesis. Gallup has conducted research on this very question since the 1960s. For more on their findings, see Carroll, J. (2005). Who supports marijuana legalization? Retrieved from http://www.gallup.com/poll/19561/who-supports-marijuana-legalization.aspx ↵
  • Babbie, E. (2010).  The practice of social research (12th ed.) . Belmont, CA: Wadsworth. ↵
  • Huff, D. & Geis, I. (1993).  How to lie with statistics . New York, NY: W. W. Norton & Co. ↵
  • Frankfort-Nachmias, C. & Leon-Guerrero, A. (2011).  Social statistics for a diverse society . Washington, DC: Pine Forge Press. ↵
  • Creswell, J. W., & Clark, V. L. P. (2017).  Designing and conducting mixed methods research . Sage publications. ↵
  • Engel, R. J. & Schutt, R. K. (2016).  The practice of research in social work (4th ed.) . Washington, DC: SAGE Publishing. ↵
  • Burnett, D. (2012). Inscribing knowledge: Writing research in social work. In W. Green & B. L. Simon (Eds.), The Columbia guide to social work writing  (pp. 65-82). New York, NY: Columbia University Press. ↵

when a researcher starts with a set of observations and then moves from particular experiences to a more general set of propositions about those experiences

starts by reading existing theories, then testing hypotheses and revising or confirming the theory

a statement describing a researcher’s expectation regarding what they anticipate finding

when researchers use both quantitative and qualitative methods in a project

the idea that one event, behavior, or belief will result in the occurrence of another, subsequent event, behavior, or belief

provides a more general, sweeping explanation that is universally true for all people

(as in generalization) to make claims about a large population based on a smaller sample of people or items

“a logical grouping of attributes that can be observed and measured and is expected to vary from person to person in a population” (Gillespie & Wagner, 2018, p. 9)

causes a change in the dependent variable

a variable that depends on changes in the independent variable

Occurs when two variables move together in the same direction - as one increases, so does the other, or, as one decreases, so does the other

occurs when two variables change in opposite directions - one goes up, the other goes down and vice versa

when the values of two variables change at the same time

as a criterion for a causal relationship, the relationship must make logical sense and seem possible

as a criterion for a causal relationship, the cause must come before the effect

when a relationship between two variables appears to be causal but can in fact be explained by influence of a third variable

a variable whose influence makes it difficult to understand the relationship between an independent and dependent variable

a confounding variable whose effects are accounted for mathematically in quantitative analysis to isolate the relationship between an independent and dependent variable

A cyclical process of theory development, starting with an observed phenomenon, then developing or using a theory to make a specific prediction of what should happen if that theory is correct, testing that prediction, refining the theory in light of the findings, and using that refined theory to develop new hypotheses, and so on.

conducted during the early stages of a project, usually when a researcher wants to test the feasibility of conducting a more extensive study or if the topic has not been studied in the past

research that describes or defines a particular phenomenon

explains why particular phenomena work in the way that they do; answers “why” questions

attempts to explain or describe your phenomenon exhaustively, based on the subjective understandings of your participants

in mixed methods research, this refers to the order in which each method is used, either concurrently or sequentially

in mixed methods research, this refers to the priority given to the quantitative or qualitative component of a study

Graduate research methods in social work Copyright © 2021 by Matthew DeCarlo, Cory Cummings, Kate Agnelli is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.

Frequently asked questions

How is inductive reasoning used in research?

In inductive research , you start by making observations or gathering data. Then, you take a broad scan of your data and search for patterns. Finally, you make general conclusions that you might incorporate into theories.

Frequently asked questions: Methodology

Attrition refers to participants leaving a study. It always happens to some extent—for example, in randomized controlled trials for medical research.

Differential attrition occurs when attrition or dropout rates differ systematically between the intervention and the control group . As a result, the characteristics of the participants who drop out differ from the characteristics of those who stay in the study. Because of this, study results may be biased .

Action research is conducted in order to solve a particular issue immediately, while case studies are often conducted over a longer period of time and focus more on observing and analyzing a particular ongoing phenomenon.

Action research is focused on solving a problem or informing individual and community-based knowledge in a way that impacts teaching, learning, and other related processes. It is less focused on contributing theoretical input, instead producing actionable input.

Action research is particularly popular with educators as a form of systematic inquiry because it prioritizes reflection and bridges the gap between theory and practice. Educators are able to simultaneously investigate an issue as they solve it, and the method is very iterative and flexible.

A cycle of inquiry is another name for action research . It is usually visualized in a spiral shape following a series of steps, such as “planning → acting → observing → reflecting.”

To make quantitative observations , you need to use instruments that are capable of measuring the quantity you want to observe. For example, you might use a ruler to measure the length of an object or a thermometer to measure its temperature.

Criterion validity and construct validity are both types of measurement validity . In other words, they both show you how accurately a method measures something.

While construct validity is the degree to which a test or other measurement method measures what it claims to measure, criterion validity is the degree to which a test can predictively (in the future) or concurrently (in the present) measure something.

Construct validity is often considered the overarching type of measurement validity . You need to have face validity , content validity , and criterion validity in order to achieve construct validity.

Convergent validity and discriminant validity are both subtypes of construct validity . Together, they help you evaluate whether a test measures the concept it was designed to measure.

  • Convergent validity indicates whether a test that is designed to measure a particular construct correlates with other tests that assess the same or similar construct.
  • Discriminant validity indicates whether two tests that should not be highly related to each other are indeed not related. This type of validity is also called divergent validity .

You need to assess both in order to demonstrate construct validity. Neither one alone is sufficient for establishing construct validity.

Content validity shows you how accurately a test or other measurement method taps  into the various aspects of the specific construct you are researching.

In other words, it helps you answer the question: “does the test measure all aspects of the construct I want to measure?” If it does, then the test has high content validity.

The higher the content validity, the more accurate the measurement of the construct.

If the test fails to include parts of the construct, or irrelevant parts are included, the validity of the instrument is threatened, which brings your results into question.

Face validity and content validity are similar in that they both evaluate how suitable the content of a test is. The difference is that face validity is subjective, and assesses content at surface level.

When a test has strong face validity, anyone would agree that the test’s questions appear to measure what they are intended to measure.

For example, looking at a 4th grade math test consisting of problems in which students have to add and multiply, most people would agree that it has strong face validity (i.e., it looks like a math test).

On the other hand, content validity evaluates how well a test represents all the aspects of a topic. Assessing content validity is more systematic and relies on expert evaluation of each question, analyzing whether each one covers the aspects that the test was designed to cover.

A 4th grade math test would have high content validity if it covered all the skills taught in that grade. Experts (in this case, math teachers) would have to evaluate the content validity by comparing the test to the learning objectives.

Snowball sampling is a non-probability sampling method . Unlike probability sampling (which involves some form of random selection ), the initial individuals selected to be studied are the ones who recruit new participants.

Because not every member of the target population has an equal chance of being recruited into the sample, selection in snowball sampling is non-random.

Snowball sampling is a non-probability sampling method , where there is not an equal chance for every member of the population to be included in the sample .

This means that you cannot use inferential statistics and make generalizations —often the goal of quantitative research . As such, a snowball sample is not representative of the target population and is usually a better fit for qualitative research .

Snowball sampling relies on the use of referrals. Here, the researcher recruits one or more initial participants, who then recruit the next ones.

Participants share similar characteristics and/or know each other. Because of this, not every member of the population has an equal chance of being included in the sample, giving rise to sampling bias .

Snowball sampling is best used in the following cases:

  • If there is no sampling frame available (e.g., people with a rare disease)
  • If the population of interest is hard to access or locate (e.g., people experiencing homelessness)
  • If the research focuses on a sensitive topic (e.g., extramarital affairs)

The reproducibility and replicability of a study can be ensured by writing a transparent, detailed method section and using clear, unambiguous language.

Reproducibility and replicability are related terms.

  • Reproducing research entails reanalyzing the existing data in the same manner.
  • Replicating (or repeating) the research entails re-conducting the entire study, including the collection of new data.
  • A successful reproduction shows that the data analyses were conducted in a fair and honest manner.
  • A successful replication shows that the reliability of the results is high.

Stratified sampling and quota sampling both involve dividing the population into subgroups and selecting units from each subgroup. The purpose in both cases is to select a representative sample and/or to allow comparisons between subgroups.

The main difference is that in stratified sampling, you draw a random sample from each subgroup ( probability sampling ). In quota sampling you select a predetermined number or proportion of units, in a non-random manner ( non-probability sampling ).

Purposive and convenience sampling are both sampling methods that are typically used in qualitative data collection.

A convenience sample is drawn from a source that is conveniently accessible to the researcher. Convenience sampling does not distinguish characteristics among the participants. On the other hand, purposive sampling focuses on selecting participants possessing characteristics associated with the research study.

The findings of studies based on either convenience or purposive sampling can only be generalized to the (sub)population from which the sample is drawn, and not to the entire population.

Random sampling or probability sampling is based on random selection. This means that each unit has an equal chance (i.e., equal probability) of being included in the sample.

On the other hand, convenience sampling involves recruiting whoever happens to be available, which means that not everyone has an equal chance of being selected; selection depends on the place, time, or day you are collecting your data.

Convenience sampling and quota sampling are both non-probability sampling methods. They both use non-random criteria like availability, geographical proximity, or expert knowledge to recruit study participants.

However, in convenience sampling, you continue to sample units or cases until you reach the required sample size.

In quota sampling, you first need to divide your population of interest into subgroups (strata) and estimate their proportions (quota) in the population. Then you can start your data collection, using convenience sampling to recruit participants, until the proportions in each subgroup coincide with the estimated proportions in the population.
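
As a rough illustration of the quota-filling logic described above, here is a short, hedged Python sketch; the subgroups, quotas, and the convenience "stream" of recruits are all hypothetical.

```python
# Minimal sketch: accept conveniently available people only while their subgroup's quota is open.
from collections import Counter

def quota_sample(stream, quotas):
    """Accept people from `stream` (subgroup labels) until every quota is filled;
    extra members of an already-full subgroup are skipped."""
    counts = Counter()
    sample = []
    for person in stream:
        if counts[person] < quotas.get(person, 0):
            counts[person] += 1
            sample.append(person)
        if all(counts[group] >= quota for group, quota in quotas.items()):
            break
    return sample

# e.g., a 60/40 quota estimated from the population, filled from a convenience stream
print(quota_sample(["A", "B", "A", "A", "B", "A", "B"], {"A": 3, "B": 2}))
```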

A sampling frame is a list of every member in the entire population . It is important that the sampling frame is as complete as possible, so that your sample accurately reflects your population.

Stratified and cluster sampling may look similar, but bear in mind that groups created in cluster sampling are heterogeneous , so the individual characteristics in the cluster vary. In contrast, groups created in stratified sampling are homogeneous , as units share characteristics.

Relatedly, in cluster sampling you randomly select entire groups and include all units of each group in your sample. However, in stratified sampling, you select some units of all groups and include them in your sample. In this way, both methods can ensure that your sample is representative of the target population .

A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.

The key difference between observational studies and experimental designs is that a well-done observational study does not influence the responses of participants, while experiments do have some sort of treatment condition applied to at least some participants by random assignment .

An observational study is a great choice for you if your research question is based purely on observations. If there are ethical, logistical, or practical concerns that prevent you from conducting a traditional experiment , an observational study may be a good choice. In an observational study, there is no interference or manipulation of the research subjects, as well as no control or treatment groups .

It’s often best to ask a variety of people to review your measurements. You can ask experts, such as other researchers, or laypeople, such as potential participants, to judge the face validity of tests.

While experts have a deep understanding of research methods , the people you’re studying can provide you with valuable insights you may have missed otherwise.

Face validity is important because it’s a simple first step to measuring the overall validity of a test or technique. It’s a relatively intuitive, quick, and easy way to start checking whether a new measure seems useful at first glance.

Good face validity means that anyone who reviews your measure says that it seems to be measuring what it’s supposed to. With poor face validity, someone reviewing your measure may be left confused about what you’re measuring and why you’re using this method.

Face validity is about whether a test appears to measure what it’s supposed to measure. This type of validity is concerned with whether a measure seems relevant and appropriate for what it’s assessing only on the surface.

Statistical analyses are often applied to test validity with data from your measures. You test convergent validity and discriminant validity with correlations to see if results from your test are positively or negatively related to those of other established tests.

You can also use regression analyses to assess whether your measure is actually predictive of outcomes that you expect it to predict theoretically. A regression analysis that supports your expectations strengthens your claim of construct validity .
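
For example, here is a minimal Python sketch with simulated, hypothetical scores showing the correlation checks described above: a new scale should correlate strongly with an established measure of the same construct (convergent validity) and only weakly with a measure of a distinct construct (discriminant validity).

```python
# Minimal sketch: correlation checks for convergent and discriminant validity.
import numpy as np

rng = np.random.default_rng(1)
n = 200
true_construct = rng.normal(0, 1, n)            # the underlying trait both scales try to capture

new_scale = true_construct + rng.normal(0, 0.5, n)           # your new measure (hypothetical)
established_scale = true_construct + rng.normal(0, 0.5, n)   # a validated measure of the same construct
unrelated_scale = rng.normal(0, 1, n)                        # a measure of a distinct construct

print("convergent:", round(np.corrcoef(new_scale, established_scale)[0, 1], 2))  # should be high
print("discriminant:", round(np.corrcoef(new_scale, unrelated_scale)[0, 1], 2))  # should be near zero
```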

When designing or evaluating a measure, construct validity helps you ensure you’re actually measuring the construct you’re interested in. If you don’t have construct validity, you may inadvertently measure unrelated or distinct constructs and lose precision in your research.

Construct validity is often considered the overarching type of measurement validity ,  because it covers all of the other types. You need to have face validity , content validity , and criterion validity to achieve construct validity.

Construct validity is about how well a test measures the concept it was designed to evaluate. It’s one of four types of measurement validity , which includes construct validity, face validity , and criterion validity.

There are two subtypes of construct validity.

  • Convergent validity : The extent to which your measure corresponds to measures of related constructs
  • Discriminant validity : The extent to which your measure is unrelated or negatively related to measures of distinct constructs

Naturalistic observation is a valuable tool because of its flexibility, external validity , and suitability for topics that can’t be studied in a lab setting.

The downsides of naturalistic observation include its lack of scientific control , ethical considerations , and potential for bias from observers and subjects.

Naturalistic observation is a qualitative research method where you record the behaviors of your research subjects in real world settings. You avoid interfering or influencing anything in a naturalistic observation.

You can think of naturalistic observation as “people watching” with a purpose.

A dependent variable is what changes as a result of the independent variable manipulation in experiments . It’s what you’re interested in measuring, and it “depends” on your independent variable.

In statistics, dependent variables are also called:

  • Response variables (they respond to a change in another variable)
  • Outcome variables (they represent the outcome you want to measure)
  • Left-hand-side variables (they appear on the left-hand side of a regression equation)

An independent variable is the variable you manipulate, control, or vary in an experimental study to explore its effects. It’s called “independent” because it’s not influenced by any other variables in the study.

Independent variables are also called:

  • Explanatory variables (they explain an event or outcome)
  • Predictor variables (they can be used to predict the value of a dependent variable)
  • Right-hand-side variables (they appear on the right-hand side of a regression equation).
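As a minimal sketch (with hypothetical, simulated data), the dependent variable sits on the left-hand side and the independent variables on the right-hand side when a regression is fitted:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: two independent (right-hand-side) variables
# and one dependent (left-hand-side / outcome) variable.
study_hours = rng.uniform(0, 10, 100)
sleep_hours = rng.uniform(4, 9, 100)
exam_score = 5 * study_hours + 2 * sleep_hours + rng.normal(0, 5, 100)

# y = X @ beta: the dependent variable is on the left of the equation,
# the independent (predictor) variables on the right.
X = np.column_stack([np.ones_like(study_hours), study_hours, sleep_hours])
beta, *_ = np.linalg.lstsq(X, exam_score, rcond=None)

print("intercept, study effect, sleep effect:", np.round(beta, 2))
```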

As a rule of thumb, questions related to thoughts, beliefs, and feelings work well in focus groups. Take your time formulating strong questions, paying special attention to phrasing. Be careful to avoid leading questions , which can bias your responses.

Overall, your focus group questions should be:

  • Open-ended and flexible
  • Impossible to answer with “yes” or “no” (questions that start with “why” or “how” are often best)
  • Unambiguous, getting straight to the point while still stimulating discussion
  • Unbiased and neutral

A structured interview is a data collection method that relies on asking questions in a set order to collect data on a topic. They are often quantitative in nature. Structured interviews are best used when: 

  • You already have a very clear understanding of your topic. Perhaps significant research has already been conducted, or you have done some prior research yourself, so you already possess a baseline for designing strong structured questions.
  • You are constrained in terms of time or resources and need to analyze your data quickly and efficiently.
  • Your research question depends on strong parity between participants, with environmental conditions held constant.

More flexible interview options include semi-structured interviews , unstructured interviews , and focus groups .

Social desirability bias is the tendency for interview participants to give responses that will be viewed favorably by the interviewer or other participants. It occurs in all types of interviews and surveys , but is most common in semi-structured interviews , unstructured interviews , and focus groups .

Social desirability bias can be mitigated by ensuring participants feel at ease and comfortable sharing their views. Make sure to pay attention to your own body language and any physical or verbal cues, such as nodding or widening your eyes.

This type of bias can also occur in observations if the participants know they’re being observed. They might alter their behavior accordingly.

The interviewer effect is a type of bias that emerges when a characteristic of an interviewer (race, age, gender identity, etc.) influences the responses given by the interviewee.

There is a risk of an interviewer effect in all types of interviews , but it can be mitigated by writing really high-quality interview questions.

A semi-structured interview is a blend of structured and unstructured types of interviews. Semi-structured interviews are best used when:

  • You have prior interview experience. Spontaneous questions are deceptively challenging, and it’s easy to accidentally ask a leading question or make a participant uncomfortable.
  • Your research question is exploratory in nature. Participant answers can guide future research questions and help you develop a more robust knowledge base for future research.

An unstructured interview is the most flexible type of interview, but it is not always the best fit for your research topic.

Unstructured interviews are best used when:

  • You are an experienced interviewer and have a very strong background in your research topic, since it is challenging to ask spontaneous, colloquial questions.
  • Your research question is exploratory in nature. While you may have developed hypotheses, you are open to discovering new or shifting viewpoints through the interview process.
  • You are seeking descriptive data, and are ready to ask questions that will deepen and contextualize your initial thoughts and hypotheses.
  • Your research depends on forming connections with your participants and making them feel comfortable revealing deeper emotions, lived experiences, or thoughts.

The four most common types of interviews are:

  • Structured interviews : The questions are predetermined in both topic and order. 
  • Semi-structured interviews : A few questions are predetermined, but other questions aren’t planned.
  • Unstructured interviews : None of the questions are predetermined.
  • Focus group interviews : The questions are presented to a group instead of one individual.

Deductive reasoning is commonly used in scientific research, and it’s especially associated with quantitative research .

In research, you might have come across something called the hypothetico-deductive method . It’s the scientific method of testing hypotheses to check whether your predictions are substantiated by real-world data.

Deductive reasoning is a logical approach where you progress from general ideas to specific conclusions. It’s often contrasted with inductive reasoning , where you start with specific observations and form general conclusions.

Deductive reasoning is also called deductive logic.

There are many different types of inductive reasoning that people use formally or informally.

Here are a few common types:

  • Inductive generalization : You use observations about a sample to come to a conclusion about the population it came from.
  • Statistical generalization: You use specific numbers about samples to make statements about populations.
  • Causal reasoning: You make cause-and-effect links between different things.
  • Sign reasoning: You make a conclusion about a correlational relationship between different things.
  • Analogical reasoning: You make a conclusion about something based on its similarities to something else.

Inductive reasoning is a bottom-up approach, while deductive reasoning is top-down.

Inductive reasoning takes you from the specific to the general, while in deductive reasoning, you make inferences by going from general premises to specific conclusions.

Inductive reasoning is a method of drawing conclusions by going from the specific to the general. It’s usually contrasted with deductive reasoning, where you proceed from general information to specific conclusions.

Inductive reasoning is also called inductive logic or bottom-up reasoning.

A hypothesis states your predictions about what your research will find. It is a tentative answer to your research question that has not yet been tested. For some research projects, you might have to write several hypotheses that address different aspects of your research question.

A hypothesis is not just a guess — it should be based on existing theories and knowledge. It also has to be testable, which means you can support or refute it through scientific research methods (such as experiments, observations and statistical analysis of data).

Triangulation can help:

  • Reduce research bias that comes from using a single method, theory, or investigator
  • Enhance validity by approaching the same topic with different tools
  • Establish credibility by giving you a complete picture of the research problem

But triangulation can also pose problems:

  • It’s time-consuming and labor-intensive, often involving an interdisciplinary team.
  • Your results may be inconsistent or even contradictory.

There are four main types of triangulation :

  • Data triangulation : Using data from different times, spaces, and people
  • Investigator triangulation : Involving multiple researchers in collecting or analyzing data
  • Theory triangulation : Using varying theoretical perspectives in your research
  • Methodological triangulation : Using different methodologies to approach the same topic

Many academic fields use peer review , largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the published manuscript.

However, peer review is also common in non-academic settings. The United Nations, the European Union, and many individual nations use peer review to evaluate grant applications. It is also widely used in medical and health-related fields as a teaching or quality-of-care measure. 

Peer assessment is often used in the classroom as a pedagogical tool. Both receiving feedback and providing it are thought to enhance the learning process, helping students think critically and collaboratively.

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. It also represents an excellent opportunity to get feedback from renowned experts in your field. It acts as a first defense, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process.

Peer-reviewed articles are considered a highly credible source due to the stringent process they go through before publication.

In general, the peer review process involves the following steps: 

  • First, the author submits the manuscript to the editor.
  • The editor then decides whether to reject the manuscript and send it back to the author, or to send it onward to the selected peer reviewer(s).
  • Next, the peer review itself occurs. The reviewer provides feedback, addressing any major or minor issues with the manuscript, and gives their advice regarding what edits should be made.
  • Lastly, the edited manuscript is sent back to the author. They input the edits and resubmit it to the editor for publication.

Exploratory research is often used when the issue you’re studying is new or when the data collection process is challenging for some reason.

You can use exploratory research if you have a general idea or a specific question that you want to study but there is no preexisting knowledge or paradigm with which to study it.

Exploratory research is a methodology approach that explores research questions that have not previously been studied in depth. It is often used when the issue you’re studying is new, or the data collection process is challenging in some way.

Explanatory research is used to investigate how or why a phenomenon occurs. Therefore, this type of research is often one of the first stages in the research process , serving as a jumping-off point for future research.

Exploratory research aims to explore the main aspects of an under-researched problem, while explanatory research aims to explain the causes and consequences of a well-defined problem.

Explanatory research is a research method used to investigate how or why something occurs when only a small amount of information is available pertaining to that topic. It can help you increase your understanding of a given topic.

Clean data are valid, accurate, complete, consistent, unique, and uniform. Dirty data include inconsistencies and errors.

Dirty data can come from any part of the research process, including poor research design , inappropriate measurement materials, or flawed data entry.

Data cleaning takes place between data collection and data analyses. But you can use some methods even before collecting data.

For clean data, you should start by designing measures that collect valid data. Data validation at the time of data entry or collection helps you minimize the amount of data cleaning you’ll need to do.

After data collection, you can use data standardization and data transformation to clean your data. You’ll also deal with any missing values, outliers, and duplicate values.

Every dataset requires different techniques to clean dirty data , but you need to address these issues in a systematic way. You focus on finding and resolving data points that don’t agree or fit with the rest of your dataset.

These data might be missing values, outliers, duplicate values, incorrectly formatted, or irrelevant. You’ll start with screening and diagnosing your data. Then, you’ll often standardize and accept or remove data to make your dataset consistent and valid.
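A minimal pandas sketch (with a small, hypothetical "dirty" dataset) might screen and clean the data along these lines:

```python
import pandas as pd
import numpy as np

# Hypothetical "dirty" survey data: inconsistent labels, a duplicate row,
# a missing value, and an implausible outlier.
df = pd.DataFrame({
    "respondent_id": [1, 2, 2, 3, 4],
    "country": ["UK", "uk", "uk", "USA", "U.S.A."],
    "age": [34, 28, 28, np.nan, 510],
})

# Remove exact duplicate records.
df = df.drop_duplicates()

# Standardize inconsistent category labels.
df["country"] = df["country"].str.upper().replace({"U.S.A.": "USA"})

# Flag implausible values as missing, then decide how to handle them.
df.loc[~df["age"].between(0, 120), "age"] = np.nan

# Here we drop incomplete rows; imputation is another option.
clean = df.dropna(subset=["age"])
print(clean)
```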

Data cleaning is necessary for valid and appropriate analyses. Dirty data contain inconsistencies or errors , but cleaning your data helps you minimize or resolve these.

Without data cleaning, you could end up with a Type I or II error in your conclusion. These types of erroneous conclusions can be practically significant with important consequences, because they lead to misplaced investments or missed opportunities.

Data cleaning involves spotting and resolving potential data inconsistencies or errors to improve your data quality. An error is any value (e.g., recorded weight) that doesn’t reflect the true value (e.g., actual weight) of something that’s being measured.

In this process, you review, analyze, detect, modify, or remove “dirty” data to make your dataset “clean.” Data cleaning is also called data cleansing or data scrubbing.

Research misconduct means making up or falsifying data, manipulating data analyses, or misrepresenting results in research reports. It’s a form of academic fraud.

These actions are committed intentionally and can have serious consequences; research misconduct is not a simple mistake or a point of disagreement but a serious ethical failure.

Anonymity means you don’t know who the participants are, while confidentiality means you know who they are but remove identifying information from your research report. Both are important ethical considerations .

You can only guarantee anonymity by not collecting any personally identifying information—for example, names, phone numbers, email addresses, IP addresses, physical characteristics, photos, or videos.

You can keep data confidential by using aggregate information in your research report, so that you only refer to groups of participants rather than individuals.

Research ethics matter for scientific integrity, human rights and dignity, and collaboration between science and society. These principles make sure that participation in studies is voluntary, informed, and safe.

Ethical considerations in research are a set of principles that guide your research designs and practices. These principles include voluntary participation, informed consent, anonymity, confidentiality, potential for harm, and results communication.

Scientists and researchers must always adhere to a certain code of conduct when collecting data from others .

These considerations protect the rights of research participants, enhance research validity , and maintain scientific integrity.

In multistage sampling , you can use probability or non-probability sampling methods .

For a probability sample, you have to conduct probability sampling at every stage.

You can mix it up by using simple random sampling , systematic sampling , or stratified sampling to select units at different stages, depending on what is applicable and relevant to your study.

Multistage sampling can simplify data collection when you have large, geographically spread samples, and you can obtain a probability sample without a complete sampling frame.

But multistage sampling may not lead to a representative sample, and larger samples are needed for multistage samples to achieve the statistical properties of simple random samples .

These are four of the most common mixed methods designs :

  • Convergent parallel: Quantitative and qualitative data are collected at the same time and analyzed separately. After both analyses are complete, compare your results to draw overall conclusions. 
  • Embedded: Quantitative and qualitative data are collected at the same time, but within a larger quantitative or qualitative design. One type of data is secondary to the other.
  • Explanatory sequential: Quantitative data is collected and analyzed first, followed by qualitative data. You can use this design if you think your qualitative data will explain and contextualize your quantitative findings.
  • Exploratory sequential: Qualitative data is collected and analyzed first, followed by quantitative data. You can use this design if you think the quantitative data will confirm or validate your qualitative findings.

Triangulation in research means using multiple datasets, methods, theories and/or investigators to address a research question. It’s a research strategy that can help you enhance the validity and credibility of your findings.

Triangulation is mainly used in qualitative research , but it’s also commonly applied in quantitative research . Mixed methods research always uses triangulation.

In multistage sampling , or multistage cluster sampling, you draw a sample from a population using smaller and smaller groups at each stage.

This method is often used to collect data from a large, geographically spread group of people in national surveys, for example. You take advantage of hierarchical groupings (e.g., from state to city to neighborhood) to create a sample that’s less expensive and time-consuming to collect data from.

No, the steepness or slope of the line isn’t related to the correlation coefficient value. The correlation coefficient only tells you how closely your data fit on a line, so two datasets with the same correlation coefficient can have very different slopes.

To find the slope of the line, you’ll need to perform a regression analysis .

Correlation coefficients always range between -1 and 1.

The sign of the coefficient tells you the direction of the relationship: a positive value means the variables change together in the same direction, while a negative value means they change together in opposite directions.

The absolute value of a number is equal to the number without its sign. The absolute value of a correlation coefficient tells you the magnitude of the correlation: the greater the absolute value, the stronger the correlation.
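A short numpy sketch (with simulated data) illustrates that two datasets can share the same correlation coefficient while having very different slopes:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 100)
noise = rng.normal(0, 1, 100)

# Two datasets with very different slopes but identical noise structure.
y_steep = 10 * x + 10 * noise
y_shallow = 0.5 * x + 0.5 * noise

r_steep = np.corrcoef(x, y_steep)[0, 1]
r_shallow = np.corrcoef(x, y_shallow)[0, 1]

slope_steep = np.polyfit(x, y_steep, 1)[0]
slope_shallow = np.polyfit(x, y_shallow, 1)[0]

# The correlations match; the slopes differ by a factor of 20.
print(f"r: {r_steep:.3f} vs {r_shallow:.3f}")
print(f"slopes: {slope_steep:.2f} vs {slope_shallow:.2f}")
```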

These are the assumptions your data must meet if you want to use Pearson’s r :

  • Both variables are on an interval or ratio level of measurement
  • Data from both variables follow normal distributions
  • Your data have no outliers
  • Your data is from a random or representative sample
  • You expect a linear relationship between the two variables
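A minimal Python sketch (with hypothetical, simulated measurements) might screen these assumptions before computing Pearson's r:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical interval-level measurements from a random sample.
hours_exercise = rng.normal(5, 1.5, 120)
resting_hr = 80 - 2 * hours_exercise + rng.normal(0, 4, 120)

# Normality check for each variable (Shapiro-Wilk).
print("exercise normal? p =", round(stats.shapiro(hours_exercise).pvalue, 3))
print("heart rate normal? p =", round(stats.shapiro(resting_hr).pvalue, 3))

# Simple outlier screen: flag values more than 3 SDs from the mean.
z = np.abs(stats.zscore(hours_exercise))
print("potential outliers:", int(np.sum(z > 3)))

# If the assumptions look reasonable, compute Pearson's r.
r, p = stats.pearsonr(hours_exercise, resting_hr)
print(f"Pearson's r = {r:.2f} (p = {p:.3g})")
```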

Quantitative research designs can be divided into two main categories:

  • Correlational and descriptive designs are used to investigate characteristics, averages, trends, and associations between variables.
  • Experimental and quasi-experimental designs are used to test causal relationships .

Qualitative research designs tend to be more flexible. Common types of qualitative design include case study , ethnography , and grounded theory designs.

A well-planned research design helps ensure that your methods match your research aims, that you collect high-quality data, and that you use the right kind of analysis to answer your questions, utilizing credible sources . This allows you to draw valid , trustworthy conclusions.

The priorities of a research design can vary depending on the field, but you usually have to specify:

  • Your research questions and/or hypotheses
  • Your overall approach (e.g., qualitative or quantitative )
  • The type of design you’re using (e.g., a survey , experiment , or case study )
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods (e.g., questionnaires , observations)
  • Your data collection procedures (e.g., operationalization , timing and data management)
  • Your data analysis methods (e.g., statistical tests  or thematic analysis )

A research design is a strategy for answering your   research question . It defines your overall approach and determines how you will collect and analyze data.

Questionnaires can be self-administered or researcher-administered.

Self-administered questionnaires can be delivered online or in paper-and-pen formats, in person or through mail. All questions are standardized so that all respondents receive the same questions with identical wording.

Researcher-administered questionnaires are interviews that take place by phone, in-person, or online between researchers and respondents. You can gain deeper insights by clarifying questions for respondents or asking follow-up questions.

You can organize the questions logically, with a clear progression from simple to complex, or randomly between respondents. A logical flow helps respondents process the questionnaire more easily and quickly, but it may lead to bias. Randomization can minimize bias from order effects.
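A minimal Python sketch (with hypothetical survey questions) shows how question order could be randomized per respondent to reduce order effects:

```python
import random

# Hypothetical questionnaire items.
questions = [
    "How satisfied are you with our service?",
    "How likely are you to recommend us?",
    "How easy was the sign-up process?",
]

def question_order(respondent_id):
    # Give each respondent an independently shuffled order,
    # seeded by their ID so the example is reproducible.
    shuffled = questions.copy()
    random.Random(respondent_id).shuffle(shuffled)
    return shuffled

print(question_order(1))
print(question_order(2))
```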

Closed-ended, or restricted-choice, questions offer respondents a fixed set of choices to select from. These questions are easier to answer quickly.

Open-ended or long-form questions allow respondents to answer in their own words. Because there are no restrictions on their choices, respondents can answer in ways that researchers may not have otherwise considered.

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analyzing data from people using questionnaires.

The third variable and directionality problems are two main reasons why correlation isn’t causation .

The third variable problem means that a confounding variable affects both variables to make them seem causally related when they are not.

The directionality problem is when two variables correlate and might actually have a causal relationship, but it’s impossible to conclude which variable causes changes in the other.

Correlation describes an association between variables : when one variable changes, so does the other. A correlation is a statistical indicator of the relationship between variables.

Causation means that changes in one variable bring about changes in the other (i.e., there is a cause-and-effect relationship between variables). The two variables are correlated with each other, and there's also a causal link between them.

While causation and correlation can exist simultaneously, correlation does not imply causation. In other words, correlation is simply a relationship where A relates to B, but A doesn't necessarily cause B to happen (or vice versa). Mistaking correlation for causation is a common error and can lead to the false cause fallacy.

Controlled experiments establish causality, whereas correlational studies only show associations between variables.

  • In an experimental design , you manipulate an independent variable and measure its effect on a dependent variable. Other variables are controlled so they can’t impact the results.
  • In a correlational design , you measure variables without manipulating any of them. You can test whether your variables change together, but you can’t be sure that one variable caused a change in another.

In general, correlational research is high in external validity while experimental research is high in internal validity .

A correlation is usually tested for two variables at a time, but you can test correlations between three or more variables.

A correlation coefficient is a single number that describes the strength and direction of the relationship between your variables.

Different types of correlation coefficients might be appropriate for your data based on their levels of measurement and distributions . The Pearson product-moment correlation coefficient (Pearson’s r ) is commonly used to assess a linear relationship between two quantitative variables.

A correlational research design investigates relationships between two variables (or more) without the researcher controlling or manipulating any of them. It’s a non-experimental type of quantitative research .

A correlation reflects the strength and/or direction of the association between two or more variables.

  • A positive correlation means that both variables change in the same direction.
  • A negative correlation means that the variables change in opposite directions.
  • A zero correlation means there’s no relationship between the variables.

Random error  is almost always present in scientific studies, even in highly controlled settings. While you can’t eradicate it completely, you can reduce random error by taking repeated measurements, using a large sample, and controlling extraneous variables .

You can avoid systematic error through careful design of your sampling , data collection , and analysis procedures. For example, use triangulation to measure your variables using multiple methods; regularly calibrate instruments or procedures; use random sampling and random assignment ; and apply masking (blinding) where possible.

Systematic error is generally a bigger problem in research.

With random error, multiple measurements will tend to cluster around the true value. When you’re collecting data from a large sample , the errors in different directions will cancel each other out.

Systematic errors are much more problematic because they can skew your data away from the true value. This can lead you to false conclusions ( Type I and II errors ) about the relationship between the variables you’re studying.

Random and systematic error are two types of measurement error.

Random error is a chance difference between the observed and true values of something (e.g., a researcher misreading a weighing scale records an incorrect measurement).

Systematic error is a consistent or proportional difference between the observed and true values of something (e.g., a miscalibrated scale consistently records weights as higher than they actually are).
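A small simulation (with hypothetical weight measurements) illustrates why random error tends to average out while systematic error does not:

```python
import numpy as np

rng = np.random.default_rng(3)
true_weight = 70.0  # kg

# Random error: unbiased noise around the true value.
random_error_readings = true_weight + rng.normal(0, 0.5, 1000)

# Systematic error: a miscalibrated scale that always reads 2 kg high,
# on top of the same random noise.
systematic_readings = true_weight + 2.0 + rng.normal(0, 0.5, 1000)

# With many measurements, random errors cancel out on average,
# but the systematic bias remains.
print("mean with random error only:", round(random_error_readings.mean(), 2))
print("mean with systematic error: ", round(systematic_readings.mean(), 2))
```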

On graphs, the explanatory variable is conventionally placed on the x-axis, while the response variable is placed on the y-axis.

  • If you have quantitative variables , use a scatterplot or a line graph.
  • If your response variable is categorical, use a scatterplot or a line graph.
  • If your explanatory variable is categorical, use a bar graph.

The term “ explanatory variable ” is sometimes preferred over “ independent variable ” because, in real world contexts, independent variables are often influenced by other variables. This means they aren’t totally independent.

Multiple independent variables may also be correlated with each other, so “explanatory variables” is a more appropriate term.

The difference between explanatory and response variables is simple:

  • An explanatory variable is the expected cause, and it explains the results.
  • A response variable is the expected effect, and it responds to other variables.

In a controlled experiment , all extraneous variables are held constant so that they can’t influence the results. Controlled experiments require:

  • A control group that receives a standard treatment, a fake treatment, or no treatment.
  • Random assignment of participants to ensure the groups are equivalent.

Depending on your study topic, there are various other methods of controlling variables .

There are 4 main types of extraneous variables :

  • Demand characteristics : environmental cues that encourage participants to conform to researchers’ expectations.
  • Experimenter effects : unintentional actions by researchers that influence study outcomes.
  • Situational variables : environmental variables that alter participants’ behaviors.
  • Participant variables : any characteristic or aspect of a participant’s background that could affect study results.

An extraneous variable is any variable that you’re not investigating that can potentially affect the dependent variable of your research study.

A confounding variable is a type of extraneous variable that not only affects the dependent variable, but is also related to the independent variable.

In a factorial design, multiple independent variables are tested.

If you test two variables, each level of one independent variable is combined with each level of the other independent variable to create different conditions.

Within-subjects designs have many potential threats to internal validity , but they are also very statistically powerful .

Advantages:

  • Only requires small samples
  • Statistically powerful
  • Removes the effects of individual differences on the outcomes

Disadvantages:

  • Internal validity threats reduce the likelihood of establishing a direct relationship between variables
  • Time-related effects, such as growth, can influence the outcomes
  • Carryover effects mean that the specific order of different treatments affect the outcomes

While a between-subjects design has fewer threats to internal validity , it also requires more participants for high statistical power than a within-subjects design .

Advantages:

  • Prevents carryover effects of learning and fatigue
  • Shorter study duration

Disadvantages:

  • Needs larger samples for high power
  • Uses more resources to recruit participants, administer sessions, cover costs, etc.
  • Individual differences may be an alternative explanation for results

Yes. Between-subjects and within-subjects designs can be combined in a single study when you have two or more independent variables (a factorial design). In a mixed factorial design, one variable is altered between subjects and another is altered within subjects.

In a between-subjects design , every participant experiences only one condition, and researchers assess group differences between participants in various conditions.

In a within-subjects design , each participant experiences all conditions, and researchers test the same participants repeatedly for differences between conditions.

The word “between” means that you’re comparing different conditions between groups, while the word “within” means you’re comparing different conditions within the same group.

Random assignment is used in experiments with a between-groups or independent measures design. In this research design, there’s usually a control group and one or more experimental groups. Random assignment helps ensure that the groups are comparable.

In general, you should always use random assignment in this type of experimental design when it is ethically possible and makes sense for your study topic.

To implement random assignment , assign a unique number to every member of your study’s sample .

Then, you can use a random number generator or a lottery method to randomly assign each number to a control or experimental group. You can also do so manually, by flipping a coin or rolling a die to randomly assign participants to groups.
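A minimal Python sketch (with a hypothetical sample of 20 participant IDs) might implement random assignment like this:

```python
import random

# Hypothetical sample: 20 participant IDs.
participants = list(range(1, 21))

# Shuffle the IDs, then split them evenly into two groups.
random.seed(42)  # for a reproducible example
random.shuffle(participants)

half = len(participants) // 2
control_group = participants[:half]
treatment_group = participants[half:]

print("Control:  ", sorted(control_group))
print("Treatment:", sorted(treatment_group))
```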

Random selection, or random sampling , is a way of selecting members of a population for your study’s sample.

In contrast, random assignment is a way of sorting the sample into control and experimental groups.

Random sampling enhances the external validity or generalizability of your results, while random assignment improves the internal validity of your study.

In experimental research, random assignment is a way of placing participants from your sample into different groups using randomization. With this method, every member of the sample has a known or equal chance of being placed in a control group or an experimental group.

“Controlling for a variable” means measuring extraneous variables and accounting for them statistically to remove their effects on other variables.

Researchers often model control variable data along with independent and dependent variable data in regression analyses and ANCOVAs . That way, you can isolate the control variable’s effects from the relationship between the variables of interest.

Control variables help you establish a correlational or causal relationship between variables by enhancing internal validity .

If you don’t control relevant extraneous variables , they may influence the outcomes of your study, and you may not be able to demonstrate that your results are really an effect of your independent variable .

A control variable is any variable that’s held constant in a research study. It’s not a variable of interest in the study, but it’s controlled because it could influence the outcomes.

Including mediators and moderators in your research helps you go beyond studying a simple relationship between two variables for a fuller picture of the real world. They are important to consider when studying complex correlational or causal relationships.

Mediators are part of the causal pathway of an effect, and they tell you how or why an effect takes place. Moderators usually help you judge the external validity of your study by identifying the limitations of when the relationship between variables holds.

If something is a mediating variable :

  • It’s caused by the independent variable .
  • It influences the dependent variable.
  • When it's taken into account, the statistical correlation between the independent and dependent variables weakens or disappears compared with when it isn't considered.

A confounder is a third variable that affects variables of interest and makes them seem related when they are not. In contrast, a mediator is the mechanism of a relationship between two variables: it explains the process by which they are related.

A mediator variable explains the process through which two variables are related, while a moderator variable affects the strength and direction of that relationship.

There are three key steps in systematic sampling :

  • Define and list your population , ensuring that it is not ordered in a cyclical or periodic order.
  • Decide on your sample size and calculate your interval, k , by dividing your population size by your target sample size.
  • Choose every k th member of the population as your sample.

Systematic sampling is a probability sampling method where researchers select members of the population at a regular interval – for example, by selecting every 15th person on a list of the population. If the population is in a random order, this can imitate the benefits of simple random sampling .
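A minimal Python sketch (with a hypothetical population list) might implement the three steps above as follows:

```python
import random

# Hypothetical population list of 1,000 members (not in a cyclical order).
population = [f"person_{i}" for i in range(1, 1001)]

target_sample_size = 50
k = len(population) // target_sample_size   # sampling interval

# Choose a random starting point within the first interval,
# then take every k-th member after it.
random.seed(1)
start = random.randrange(k)
sample = population[start::k]

print(f"k = {k}, sample size = {len(sample)}")
```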

Yes, you can create a stratified sample using multiple characteristics, but you must ensure that every participant in your study belongs to one and only one subgroup. In this case, you multiply the numbers of subgroups for each characteristic to get the total number of groups.

For example, if you were stratifying by location with three subgroups (urban, rural, or suburban) and marital status with five subgroups (single, divorced, widowed, married, or partnered), you would have 3 x 5 = 15 subgroups.

You should use stratified sampling when your sample can be divided into mutually exclusive and exhaustive subgroups that you believe will take on different mean values for the variable that you’re studying.

Using stratified sampling will allow you to obtain more precise (with lower variance ) statistical estimates of whatever you are trying to measure.

For example, say you want to investigate how income differs based on educational attainment, but you know that this relationship can vary based on race. Using stratified sampling, you can ensure you obtain a large enough sample from each racial group, allowing you to draw more precise conclusions.

In stratified sampling , researchers divide subjects into subgroups called strata based on characteristics that they share (e.g., race, gender, educational attainment).

Once divided, each subgroup is randomly sampled using another probability sampling method.
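A minimal pandas sketch (with a hypothetical population stratified by educational attainment) shows proportionate stratified sampling:

```python
import pandas as pd
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical population with a stratifying characteristic.
population = pd.DataFrame({
    "id": range(1, 1001),
    "education": rng.choice(["high school", "bachelor", "graduate"], size=1000),
})

# Proportionate stratified sample: randomly draw 10% from each stratum.
sample = (
    population
    .groupby("education", group_keys=False)
    .sample(frac=0.10, random_state=5)
)

print(sample["education"].value_counts())
```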

Cluster sampling is more time- and cost-efficient than other probability sampling methods , particularly when it comes to large samples spread across a wide geographical area.

However, it provides less statistical certainty than other methods, such as simple random sampling , because it is difficult to ensure that your clusters properly represent the population as a whole.

There are three types of cluster sampling : single-stage, double-stage and multi-stage clustering. In all three types, you first divide the population into clusters, then randomly select clusters for use in your sample.

  • In single-stage sampling , you collect data from every unit within the selected clusters.
  • In double-stage sampling , you select a random sample of units from within the clusters.
  • In multi-stage sampling , you repeat the procedure of randomly sampling elements from within the clusters until you have reached a manageable sample.

Cluster sampling is a probability sampling method in which you divide a population into clusters, such as districts or schools, and then randomly select some of these clusters as your sample.

The clusters should ideally each be mini-representations of the population as a whole.
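A minimal Python sketch (with hypothetical schools as clusters) shows single-stage cluster sampling:

```python
import random

# Hypothetical population organized into clusters (e.g., 20 schools of 30 students).
clusters = {
    f"school_{i}": [f"s{i}_{j}" for j in range(1, 31)]
    for i in range(1, 21)
}

# Single-stage cluster sampling: randomly select whole clusters,
# then collect data from every unit in the selected clusters.
random.seed(2)
selected = random.sample(list(clusters), k=4)
sample = [student for school in selected for student in clusters[school]]

print("selected clusters:", selected)
print("sample size:", len(sample))
```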

If properly implemented, simple random sampling is usually the best sampling method for ensuring both internal and external validity . However, it can sometimes be impractical and expensive to implement, depending on the size of the population to be studied.

If you have a list of every member of the population and the ability to reach whichever members are selected, you can use simple random sampling.

The American Community Survey  is an example of simple random sampling . In order to collect detailed data on the population of the US, the Census Bureau officials randomly select 3.5 million households per year and use a variety of methods to convince them to fill out the survey.

Simple random sampling is a type of probability sampling in which the researcher randomly selects a subset of participants from a population . Each member of the population has an equal chance of being selected. Data is then collected from as large a percentage as possible of this random subset.

Quasi-experimental design is most useful in situations where it would be unethical or impractical to run a true experiment .

Quasi-experiments have lower internal validity than true experiments, but they often have higher external validity  as they can use real-world interventions instead of artificial laboratory settings.

A quasi-experiment is a type of research design that attempts to establish a cause-and-effect relationship. The main difference with a true experiment is that the groups are not randomly assigned.

Blinding is important to reduce research bias (e.g., observer bias , demand characteristics ) and ensure a study’s internal validity .

If participants know whether they are in a control or treatment group , they may adjust their behavior in ways that affect the outcome that researchers are trying to measure. If the people administering the treatment are aware of group assignment, they may treat participants differently and thus directly or indirectly influence the final results.

  • In a single-blind study , only the participants are blinded.
  • In a double-blind study , both participants and experimenters are blinded.
  • In a triple-blind study , the assignment is hidden not only from participants and experimenters, but also from the researchers analyzing the data.

Blinding means hiding who is assigned to the treatment group and who is assigned to the control group in an experiment .

A true experiment (a.k.a. a controlled experiment) always includes at least one control group that doesn’t receive the experimental treatment.

However, some experiments use a within-subjects design to test treatments without a control group. In these designs, you usually compare one group’s outcomes before and after a treatment (instead of comparing outcomes between different groups).

For strong internal validity , it’s usually best to include a control group if possible. Without a control group, it’s harder to be certain that the outcome was caused by the experimental treatment and not by other variables.

An experimental group, also known as a treatment group, receives the treatment whose effect researchers wish to study, whereas a control group does not. They should be identical in all other ways.

Individual Likert-type questions are generally considered ordinal data , because the items have clear rank order, but don’t have an even distribution.

Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.

The type of data determines what statistical tests you should use to analyze your data.

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviors. It is made up of 4 or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey , you present participants with Likert-type questions or statements, and a continuum of items, usually with 5 or 7 possible responses, to capture their degree of agreement.
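A minimal pandas sketch (with hypothetical responses to four Likert-type items) shows how individual item scores are combined into an overall scale score:

```python
import pandas as pd

# Hypothetical responses to four Likert-type items (1 = strongly disagree,
# 5 = strongly agree) that together measure one attitude.
responses = pd.DataFrame({
    "item_1": [4, 2, 5, 3],
    "item_2": [5, 1, 4, 3],
    "item_3": [4, 2, 5, 2],
    "item_4": [3, 1, 4, 3],
})

# Individual items are ordinal; the combined scale score is often
# treated as interval data.
responses["scale_score"] = responses[["item_1", "item_2", "item_3", "item_4"]].sum(axis=1)
print(responses)
```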

In scientific research, concepts are the abstract ideas or phenomena that are being studied (e.g., educational achievement). Variables are properties or characteristics of the concept (e.g., performance at school), while indicators are ways of measuring or quantifying variables (e.g., yearly grade reports).

The process of turning abstract concepts into measurable variables and indicators is called operationalization .

There are various approaches to qualitative data analysis , but they all share five steps in common:

  • Prepare and organize your data.
  • Review and explore your data.
  • Develop a data coding system.
  • Assign codes to the data.
  • Identify recurring themes.

The specifics of each step depend on the focus of the analysis. Some common approaches include textual analysis , thematic analysis , and discourse analysis .

There are five common approaches to qualitative research :

  • Grounded theory involves collecting data in order to develop new theories.
  • Ethnography involves immersing yourself in a group or organization to understand its culture.
  • Narrative research involves interpreting stories to understand how people make sense of their experiences and perceptions.
  • Phenomenological research involves investigating phenomena through people’s lived experiences.
  • Action research links theory and practice in several cycles to drive innovative changes.

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses , by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.
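A minimal Python sketch (with hypothetical treatment and control scores) shows this logic using an independent-samples t-test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

# Hypothetical outcome scores for a treatment and a control group.
treatment = rng.normal(105, 15, 60)
control = rng.normal(100, 15, 60)

# Independent-samples t-test: how likely is a difference this large
# if the two groups actually came from the same population?
t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```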

Operationalization means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioral avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data , it’s important to consider how you will operationalize the variables that you want to measure.

When conducting research, collecting original data has significant advantages:

  • You can tailor data collection to your specific research aims (e.g. understanding the needs of your consumers or user testing your website)
  • You can control and standardize the process for high reliability and validity (e.g. choosing appropriate measurements and sampling methods )

However, there are also some drawbacks: data collection can be time-consuming, labor-intensive and expensive. In some cases, it’s more efficient to use secondary data that has already been collected by someone else, but the data might be less reliable.

Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organizations.

There are several methods you can use to decrease the impact of confounding variables on your research: restriction, matching, statistical control and randomization.

In restriction , you restrict your sample by only including certain subjects that have the same values of potential confounding variables.

In matching , you match each of the subjects in your treatment group with a counterpart in the comparison group. The matched subjects have the same values on any potential confounding variables, and only differ in the independent variable .

In statistical control , you include potential confounders as variables in your regression .

In randomization , you randomly assign the treatment (or independent variable) in your study to a sufficiently large number of subjects, which allows you to control for all potential confounding variables.
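A minimal numpy sketch (with simulated data in which age confounds the relationship between exercise and blood pressure) illustrates statistical control by comparing an unadjusted and an adjusted regression estimate:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 500

# Hypothetical confounder (age) that influences both the treatment
# (exercise) and the outcome (blood pressure).
age = rng.uniform(20, 70, n)
exercise = 10 - 0.1 * age + rng.normal(0, 1, n)
blood_pressure = 100 + 0.5 * age - 2 * exercise + rng.normal(0, 5, n)

# Naive model: exercise only (confounded estimate).
X_naive = np.column_stack([np.ones(n), exercise])
b_naive, *_ = np.linalg.lstsq(X_naive, blood_pressure, rcond=None)

# Statistical control: include the confounder in the regression.
X_adj = np.column_stack([np.ones(n), exercise, age])
b_adj, *_ = np.linalg.lstsq(X_adj, blood_pressure, rcond=None)

print("exercise effect, unadjusted:", round(b_naive[1], 2))
print("exercise effect, adjusted for age:", round(b_adj[1], 2))
```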

A confounding variable is closely related to both the independent and dependent variables in a study. An independent variable represents the supposed cause , while the dependent variable is the supposed effect . A confounding variable is a third variable that influences both the independent and dependent variables.

Failing to account for confounding variables can cause you to wrongly estimate the relationship between your independent and dependent variables.

To ensure the internal validity of your research, you must consider the impact of confounding variables. If you fail to account for them, you might over- or underestimate the causal relationship between your independent and dependent variables , or even find a causal relationship where none exists.

Yes, but including more than one of either type requires multiple research questions .

For example, if you are interested in the effect of a diet on health, you can use multiple measures of health: blood sugar, blood pressure, weight, pulse, and many more. Each of these is its own dependent variable with its own research question.

You could also choose to look at the effect of exercise levels as well as diet, or even the additional effect of the two combined. Each of these is a separate independent variable .

To ensure the internal validity of an experiment , you should only change one independent variable at a time.

No. The value of a dependent variable depends on an independent variable, so a variable cannot be both independent and dependent at the same time. It must be either the cause or the effect, not both!

You want to find out how blood sugar levels are affected by drinking diet soda and regular soda, so you conduct an experiment .

  • The type of soda – diet or regular – is the independent variable .
  • The level of blood sugar that you measure is the dependent variable – it changes depending on the type of soda.

Determining cause and effect is one of the most important parts of scientific research. It’s essential to know which is the cause – the independent variable – and which is the effect – the dependent variable.

In non-probability sampling , the sample is selected based on non-random criteria, and not every member of the population has a chance of being included.

Common non-probability sampling methods include convenience sampling , voluntary response sampling, purposive sampling , snowball sampling, and quota sampling .

Probability sampling means that every member of the target population has a known chance of being included in the sample.

Probability sampling methods include simple random sampling , systematic sampling , stratified sampling , and cluster sampling .

Using careful research design and sampling procedures can help you avoid sampling bias . Oversampling can be used to correct undercoverage bias .

Some common types of sampling bias include self-selection bias , nonresponse bias , undercoverage bias , survivorship bias , pre-screening or advertising bias, and healthy user bias.

Sampling bias is a threat to external validity – it limits the generalizability of your findings to a broader group of people.

A sampling error is the difference between a population parameter and a sample statistic .

A statistic refers to measures about the sample , while a parameter refers to measures about the population .

Populations are used when a research question requires data from every member of the population. This is usually only feasible when the population is small and easily accessible.

Samples are used to make inferences about populations . Samples are easier to collect data from because they are practical, cost-effective, convenient, and manageable.

There are seven threats to external validity : selection bias , history, experimenter effect, Hawthorne effect , testing effect, aptitude-treatment interaction, and situation effect.

The two types of external validity are population validity (whether you can generalize to other groups of people) and ecological validity (whether you can generalize to other situations and settings).

The external validity of a study is the extent to which you can generalize your findings to different groups of people, situations, and measures.

Cross-sectional studies cannot establish a cause-and-effect relationship or analyze behavior over a period of time. To investigate cause and effect, you need to do a longitudinal study or an experimental study .

Cross-sectional studies are less expensive and time-consuming than many other types of study. They can provide useful insights into a population’s characteristics and identify correlations for further research.

Sometimes only cross-sectional data is available for analysis; other times your research question may only require a cross-sectional study to answer it.

Longitudinal studies can last anywhere from weeks to decades, although they tend to be at least a year long.

The 1970 British Cohort Study , which has collected data on the lives of 17,000 Brits since their births in 1970, is one well-known example of a longitudinal study .

Longitudinal studies are better to establish the correct sequence of events, identify changes over time, and provide insight into cause-and-effect relationships, but they also tend to be more expensive and time-consuming than other types of studies.

Longitudinal studies and cross-sectional studies are two different types of research design . In a cross-sectional study you collect data from a population at a specific point in time; in a longitudinal study you repeatedly collect data from the same sample over an extended period of time.

Longitudinal study:

  • Repeated observations
  • Observes the same group multiple times
  • Follows changes in participants over time

Cross-sectional study:

  • Observations at a single point in time
  • Observes different groups (a “cross-section”) in the population
  • Provides a snapshot of society at a given point

There are eight threats to internal validity : history, maturation, instrumentation, testing, selection bias , regression to the mean, social interaction and attrition .

Internal validity is the extent to which you can be confident that a cause-and-effect relationship established in a study cannot be explained by other factors.

In mixed methods research , you use both qualitative and quantitative data collection and analysis methods to answer your research question .

The research methods you use depend on the type of data you need to answer your research question .

  • If you want to measure something or test a hypothesis , use quantitative methods . If you want to explore ideas, thoughts and meanings, use qualitative methods .
  • If you want to analyze a large amount of readily-available data, use secondary data. If you want data specific to your purposes with control over how it is generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables , use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

A confounding variable , also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.

A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.

In your research design , it’s important to identify potential confounding variables and plan how you will reduce their impact.

Discrete and continuous variables are two types of quantitative variables :

  • Discrete variables represent counts (e.g. the number of objects in a collection).
  • Continuous variables represent measurable amounts (e.g. water volume or weight).

Quantitative variables are any variables where the data represent amounts (e.g. height, weight, or age).

Categorical variables are any variables where the data represent groups. This includes rankings (e.g. finishing places in a race), classifications (e.g. brands of cereal), and binary outcomes (e.g. coin flips).

You need to know what type of variables you are working with to choose the right statistical test for your data and interpret your results .

You can think of independent and dependent variables in terms of cause and effect: an independent variable is the variable you think is the cause , while a dependent variable is the effect .

In an experiment, you manipulate the independent variable and measure the outcome in the dependent variable. For example, in an experiment about the effect of nutrients on crop growth:

  • The  independent variable  is the amount of nutrients added to the crop field.
  • The  dependent variable is the biomass of the crops at harvest time.

Defining your variables, and deciding how you will manipulate and measure them, is an important part of experimental design .

Experimental design means planning a set of procedures to investigate a relationship between variables . To design a controlled experiment, you need:

  • A testable hypothesis
  • At least one independent variable that can be precisely manipulated
  • At least one dependent variable that can be precisely measured

When designing the experiment, you decide:

  • How you will manipulate the variable(s)
  • How you will control for any potential confounding variables
  • How many subjects or samples will be included in the study
  • How subjects will be assigned to treatment levels

Experimental design is essential to the internal and external validity of your experiment.

Internal validity is the degree of confidence that the causal relationship you are testing is not influenced by other factors or variables .

External validity is the extent to which your results can be generalized to other contexts.

The validity of your experiment depends on your experimental design .

Reliability and validity are both about how well a method measures something:

  • Reliability refers to the  consistency of a measure (whether the results can be reproduced under the same conditions).
  • Validity   refers to the  accuracy of a measure (whether the results really do represent what they are supposed to measure).

If you are doing experimental research, you also have to consider the internal and external validity of your experiment.

A sample is a subset of individuals from a larger population . Sampling means selecting the group that you will actually collect data from in your research. For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

In statistics, sampling allows you to test a hypothesis about the characteristics of a population.

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.

Methodology refers to the overarching strategy and rationale of your research project . It involves studying the methods used in your field and the theories or principles behind them, in order to develop an approach that matches your objectives.

Methods are the specific tools and procedures you use to collect and analyze data (for example, experiments, surveys , and statistical tests ).

In shorter scientific papers, where the aim is to report the findings of a specific study, you might simply describe what you did in a methods section .

In a longer or more complex research project, such as a thesis or dissertation , you will probably include a methodology section , where you explain your approach to answering the research questions and cite relevant sources to support your choice of methods.

Inductive and/or Deductive Research Designs

First Online: 27 October 2022

Md. Shahidul Haque

This chapter aims to introduce the readers, especially the Bangladeshi undergraduate and postgraduate students to some fundamental considerations of inductive and deductive research designs. The deductive approach refers to testing a theory, where the researcher builds up a theory or hypotheses and plans a research stratagem to examine the formulated theory. On the contrary, the inductive approach intends to construct a theory, where the researcher begins by gathering data to establish a theory. In the beginning, a researcher must clarify which approach he/she will follow in his/her research work. The chapter discusses basic concepts, characteristics, steps and examples of inductive and deductive research designs. Here, also a comparison between inductive and deductive research designs is shown. It concludes with a look at how both inductive and deductive designs are used comprehensively to constitute a clearer image of research work.

About this chapter

Haque, M.S. (2022). Inductive and/or Deductive Research Designs. In: Islam, M.R., Khan, N.A., Baikady, R. (eds) Principles of Social Research Methodology. Springer, Singapore. https://doi.org/10.1007/978-981-19-5441-2_5

J Korean Med Sci. 2022 Apr 25; 37(16)

A Practical Guide to Writing Quantitative and Qualitative Research Questions and Hypotheses in Scholarly Articles

Edward Barroga

1 Department of General Education, Graduate School of Nursing Science, St. Luke’s International University, Tokyo, Japan.

Glafera Janet Matanguihan

2 Department of Biological Sciences, Messiah University, Mechanicsburg, PA, USA.

The development of research questions and the subsequent hypotheses are prerequisites to defining the main research purpose and specific objectives of a study. Consequently, these objectives determine the study design and research outcome. The development of research questions is a process based on knowledge of current trends, cutting-edge studies, and technological advances in the research field. Excellent research questions are focused and require a comprehensive literature search and in-depth understanding of the problem being investigated. Initially, research questions may be written as descriptive questions which could be developed into inferential questions. These questions must be specific and concise to provide a clear foundation for developing hypotheses. Hypotheses are more formal predictions about the research outcomes. These specify the possible results that may or may not be expected regarding the relationship between groups. Thus, research questions and hypotheses clarify the main purpose and specific objectives of the study, which in turn dictate the design of the study, its direction, and outcome. Studies developed from good research questions and hypotheses will have trustworthy outcomes with wide-ranging social and health implications.

INTRODUCTION

Scientific research is usually initiated by posing evidenced-based research questions which are then explicitly restated as hypotheses. 1 , 2 The hypotheses provide directions to guide the study, solutions, explanations, and expected results. 3 , 4 Both research questions and hypotheses are essentially formulated based on conventional theories and real-world processes, which allow the inception of novel studies and the ethical testing of ideas. 5 , 6

It is crucial to have knowledge of both quantitative and qualitative research 2 as both types of research involve writing research questions and hypotheses. 7 However, these crucial elements of research are sometimes overlooked; if not overlooked, then framed without the forethought and meticulous attention they need. Planning and careful consideration are needed when developing quantitative or qualitative research, particularly when conceptualizing research questions and hypotheses. 4

There is a continuing need to support researchers in the creation of innovative research questions and hypotheses, as well as for journal articles that carefully review these elements. 1 When research questions and hypotheses are not carefully thought of, unethical studies and poor outcomes usually ensue. Carefully formulated research questions and hypotheses define well-founded objectives, which in turn determine the appropriate design, course, and outcome of the study. This article then aims to discuss in detail the various aspects of crafting research questions and hypotheses, with the goal of guiding researchers as they develop their own. Examples from the authors and peer-reviewed scientific articles in the healthcare field are provided to illustrate key points.

DEFINITIONS AND RELATIONSHIP OF RESEARCH QUESTIONS AND HYPOTHESES

A research question is what a study aims to answer after data analysis and interpretation. The answer is written in length in the discussion section of the paper. Thus, the research question gives a preview of the different parts and variables of the study meant to address the problem posed in the research question. 1 An excellent research question clarifies the research writing while facilitating understanding of the research topic, objective, scope, and limitations of the study. 5

On the other hand, a research hypothesis is an educated statement of an expected outcome. This statement is based on background research and current knowledge. 8 , 9 The research hypothesis makes a specific prediction about a new phenomenon 10 or a formal statement on the expected relationship between an independent variable and a dependent variable. 3 , 11 It provides a tentative answer to the research question to be tested or explored. 4

Hypotheses employ reasoning to predict a theory-based outcome. 10 These can also be developed from theories by focusing on components of theories that have not yet been observed. 10 The validity of hypotheses is often based on the testability of the prediction made in a reproducible experiment. 8

Conversely, hypotheses can also be rephrased as research questions. Several hypotheses based on existing theories and knowledge may be needed to answer a research question. Developing ethical research questions and hypotheses creates a research design that has logical relationships among variables. These relationships serve as a solid foundation for the conduct of the study. 4 , 11 Haphazardly constructed research questions can result in poorly formulated hypotheses and improper study designs, leading to unreliable results. Thus, the formulations of relevant research questions and verifiable hypotheses are crucial when beginning research. 12

CHARACTERISTICS OF GOOD RESEARCH QUESTIONS AND HYPOTHESES

Excellent research questions are specific and focused. These integrate collective data and observations to confirm or refute the subsequent hypotheses. Well-constructed hypotheses are based on previous reports and verify the research context. These are realistic, in-depth, sufficiently complex, and reproducible. More importantly, these hypotheses can be addressed and tested. 13

There are several characteristics of well-developed hypotheses. Good hypotheses are 1) empirically testable 7 , 10 , 11 , 13 ; 2) backed by preliminary evidence 9 ; 3) testable by ethical research 7 , 9 ; 4) based on original ideas 9 ; 5) have evidenced-based logical reasoning 10 ; and 6) can be predicted. 11 Good hypotheses can infer ethical and positive implications, indicating the presence of a relationship or effect relevant to the research theme. 7 , 11 These are initially developed from a general theory and branch into specific hypotheses by deductive reasoning. In the absence of a theory to base the hypotheses, inductive reasoning based on specific observations or findings form more general hypotheses. 10

TYPES OF RESEARCH QUESTIONS AND HYPOTHESES

Research questions and hypotheses are developed according to the type of research, which can be broadly classified into quantitative and qualitative research. We provide a summary of the types of research questions and hypotheses under quantitative and qualitative research categories in Table 1 .

Table 1. Types of research questions and hypotheses under quantitative and qualitative research.

  • Quantitative research questions: descriptive, comparative, and relationship research questions.
  • Quantitative research hypotheses: simple, complex, directional, non-directional, associative, causal, null, alternative, working, statistical, and logical hypotheses (hypothesis-testing research).
  • Qualitative research questions: contextual, descriptive, evaluation, explanatory, exploratory, generative, ideological, ethnographic, phenomenological, grounded theory, and qualitative case study research questions.
  • Qualitative research hypotheses: hypothesis-generating research.

Research questions in quantitative research

In quantitative research, research questions inquire about the relationships among variables being investigated and are usually framed at the start of the study. These are precise and typically linked to the subject population, dependent and independent variables, and research design. 1 Research questions may also attempt to describe the behavior of a population in relation to one or more variables, or describe the characteristics of variables to be measured ( descriptive research questions ). 1 , 5 , 14 These questions may also aim to discover differences between groups within the context of an outcome variable ( comparative research questions ), 1 , 5 , 14 or elucidate trends and interactions among variables ( relationship research questions ). 1 , 5 We provide examples of descriptive, comparative, and relationship research questions in quantitative research in Table 2 .

Table 2. Examples of quantitative research questions.

  • Descriptive research question (measures responses of subjects to variables; presents variables to measure, analyze, or assess): What is the proportion of resident doctors in the hospital who have mastered ultrasonography (response of subjects to a variable) as a diagnostic technique in their clinical training?
  • Comparative research question (clarifies the difference between one group with the outcome variable and another group without it): Is there a difference in the reduction of lung metastasis in osteosarcoma patients who received the vitamin D adjunctive therapy (group with outcome variable) compared with osteosarcoma patients who did not receive the vitamin D adjunctive therapy (group without outcome variable)?
  • Comparative research question (compares the effects of variables): How does the vitamin D analogue 22-Oxacalcitriol (variable 1) mimic the antiproliferative activity of 1,25-Dihydroxyvitamin D (variable 2) in osteosarcoma cells?
  • Relationship research question (defines trends, associations, relationships, or interactions between the dependent and independent variables): Is there a relationship between the number of medical student suicides (dependent variable) and the level of medical student stress (independent variable) in Japan during the first wave of the COVID-19 pandemic?

Hypotheses in quantitative research

In quantitative research, hypotheses predict the expected relationships among variables. 15 Relationships among variables that can be predicted include 1) between a single dependent variable and a single independent variable ( simple hypothesis ) or 2) between two or more independent and dependent variables ( complex hypothesis ). 4 , 11 Hypotheses may also specify the expected direction to be followed and imply an intellectual commitment to a particular outcome ( directional hypothesis ) 4 . On the other hand, hypotheses may not predict the exact direction and are used in the absence of a theory, or when findings contradict previous studies ( non-directional hypothesis ). 4 In addition, hypotheses can 1) define interdependency between variables ( associative hypothesis ), 4 2) propose an effect on the dependent variable from manipulation of the independent variable ( causal hypothesis ), 4 3) state a negative relationship between two variables ( null hypothesis ), 4 , 11 , 15 4) replace the working hypothesis if rejected ( alternative hypothesis ), 15 explain the relationship of phenomena to possibly generate a theory ( working hypothesis ), 11 5) involve quantifiable variables that can be tested statistically ( statistical hypothesis ), 11 6) or express a relationship whose interlinks can be verified logically ( logical hypothesis ). 11 We provide examples of simple, complex, directional, non-directional, associative, causal, null, alternative, working, statistical, and logical hypotheses in quantitative research, as well as the definition of quantitative hypothesis-testing research in Table 3 .

Table 3. Examples of quantitative research hypotheses.

  • Simple hypothesis (predicts a relationship between a single dependent variable and a single independent variable): If the dose of the new medication (single independent variable) is high, blood pressure (single dependent variable) is lowered.
  • Complex hypothesis (foretells a relationship between two or more independent and dependent variables): The higher the use of anticancer drugs, radiation therapy, and adjunctive agents (3 independent variables), the higher would be the survival rate (1 dependent variable).
  • Directional hypothesis (identifies the study direction based on theory towards a particular outcome to clarify the relationship between variables): Privately funded research projects will have a larger international scope (study direction) than publicly funded research projects.
  • Non-directional hypothesis (the nature of the relationship between two variables or the exact study direction is not identified; does not involve a theory): Women and men are different in terms of helpfulness. (Exact study direction is not identified.)
  • Associative hypothesis (describes variable interdependency; a change in one variable causes a change in another variable): A larger number of people vaccinated against COVID-19 in the region (change in independent variable) will reduce the region’s incidence of COVID-19 infection (change in dependent variable).
  • Causal hypothesis (an effect on the dependent variable is predicted from manipulation of the independent variable): A change into a high-fiber diet (independent variable) will reduce the blood sugar level (dependent variable) of the patient.
  • Null hypothesis (a negative statement indicating no relationship or difference between 2 variables): There is no significant difference in the severity of pulmonary metastases between the new drug (variable 1) and the current drug (variable 2).
  • Alternative hypothesis (following a null hypothesis, an alternative hypothesis predicts a relationship between 2 study variables): The new drug (variable 1) is better on average in reducing the level of pain from pulmonary metastasis than the current drug (variable 2).
  • Working hypothesis (a hypothesis that is initially accepted for further research to produce a feasible theory): Dairy cows fed with concentrates of different formulations will produce different amounts of milk.
  • Statistical hypothesis (an assumption about the value of a population parameter or the relationship among several population characteristics; validity is tested by a statistical experiment or analysis): The mean recovery rate from COVID-19 infection (value of population parameter) is not significantly different between population 1 and population 2. There is a positive correlation between the level of stress at the workplace and the number of suicides (population characteristics) among working people in Japan.
  • Logical hypothesis (offers or proposes an explanation with limited or no extensive evidence): If healthcare workers provide more educational programs about contraception methods, the number of adolescent pregnancies will be less.
  • Hypothesis-testing (quantitative hypothesis-testing research): Quantitative research uses deductive reasoning. This involves the formation of a hypothesis, collection of data in the investigation of the problem, analysis and use of the data from the investigation, and drawing of conclusions to validate or nullify the hypotheses.
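As a hedged illustration of the null/alternative pair and of hypothesis-testing research more generally, the following Python sketch (fabricated example data, assuming SciPy is available) runs an independent-samples t-test of a directional hypothesis that a hypothetical new drug lowers pain scores relative to the current drug:

```python
from scipy import stats

# Fabricated pain scores (0-10) after treatment; illustrative data only.
new_drug = [3.1, 2.8, 4.0, 3.5, 2.9, 3.3, 2.7, 3.8, 3.0, 3.4]
current_drug = [4.2, 4.8, 3.9, 4.5, 5.0, 4.1, 4.7, 4.3, 4.6, 4.4]

# H0 (null): mean pain scores do not differ between the two drugs.
# H1 (directional alternative): the new drug yields lower mean pain scores.
result = stats.ttest_ind(new_drug, current_drug)
t_stat, p_two_sided = result.statistic, result.pvalue

# Convert the two-sided p value to a one-sided value in the hypothesised direction.
p_one_sided = p_two_sided / 2 if t_stat < 0 else 1 - p_two_sided / 2

print(f"t = {t_stat:.2f}, one-sided p = {p_one_sided:.4f}")
if p_one_sided < 0.05:
    print("Reject H0: the data are consistent with the directional alternative.")
else:
    print("Fail to reject H0: no evidence of a difference at the 5% level.")
```

In practice the test, the direction, and the significance level would be specified before data collection; the one-sided conversion above simply mirrors the directional hypothesis described in the table.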

Research questions in qualitative research

Unlike research questions in quantitative research, research questions in qualitative research are usually continuously reviewed and reformulated. The central question and associated subquestions are stated more than the hypotheses. 15 The central question broadly explores a complex set of factors surrounding the central phenomenon, aiming to present the varied perspectives of participants. 15

There are varied goals for which qualitative research questions are developed. These questions can function in several ways, such as to 1) identify and describe existing conditions (contextual research questions); 2) describe a phenomenon (descriptive research questions); 3) assess the effectiveness of existing methods, protocols, theories, or procedures (evaluation research questions); 4) examine a phenomenon or analyze the reasons or relationships between subjects or phenomena (explanatory research questions); or 5) focus on unknown aspects of a particular topic (exploratory research questions). 5 In addition, some qualitative research questions provide new ideas for the development of theories and actions (generative research questions) or advance specific ideologies of a position (ideological research questions). 1 Other qualitative research questions may build on a body of existing literature and become working guidelines (ethnographic research questions). Research questions may also be broadly stated without specific reference to the existing literature or a typology of questions (phenomenological research questions), may be directed towards generating a theory of some process (grounded theory questions), or may address a description of the case and the emerging themes (qualitative case study questions). 15 We provide examples of contextual, descriptive, evaluation, explanatory, exploratory, generative, ideological, ethnographic, phenomenological, grounded theory, and qualitative case study research questions in qualitative research in Table 4, and the definition of qualitative hypothesis-generating research in Table 5.

Table 4. Examples of qualitative research questions.

  • Contextual research question (asks the nature of what already exists; individuals or groups function to further clarify and understand the natural context of real-world problems): What are the experiences of nurses working night shifts in healthcare during the COVID-19 pandemic? (natural context of real-world problems)
  • Descriptive research question (aims to describe a phenomenon): What are the different forms of disrespect and abuse (phenomenon) experienced by Tanzanian women when giving birth in healthcare facilities?
  • Evaluation research question (examines the effectiveness of existing practice or accepted frameworks): How effective are decision aids (effectiveness of existing practice) in helping decide whether to give birth at home or in a healthcare facility?
  • Explanatory research question (clarifies a previously studied phenomenon and explains why it occurs): Why is there an increase in teenage pregnancy (phenomenon) in Tanzania?
  • Exploratory research question (explores areas that have not been fully investigated to gain a deeper understanding of the research problem): What factors affect the mental health of medical students (areas that have not yet been fully investigated) during the COVID-19 pandemic?
  • Generative research question (develops an in-depth understanding of people’s behavior by asking ‘how would’ or ‘what if’ to identify problems and find solutions): How would the extensive research experience of the behavior of new staff impact the success of the novel drug initiative?
  • Ideological research question (aims to advance specific ideas or ideologies of a position): Are Japanese nurses who volunteer in remote African hospitals able to promote humanized care of patients (specific ideas or ideologies) in the areas of safe patient environment, respect of patient privacy, and provision of accurate information related to health and care?
  • Ethnographic research question (clarifies peoples’ nature, activities, their interactions, and the outcomes of their actions in specific settings): What are the demographic characteristics, rehabilitative treatments, community interactions, and disease outcomes (nature, activities, their interactions, and the outcomes) of people in China who are suffering from pneumoconiosis?
  • Phenomenological research question (seeks to know more about the phenomena that have impacted an individual): What are the lived experiences of parents who have been living with and caring for children with a diagnosis of autism? (phenomena that have impacted an individual)
  • Grounded theory question (focuses on social processes, asking about what happens and how people interact, or uncovering social relationships and behaviors of groups): What are the problems that pregnant adolescents face in terms of social and cultural norms (social processes), and how can these be addressed?
  • Qualitative case study question (assesses a phenomenon using different sources of data to answer “why” and “how” questions; considers how the phenomenon is influenced by its contextual situation): How does quitting work and assuming the role of a full-time mother (phenomenon assessed) change the lives of women in Japan?

Table 5. Definition of qualitative hypothesis-generating research.

  • Hypothesis-generating (qualitative hypothesis-generating research): Qualitative research uses inductive reasoning. This involves data collection from study participants or the literature regarding a phenomenon of interest, using the collected data to develop a formal hypothesis, and using the formal hypothesis as a framework for testing the hypothesis. Qualitative exploratory studies explore areas deeper, clarifying subjective experience and allowing formulation of a formal hypothesis potentially testable in a future quantitative approach.

Qualitative studies usually pose at least one central research question and several subquestions starting with How or What . These research questions use exploratory verbs such as explore or describe . These also focus on one central phenomenon of interest, and may mention the participants and research site. 15

Hypotheses in qualitative research

Hypotheses in qualitative research are stated in the form of a clear statement concerning the problem to be investigated. Unlike in quantitative research where hypotheses are usually developed to be tested, qualitative research can lead to both hypothesis-testing and hypothesis-generating outcomes. 2 When studies require both quantitative and qualitative research questions, this suggests an integrative process between both research methods wherein a single mixed-methods research question can be developed. 1

FRAMEWORKS FOR DEVELOPING RESEARCH QUESTIONS AND HYPOTHESES

Research questions followed by hypotheses should be developed before the start of the study. 1 , 12 , 14 It is crucial to develop feasible research questions on a topic that is interesting to both the researcher and the scientific community. This can be achieved by a meticulous review of previous and current studies to establish a novel topic. Specific areas are subsequently focused on to generate ethical research questions. The relevance of the research questions is evaluated in terms of clarity of the resulting data, specificity of the methodology, objectivity of the outcome, depth of the research, and impact of the study. 1 , 5 These aspects constitute the FINER criteria (i.e., Feasible, Interesting, Novel, Ethical, and Relevant). 1 Clarity and effectiveness are achieved if research questions meet the FINER criteria. In addition to the FINER criteria, Ratan et al. described focus, complexity, novelty, feasibility, and measurability for evaluating the effectiveness of research questions. 14

The PICOT and PEO frameworks are also used when developing research questions. 1 The following elements are addressed in these frameworks, PICOT: P-population/patients/problem, I-intervention or indicator being studied, C-comparison group, O-outcome of interest, and T-timeframe of the study; PEO: P-population being studied, E-exposure to preexisting conditions, and O-outcome of interest. 1 Research questions are also considered good if these meet the “FINERMAPS” framework: Feasible, Interesting, Novel, Ethical, Relevant, Manageable, Appropriate, Potential value/publishable, and Systematic. 14
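Purely as an illustration of how a framework such as PICOT decomposes a question into parts, the sketch below (with invented study details, not drawn from any cited study) stores the elements in a Python dictionary and assembles them into a draft research question:

```python
# Hypothetical PICOT elements for a draft quantitative research question.
picot = {
    "P (population/problem)": "adults with type 2 diabetes attending an outpatient clinic",
    "I (intervention/indicator)": "a nurse-led telephone coaching program",
    "C (comparison)": "usual care",
    "O (outcome)": "change in HbA1c",
    "T (timeframe)": "over 6 months",
}

draft_question = (
    f"In {picot['P (population/problem)']}, does {picot['I (intervention/indicator)']} "
    f"compared with {picot['C (comparison)']} improve {picot['O (outcome)']} "
    f"{picot['T (timeframe)']}?"
)
print(draft_question)
```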

As we indicated earlier, research questions and hypotheses that are not carefully formulated result in unethical studies or poor outcomes. To illustrate this, we provide some examples of ambiguous research questions and hypotheses that result in unclear and weak research objectives in quantitative research (Table 6) 16 and qualitative research (Table 7) 17, and show how to transform these ambiguous research questions and hypotheses into clear and good statements.

Table 6. Ambiguous versus clear research question, hypotheses, and research objective in quantitative research.

Research question
  • Unclear and weak statement (Statement 1): Which is more effective between smoke moxibustion and smokeless moxibustion?
  • Clear and good statement (Statement 2): “Moreover, regarding smoke moxibustion versus smokeless moxibustion, it remains unclear which is more effective, safe, and acceptable to pregnant women, and whether there is any difference in the amount of heat generated.”
  • Points to avoid: 1) vague and unfocused questions; 2) closed questions simply answerable by yes or no; 3) questions requiring a simple choice.

Hypothesis
  • Unclear and weak statement (Statement 1): The smoke moxibustion group will have higher cephalic presentation.
  • Clear and good statement (Statement 2): “Hypothesis 1. The smoke moxibustion stick group (SM group) and smokeless moxibustion stick group (SLM group) will have higher rates of cephalic presentation after treatment than the control group. Hypothesis 2. The SM group and SLM group will have higher rates of cephalic presentation at birth than the control group. Hypothesis 3. There will be no significant differences in the well-being of the mother and child among the three groups in terms of the following outcomes: premature birth, premature rupture of membranes (PROM) at < 37 weeks, Apgar score < 7 at 5 min, umbilical cord blood pH < 7.1, admission to neonatal intensive care unit (NICU), and intrauterine fetal death.”
  • Points to avoid: 1) unverifiable hypotheses; 2) incompletely stated groups of comparison; 3) insufficiently described variables or outcomes.

Research objective
  • Unclear and weak statement (Statement 1): To determine which is more effective between smoke moxibustion and smokeless moxibustion.
  • Clear and good statement (Statement 2): “The specific aims of this pilot study were (a) to compare the effects of smoke moxibustion and smokeless moxibustion treatments with the control group as a possible supplement to ECV for converting breech presentation to cephalic presentation and increasing adherence to the newly obtained cephalic position, and (b) to assess the effects of these treatments on the well-being of the mother and child.”
  • Points to avoid: 1) poor understanding of the research question and hypotheses; 2) insufficient description of population, variables, or study outcomes.

The unclear and weak statements were composed for comparison and illustrative purposes only; the clear and good statements are direct quotes from Higashihara and Horiuchi. 16

Table 7. Ambiguous versus clear research question, hypotheses, and research objective in qualitative research.

Research question
  • Unclear and weak statement (Statement 1): Does disrespect and abuse (D&A) occur in childbirth in Tanzania?
  • Clear and good statement (Statement 2): How does disrespect and abuse (D&A) occur and what are the types of physical and psychological abuses observed in midwives’ actual care during facility-based childbirth in urban Tanzania?
  • Points to avoid: 1) ambiguous or oversimplistic questions; 2) questions unverifiable by data collection and analysis.

Hypothesis
  • Unclear and weak statement (Statement 1): Disrespect and abuse (D&A) occur in childbirth in Tanzania.
  • Clear and good statement (Statement 2): Hypothesis 1: Several types of physical and psychological abuse by midwives in actual care occur during facility-based childbirth in urban Tanzania. Hypothesis 2: Weak nursing and midwifery management contribute to the D&A of women during facility-based childbirth in urban Tanzania.
  • Points to avoid: 1) statements simply expressing facts; 2) insufficiently described concepts or variables.

Research objective
  • Unclear and weak statement (Statement 1): To describe disrespect and abuse (D&A) in childbirth in Tanzania.
  • Clear and good statement (Statement 2): “This study aimed to describe from actual observations the respectful and disrespectful care received by women from midwives during their labor period in two hospitals in urban Tanzania.”
  • Points to avoid: 1) statements unrelated to the research question and hypotheses; 2) unattainable or unexplorable objectives.

The clear and good research objective is a direct quote from Shimoda et al. 17 ; the other statements were composed for comparison and illustrative purposes only.

CONSTRUCTING RESEARCH QUESTIONS AND HYPOTHESES

To construct effective research questions and hypotheses, it is very important to 1) clarify the background and 2) identify the research problem at the outset of the research, within a specific timeframe. 9 Then, 3) review or conduct preliminary research to collect all available knowledge about the possible research questions by studying theories and previous studies. 18 Afterwards, 4) construct research questions to investigate the research problem. Identify variables to be accessed from the research questions 4 and make operational definitions of constructs from the research problem and questions. Thereafter, 5) construct specific deductive or inductive predictions in the form of hypotheses. 4 Finally, 6) state the study aims . This general flow for constructing effective research questions and hypotheses prior to conducting research is shown in Fig. 1 .

[Fig. 1. General flow for constructing effective research questions and hypotheses prior to conducting research.]

Research questions are used more frequently in qualitative research than objectives or hypotheses. 3 These questions seek to discover, understand, explore or describe experiences by asking “What” or “How.” The questions are open-ended to elicit a description rather than to relate variables or compare groups. The questions are continually reviewed, reformulated, and changed during the qualitative study. 3 Research questions are also used more frequently in survey projects than hypotheses in experiments in quantitative research to compare variables and their relationships.

Hypotheses are constructed based on the variables identified and as an if-then statement, following the template, ‘If a specific action is taken, then a certain outcome is expected.’ At this stage, some ideas regarding expectations from the research to be conducted must be drawn. 18 Then, the variables to be manipulated (independent) and influenced (dependent) are defined. 4 Thereafter, the hypothesis is stated and refined, and reproducible data tailored to the hypothesis are identified, collected, and analyzed. 4 The hypotheses must be testable and specific, 18 and should describe the variables and their relationships, the specific group being studied, and the predicted research outcome. 18 Hypotheses construction involves a testable proposition to be deduced from theory, and independent and dependent variables to be separated and measured separately. 3 Therefore, good hypotheses must be based on good research questions constructed at the start of a study or trial. 12

In summary, research questions are constructed after establishing the background of the study. Hypotheses are then developed based on the research questions. Thus, it is crucial to have excellent research questions to generate superior hypotheses. In turn, these would determine the research objectives and the design of the study, and ultimately, the outcome of the research. 12 Algorithms for building research questions and hypotheses are shown in Fig. 2 for quantitative research and in Fig. 3 for qualitative research.

[Fig. 2. Algorithm for building research questions and hypotheses in quantitative research.]

EXAMPLES OF RESEARCH QUESTIONS FROM PUBLISHED ARTICLES

  • EXAMPLE 1. Descriptive research question (quantitative research)
  • - Presents research variables to be assessed (distinct phenotypes and subphenotypes)
  • “BACKGROUND: Since COVID-19 was identified, its clinical and biological heterogeneity has been recognized. Identifying COVID-19 phenotypes might help guide basic, clinical, and translational research efforts.
  • RESEARCH QUESTION: Does the clinical spectrum of patients with COVID-19 contain distinct phenotypes and subphenotypes? ” 19
  • EXAMPLE 2. Relationship research question (quantitative research)
  • - Shows interactions between dependent variable (static postural control) and independent variable (peripheral visual field loss)
  • “Background: Integration of visual, vestibular, and proprioceptive sensations contributes to postural control. People with peripheral visual field loss have serious postural instability. However, the directional specificity of postural stability and sensory reweighting caused by gradual peripheral visual field loss remain unclear.
  • Research question: What are the effects of peripheral visual field loss on static postural control ?” 20
  • EXAMPLE 3. Comparative research question (quantitative research)
  • - Clarifies the difference among groups with an outcome variable (patients enrolled in COMPERA with moderate PH or severe PH in COPD) and another group without the outcome variable (patients with idiopathic pulmonary arterial hypertension (IPAH))
  • “BACKGROUND: Pulmonary hypertension (PH) in COPD is a poorly investigated clinical condition.
  • RESEARCH QUESTION: Which factors determine the outcome of PH in COPD?
  • STUDY DESIGN AND METHODS: We analyzed the characteristics and outcome of patients enrolled in the Comparative, Prospective Registry of Newly Initiated Therapies for Pulmonary Hypertension (COMPERA) with moderate or severe PH in COPD as defined during the 6th PH World Symposium who received medical therapy for PH and compared them with patients with idiopathic pulmonary arterial hypertension (IPAH) .” 21
  • EXAMPLE 4. Exploratory research question (qualitative research)
  • - Explores areas that have not been fully investigated (perspectives of families and children who receive care in clinic-based child obesity treatment) to have a deeper understanding of the research problem
  • “Problem: Interventions for children with obesity lead to only modest improvements in BMI and long-term outcomes, and data are limited on the perspectives of families of children with obesity in clinic-based treatment. This scoping review seeks to answer the question: What is known about the perspectives of families and children who receive care in clinic-based child obesity treatment? This review aims to explore the scope of perspectives reported by families of children with obesity who have received individualized outpatient clinic-based obesity treatment.” 22
  • EXAMPLE 5. Relationship research question (quantitative research)
  • - Defines interactions between dependent variable (use of ankle strategies) and independent variable (changes in muscle tone)
  • “Background: To maintain an upright standing posture against external disturbances, the human body mainly employs two types of postural control strategies: “ankle strategy” and “hip strategy.” While it has been reported that the magnitude of the disturbance alters the use of postural control strategies, it has not been elucidated how the level of muscle tone, one of the crucial parameters of bodily function, determines the use of each strategy. We have previously confirmed using forward dynamics simulations of human musculoskeletal models that an increased muscle tone promotes the use of ankle strategies. The objective of the present study was to experimentally evaluate a hypothesis: an increased muscle tone promotes the use of ankle strategies. Research question: Do changes in the muscle tone affect the use of ankle strategies ?” 23

EXAMPLES OF HYPOTHESES IN PUBLISHED ARTICLES

  • EXAMPLE 1. Working hypothesis (quantitative research)
  • - A hypothesis that is initially accepted for further research to produce a feasible theory
  • “As fever may have benefit in shortening the duration of viral illness, it is plausible to hypothesize that the antipyretic efficacy of ibuprofen may be hindering the benefits of a fever response when taken during the early stages of COVID-19 illness .” 24
  • “In conclusion, it is plausible to hypothesize that the antipyretic efficacy of ibuprofen may be hindering the benefits of a fever response . The difference in perceived safety of these agents in COVID-19 illness could be related to the more potent efficacy to reduce fever with ibuprofen compared to acetaminophen. Compelling data on the benefit of fever warrant further research and review to determine when to treat or withhold ibuprofen for early stage fever for COVID-19 and other related viral illnesses .” 24
  • EXAMPLE 2. Exploratory hypothesis (qualitative research)
  • - Explores particular areas deeper to clarify subjective experience and develop a formal hypothesis potentially testable in a future quantitative approach
  • “We hypothesized that when thinking about a past experience of help-seeking, a self distancing prompt would cause increased help-seeking intentions and more favorable help-seeking outcome expectations .” 25
  • “Conclusion
  • Although a priori hypotheses were not supported, further research is warranted as results indicate the potential for using self-distancing approaches to increasing help-seeking among some people with depressive symptomatology.” 25
  • EXAMPLE 3. Hypothesis-generating research to establish a framework for hypothesis testing (qualitative research)
  • “We hypothesize that compassionate care is beneficial for patients (better outcomes), healthcare systems and payers (lower costs), and healthcare providers (lower burnout). ” 26
  • Compassionomics is the branch of knowledge and scientific study of the effects of compassionate healthcare. Our main hypotheses are that compassionate healthcare is beneficial for (1) patients, by improving clinical outcomes, (2) healthcare systems and payers, by supporting financial sustainability, and (3) HCPs, by lowering burnout and promoting resilience and well-being. The purpose of this paper is to establish a scientific framework for testing the hypotheses above . If these hypotheses are confirmed through rigorous research, compassionomics will belong in the science of evidence-based medicine, with major implications for all healthcare domains.” 26
  • EXAMPLE 4. Statistical hypothesis (quantitative research)
  • - An assumption is made about the relationship among several population characteristics ( gender differences in sociodemographic and clinical characteristics of adults with ADHD ). Validity is tested by statistical experiment or analysis ( chi-square test, Students t-test, and logistic regression analysis)
  • “Our research investigated gender differences in sociodemographic and clinical characteristics of adults with ADHD in a Japanese clinical sample. Due to unique Japanese cultural ideals and expectations of women's behavior that are in opposition to ADHD symptoms, we hypothesized that women with ADHD experience more difficulties and present more dysfunctions than men . We tested the following hypotheses: first, women with ADHD have more comorbidities than men with ADHD; second, women with ADHD experience more social hardships than men, such as having less full-time employment and being more likely to be divorced.” 27
  • “Statistical Analysis
  • ( text omitted ) Between-gender comparisons were made using the chi-squared test for categorical variables and Students t-test for continuous variables…( text omitted ). A logistic regression analysis was performed for employment status, marital status, and comorbidity to evaluate the independent effects of gender on these dependent variables.” 27
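For readers unfamiliar with the procedures named in this excerpt, the following Python sketch (using SciPy and scikit-learn on fabricated data, not the study’s actual dataset) shows the general shape of a chi-squared test for a categorical comparison and a logistic regression for a binary outcome:

```python
import numpy as np
from scipy.stats import chi2_contingency
from sklearn.linear_model import LogisticRegression

# Chi-squared test on a fabricated 2x2 table: gender vs. full-time employment.
#                         employed  not employed
contingency = np.array([[30, 20],   # men
                        [18, 32]])  # women
chi2, p_value, dof, expected = chi2_contingency(contingency)
print(f"Chi-squared = {chi2:.2f}, p = {p_value:.4f}")

# Logistic regression on fabricated individual-level data:
# does gender predict the odds of full-time employment?
rng = np.random.default_rng(0)
gender = rng.integers(0, 2, size=200)    # 0 = man, 1 = woman (illustrative coding)
log_odds = 0.4 - 0.9 * gender            # assumed effect size, for illustration only
employed = rng.random(200) < 1 / (1 + np.exp(-log_odds))

model = LogisticRegression().fit(gender.reshape(-1, 1), employed)
print(f"Estimated log-odds coefficient for gender: {model.coef_[0, 0]:.2f}")
```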

EXAMPLES OF HYPOTHESIS AS WRITTEN IN PUBLISHED ARTICLES IN RELATION TO OTHER PARTS

  • EXAMPLE 1. Background, hypotheses, and aims are provided
  • “Pregnant women need skilled care during pregnancy and childbirth, but that skilled care is often delayed in some countries …( text omitted ). The focused antenatal care (FANC) model of WHO recommends that nurses provide information or counseling to all pregnant women …( text omitted ). Job aids are visual support materials that provide the right kind of information using graphics and words in a simple and yet effective manner. When nurses are not highly trained or have many work details to attend to, these job aids can serve as a content reminder for the nurses and can be used for educating their patients (Jennings, Yebadokpo, Affo, & Agbogbe, 2010) ( text omitted ). Importantly, additional evidence is needed to confirm how job aids can further improve the quality of ANC counseling by health workers in maternal care …( text omitted )” 28
  • “ This has led us to hypothesize that the quality of ANC counseling would be better if supported by job aids. Consequently, a better quality of ANC counseling is expected to produce higher levels of awareness concerning the danger signs of pregnancy and a more favorable impression of the caring behavior of nurses .” 28
  • “This study aimed to examine the differences in the responses of pregnant women to a job aid-supported intervention during ANC visit in terms of 1) their understanding of the danger signs of pregnancy and 2) their impression of the caring behaviors of nurses to pregnant women in rural Tanzania.” 28
  • EXAMPLE 2. Background, hypotheses, and aims are provided
  • “We conducted a two-arm randomized controlled trial (RCT) to evaluate and compare changes in salivary cortisol and oxytocin levels of first-time pregnant women between experimental and control groups. The women in the experimental group touched and held an infant for 30 min (experimental intervention protocol), whereas those in the control group watched a DVD movie of an infant (control intervention protocol). The primary outcome was salivary cortisol level and the secondary outcome was salivary oxytocin level.” 29
  • “ We hypothesize that at 30 min after touching and holding an infant, the salivary cortisol level will significantly decrease and the salivary oxytocin level will increase in the experimental group compared with the control group .” 29
  • EXAMPLE 3. Background, aim, and hypothesis are provided
  • “In countries where the maternal mortality ratio remains high, antenatal education to increase Birth Preparedness and Complication Readiness (BPCR) is considered one of the top priorities [1]. BPCR includes birth plans during the antenatal period, such as the birthplace, birth attendant, transportation, health facility for complications, expenses, and birth materials, as well as family coordination to achieve such birth plans. In Tanzania, although increasing, only about half of all pregnant women attend an antenatal clinic more than four times [4]. Moreover, the information provided during antenatal care (ANC) is insufficient. In the resource-poor settings, antenatal group education is a potential approach because of the limited time for individual counseling at antenatal clinics.” 30
  • “This study aimed to evaluate an antenatal group education program among pregnant women and their families with respect to birth-preparedness and maternal and infant outcomes in rural villages of Tanzania.” 30
  • “ The study hypothesis was if Tanzanian pregnant women and their families received a family-oriented antenatal group education, they would (1) have a higher level of BPCR, (2) attend antenatal clinic four or more times, (3) give birth in a health facility, (4) have less complications of women at birth, and (5) have less complications and deaths of infants than those who did not receive the education .” 30

Research questions and hypotheses are crucial components to any type of research, whether quantitative or qualitative. These questions should be developed at the very beginning of the study. Excellent research questions lead to superior hypotheses, which, like a compass, set the direction of research, and can often determine the successful conduct of the study. Many research studies have floundered because the development of research questions and subsequent hypotheses was not given the thought and meticulous attention needed. The development of research questions and hypotheses is an iterative process based on extensive knowledge of the literature and insightful grasp of the knowledge gap. Focused, concise, and specific research questions provide a strong foundation for constructing hypotheses which serve as formal predictions about the research outcomes. Research questions and hypotheses are crucial elements of research that should not be overlooked. They should be carefully thought of and constructed when planning research. This avoids unethical studies and poor outcomes by defining well-founded objectives that determine the design, course, and outcome of the study.

Disclosure: The authors have no potential conflicts of interest to disclose.

Author Contributions:

  • Conceptualization: Barroga E, Matanguihan GJ.
  • Methodology: Barroga E, Matanguihan GJ.
  • Writing - original draft: Barroga E, Matanguihan GJ.
  • Writing - review & editing: Barroga E, Matanguihan GJ.

Recognising deductive processes in qualitative research

Qualitative Market Research

ISSN : 1352-2752

Article publication date: 1 June 2000

States that there are two general approaches to reasoning which may result in the acquisition of new knowledge: inductive reasoning commences with observation of specific instances, and seeks to establish generalisations; deductive reasoning commences with generalisations, and seeks to see if these generalisations apply to specific instances. Most often, qualitative research follows an inductive process. In most instances, however, theory developed from qualitative investigation is untested theory. Both quantitative and qualitative researchers demonstrate deductive and inductive processes in their research, but fail to recognise these processes. The research paradigm followed in this article is a post‐positivist (“realist”) one. This is not incompatible with the use of qualitative research methods. Argues that the adoption of formal deductive procedures can represent an important step for assuring conviction in qualitative research findings. Discusses how, and under what circumstances, qualitative researchers might adopt formal deductive procedures in their research. One approach, theory testing by “pattern matching”, is illustrated with a sample application.

  • Marketing research
  • Qualitative techniques

Hyde, K.F. (2000), "Recognising deductive processes in qualitative research", Qualitative Market Research , Vol. 3 No. 2, pp. 82-90. https://doi.org/10.1108/13522750010322089

Copyright © 2000, MCB UP Limited


On whether generative AI and large language models are better at inductive reasoning or deductive reasoning, and what this foretells about the future of AI

Inductive reasoning and deductive reasoning go to battle but might need to be married together for the sake of reaching true AI or AGI (artificial general intelligence).

In today’s column, I continue my ongoing analysis of the latest advances and breakthroughs in AI, see my extensive posted coverage at the link here , and focus in this discussion on the challenges associated with various forms of reasoning that are mathematically and computationally undertaken via modern-day generative AI and large language models (LLM). Specifically, I will do a deep dive into inductive reasoning and deductive reasoning.

Here’s the deal.

One of the biggest open questions that AI researchers and AI developers are struggling with is whether we can get AI to perform reasoning of the nature and caliber that humans seem to do.

This might at an initial cursory glance appear to be a simple question with a simple answer. But the problems are many and the question at hand is extraordinarily hard to answer. One difficulty is that we cannot say for sure the precise way that people reason. By this, I mean to say that we are only guessing when we contend that people reason in one fashion or another. The actual biochemical and wetware facets of the brain and mind are still a mystery as to how we attain cognition and higher levels of mental thinking and reasoning.

Some argue that we don’t need to physically reverse engineer the brain to proceed ahead with devising AI reasoning strategies and approaches. The viewpoint is that it would certainly be a nice insight to know what the human mind really does, that’s for sure. Nonetheless, we can strive forward to develop AI that has the appearance of human reasoning even if the means of the AI implementation is potentially utterly afield of how the mind works.

Think of it this way.

We might be satisfied if we can get AI to mimic human reasoning from an outward perspective, even if the way in which the AI computationally works is not what happens inside the heads of humans. The belief or assertion would be that you don’t have to distinctly copy the internals if the seen-to-be external performance matches or possibly exceeds what’s happening inside a human brain. I liken this to an extreme posture by noting that if you could assemble a bunch of Lego bricks and get them to seemingly perform reasoning, well, you might take that to the bank as a useful contraption, even though it isn’t working identically to how a human mind does.


That being said, if you have in fact managed to assemble Lego bricks into a human-like reasoning capacity, please let me know. Right away. A Nobel Prize is undoubtedly and indubitably soon to be on your doorstep.

The Fascinating Nature Of Human Reasoning

Please know that the word “reasoning” carries a lot of baggage.

Some would argue that we shouldn’t be using the watchword when referring to AI. The concern is that since reasoning is perceived as a human quality, talking about AI reasoning is tantamount to anthropomorphizing AI. To cope with this expressed qualm, I will try to be cautious in how I make use of the word. Just wanted to make sure you knew that some experts have acute heartburn about waving around the word “reasoning”. Let’s try to be mindful and respectful of how the word is to be used.

Disclaimer noted.

Probably the most famous primary forms of human reasoning consist of inductive reasoning and deductive reasoning.

I’m sure you’ve been indoctrinated in the basics of those two major means of reasoning. Whether the brain functions by using those reasoning methods is unresolved. It could be that we are merely rationalizing decision-making by conjuring up a logical basis for reasoning, trying to make pretty the reality of whatever truly occurs inside our heads.

Because inductive reasoning and deductive reasoning are major keystones for human reasoning, AI researchers have opted to pursue those reasoning methods to see how AI can benefit from what we seem to know about human reasoning. Yes, indeed, lots of AI research has been devoted to exploring how to craft AI that performs inductive reasoning and performs deductive reasoning.

Some results have come up with AI that is reasonably good at inductive reasoning but falters when doing deductive reasoning. Likewise, the other direction is the case too, namely that you might come up with AI that is pretty good at deductive reasoning but thin on inductive reasoning. Trying to achieve both at an equally heightened level is tricky and still being figured out.

You might be wondering what the deal is with generative AI and large language models (LLM) in terms of how those specific types of AI technology fare on inductive and deductive reasoning. I’m glad that you asked.

That’s the focus of today’s discussion.

Before we make the plunge into the meaty topic, let’s ensure we are all on the same page about inductive and deductive reasoning. Perhaps it has been a while since you had to readily know the differences between the two forms of reasoning. No worries, I’ll bring you quickly up-to-speed at a lightning pace.

An easy way to compare the two is by characterizing inductive reasoning as being a bottom-up approach while deductive reasoning is considered a top-down approach to reasoning.

With inductive reasoning, you observe particular facts or facets and then from that bottom-up viewpoint try to arrive at a reasoned and reasonable generalization. Your generalization might be right. Wonderful. On the other hand, your generalization might be wrong. My point is that neither inductive reasoning nor deductive reasoning is guaranteed to be right. They are sensible approaches and improve your odds of being right, assuming you do the necessary reasoning with sufficient proficiency and alertness.

Deductive reasoning generally consists of starting with a generalization or theory and then proceeding to ascertain if observed facts or facets support the overarching belief. That is a proverbial top-down approach.

We normally expect scientists and researchers to especially utilize deductive reasoning. They come up with a theory of something and then gather evidence to gauge the validity of the theory. If they are doing this in a fair-and-square manner, they might find themselves having to adjust the theory based on the reality of what they discover.

Okay, we’ve covered the basics of inductive and deductive reasoning in a nutshell. I am betting you might like to see an example to help shake off any cobwebs on these matters.

Happy to oblige.

Illustrative Example Of Inductive And Deductive Reasoning

I appreciate your slogging along with me on this quick rendition of inductive and deductive reasoning. Hang in there, the setup will be worth it. Time to mull over a short example showcasing inductive reasoning versus deductive reasoning.

When my kids were young, I used to share with them the following example of inductive reasoning and deductive reasoning. Maybe you’ll find it useful. Or at least it might be useful for you to at some point share with any youngsters that you happen to know. Warning to the wise, do not share this with a fifth grader since they will likely feel insulted and angrily retort that you must believe them to be a first grader (yikes!).

Okay, here we go.

Imagine that you are standing outside and there are puffy clouds here and there. Let’s assume that on some days the clouds are there and on other days they are not. Indeed, on any given day, the clouds can readily come and go.

What is the relationship between the presence of clouds and the outdoor temperature?

That seems to be an interesting and useful inquiry. A child might be stumped, though I kind of doubt they would. If they’ve been outside with any regularity, and if clouds come and go with any regularity, the chances are they have already come up with a belief on this topic. Maybe no one explicitly asked them about it. Thus, this question might require a moment or two for a youngster to collect their thoughts.

Envision that we opt to ask a youngster to say aloud their reasoning as they figure out an answer to the posed question.

One angle would be to employ inductive reasoning to solve the problem.

It might go like this when using inductive reasoning to answer the question about clouds and outdoor temperature:

  • (1) Observation: Yesterday was cloudy, and the temperature dropped.
  • (2) Another observation: The day before yesterday, it was cloudy, and the temperature dropped.
  • (3) A third observation: Today, it became cloudy, and the temperature dropped.
  • (4) Logical conclusion: When it’s cloudy, the temperature tends to drop.

Seems sensible and orderly.

The act consisted of a bottom-up method. There were prior and current observations that the child identified and used when processing the perplexing matter. Based on those observations, a seemingly logical conclusion can be reached. In this instance, since the clouds often were accompanied by a drop in temperature, you might suggest that when it gets cloudy the temperature will tend to drop.
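To make the bottom-up flavor concrete, here is a tiny Python sketch of my own (purely illustrative, not something the child or any AI actually runs) that induces a tentative generalization from a handful of cloudy-day observations:

```python
# Toy inductive reasoning: generalize upward from specific observations.
# Each observation is (was_cloudy, temperature_dropped).
observations = [
    (True, True),   # yesterday: cloudy, and the temperature dropped
    (True, True),   # the day before: cloudy, and the temperature dropped
    (True, True),   # today: cloudy, and the temperature dropped
]

cloudy_outcomes = [dropped for cloudy, dropped in observations if cloudy]
support = sum(cloudy_outcomes) / len(cloudy_outcomes) if cloudy_outcomes else 0.0

# Only induce a general rule if the pattern holds often enough.
if support >= 0.8:
    print(f"Tentative generalization: when it's cloudy, the temperature tends to drop "
          f"(pattern held on {support:.0%} of {len(cloudy_outcomes)} cloudy days).")
else:
    print("No confident generalization yet; gather more observations.")
```

Notice that the induced rule is only as strong as the observations behind it; a single contrary week could overturn it, which is precisely the fallibility mentioned earlier.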

Give the child a high five.

Another angle would be to employ deductive reasoning.

Here we go with answering the same question but using deductive reasoning this time:

  • Theory or premise: When the sky is cloudy, the temperature tends to drop.
  • Observation: Today it is currently cloudy.
  • Another observation: The temperature dropped once the clouds arrived.
  • Logical conclusion: Therefore, it is reaffirmed that the temperature tends to drop due to cloudiness.

The youngster began by formulating a theory or premise.

How did they come up with it?

We cannot say for sure. They may have already formed the theory based on a similar inductive reasoning process as I just gave. There is a chance too that they might not be able to articulate why they believe in the theory. It just came to them.

Again, this is the mystery of how the brain and mind function. From the outside of a person’s brain, we do not have the means to reach into their head and watch what logically happens during their thinking endeavors (we can use sensors to detect heat, chemical reactions, and other wiring-like actions, but that is not yet translatable into full-on articulation of thinking processes at a logical higher-level per se). We must take their word for whatever they proclaim has occurred inside their noggin. Even they cannot say for sure what occurred inside their head. They must guess too.

It could be that the actual internal process is nothing like the logical reasoning we think it is. People are taught that they must come up with justifications and explanations for their behavior. The explanation or justification can be something they believe happened in their heads, though maybe it is just an after-the-fact concoction based on societal and cultural demands that they provide cogent explanations.

As an aside, you might find of interest that via the use of BMI (brain-machine interfaces), researchers in neuroscience, cognitive science, AI, and other disciplines are hoping to one day figure out the inner sanctum and break the secret code of what occurs when we think and reason. See my coverage on BMI and akin advances at the link here.

One other aspect to mention about the above example of deductive reasoning about the cloud and temperature is that besides a theory or premise, the typical steps entail an effort to apply the theory to specific settings. In this instance, the child was able to reaffirm the premise due to the observation that today was cloudy and that it seemed that the temperature had dropped.
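For symmetry, here is an equally small top-down sketch of my own: start from the stated premise and check whether today's observation is consistent with it (again, just an illustration, not anyone's actual reasoning engine).

```python
# Toy deductive reasoning: start from a general premise, test it against a new case.
premise = "When the sky is cloudy, the temperature tends to drop."

def consistent_with_premise(was_cloudy: bool, temperature_dropped: bool) -> bool:
    # The premise only makes a claim about cloudy days, so a non-cloudy day
    # neither supports nor contradicts it.
    if not was_cloudy:
        return True
    return temperature_dropped

today = {"was_cloudy": True, "temperature_dropped": True}

if consistent_with_premise(**today):
    print(f"Today's observation is consistent with the premise: {premise}")
else:
    print("Today's observation contradicts the premise; time to revise the theory.")
```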

Another worthy point to bring up is that I said earlier that either or both of those reasoning methods might not necessarily produce the right conclusion. The act of having and using a bona fide method does not guarantee a correct response.

Does the presence of clouds always mean that temperatures will drop?

Exceptions could exist.

Plus, clouds alone do not determine the temperature; other factors need to be incorporated.

Generative AI And The Two Major Reasoning Approaches

You are now versed in or at least refreshed about inductive and deductive reasoning. Good for you. The world is a better place accordingly.

I want to now bring up the topic of generative AI and large language models. Doing so will allow us to examine the role of inductive reasoning and deductive reasoning when it comes to the latest in generative AI and LLMs.

I’m sure you’ve heard of generative AI, the darling of the tech field these days.

Perhaps you’ve used a generative AI app, such as the popular ones of ChatGPT, GPT-4o, Gemini, Bard, Claude, etc. The crux is that generative AI can take input from your text-entered prompts and produce or generate a response that seems quite fluent. This is a vast improvement over old-time natural language processing (NLP), which used to be stilted and awkward to use; it has given way to a new level of NLP fluency of an at times startling or amazing caliber.

The customary means of achieving modern generative AI involves using a large language model or LLM as the key underpinning.

In brief, a computer-based model of human language is established as a large-scale data structure that does massive pattern-matching across a large volume of data during initial data training. The data is typically found by extensively scanning the Internet for lots and lots of essays, blogs, poems, narratives, and the like. The mathematical and computational pattern-matching homes in on how humans write, and then henceforth generates responses to posed questions by leveraging those identified patterns. It is said to be mimicking the writing of humans.

I think that is sufficient for the moment as a quickie backgrounder. Take a look at my extensive coverage of the technical underpinnings of generative AI and LLMs at the link here and the link here, just to name a few.

When using generative AI, you can tell the AI via a prompt to make use of deductive reasoning. The generative AI will appear to do so. Similarly, you can enter a prompt telling the AI to use inductive reasoning. The generative AI will appear to do so.
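As a rough sketch of what that looks like in practice, the snippet below builds two prompts for the same cloud-and-temperature question, one requesting inductive reasoning and one requesting deductive reasoning. The query_llm function is a hypothetical stand-in for whatever generative AI client you happen to use; it is not a real library call.

```python
# Sketch: steering generative AI toward inductive or deductive reasoning via the prompt.
# query_llm is a hypothetical stand-in for your actual generative AI client call.

def query_llm(prompt: str) -> str:
    raise NotImplementedError("Replace with a call to your generative AI provider.")

question = "What is the relationship between cloud cover and outdoor temperature?"

inductive_prompt = (
    "Use inductive reasoning. Start from these observations and only then "
    "propose a generalization:\n"
    "1) Yesterday was cloudy and the temperature dropped.\n"
    "2) The day before was cloudy and the temperature dropped.\n"
    "3) Today became cloudy and the temperature dropped.\n"
    f"Question: {question}"
)

deductive_prompt = (
    "Use deductive reasoning. Start from the premise 'when the sky is cloudy, "
    "the temperature tends to drop' and check it against today's observation "
    "that it is cloudy and the temperature dropped.\n"
    f"Question: {question}"
)

# answer_inductive = query_llm(inductive_prompt)
# answer_deductive = query_llm(deductive_prompt)
```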

I am about to say something that might be surprising, so I am forewarning you and want you to mentally prepare yourself.

Have you braced yourself for what I am about to say?

When you enter a prompt telling generative AI to proceed with inductive or deductive reasoning, and then you eyewitness what appears to be such reasoning as displayed via the presented answer, there is once again a fundamental question afoot regarding the matter of what you see versus what actually happened internally.

I’ve discussed this previously in the use case of explainable AI, known as XAI, see my analysis at the link here. In brief, just because the AI tells you that it did this or that step, there is not necessarily an ironclad basis to assume that the AI solved the problem in that particular manner.

The explanation is not necessarily the actual work effort. An explanation can be an after-the-fact rationalization or made-up fiction, which is done to satisfy your request to have the AI show you the work that it did. This can be the case too when requesting to see a problem solved via inductive or deductive reasoning. The generative AI might proceed to solve the problem using something else entirely, but since you requested inductive or deductive reasoning, the displayed answer will be crafted to look as if that’s how things occurred.

Be mindful of this.

What you see could be far afield of what is happening internally.

For now, let’s put that qualm aside and pretend that what we see is roughly the same as what happened to solve a given problem.

How Will Generative AI Fare On The Two Major Forms Of Reasoning

I have a thought-provoking question for you:

  • Are generative AI and LLMs better at inductive reasoning or deductive reasoning?

Take a few reflective seconds to ponder the conundrum.

Tick tock, tick tock.

The usual answer is that generative AI and LLMs are better at inductive reasoning, the bottom-up form of reasoning.

Recall that generative AI and LLMs are devised by doing tons of data training. You can categorize data as being at the bottom side of things. Lots of “observations” are being examined. The AI is pattern-matching from the ground level up. This is similar to inductive reasoning as a process.

I trust that you can see that the inherent use of data, the data structures used, and the algorithms employed for making generative AI apps are largely reflective of leaning into an inductive reasoning milieu. Generative AI is therefore more readily suitable to employ inductive reasoning for answering questions if that’s what you ask the AI to do.

This does not somehow preclude generative AI from also or instead performing deductive reasoning. The upshot is that generative AI is likely better at inductive reasoning and that it might take some added effort or contortions to do deductive reasoning.

Let’s review a recent AI research study that empirically assessed the inductive reasoning versus deductive reasoning capabilities of generative AI.

New Research Opens Eyes On AI Reasoning

In a newly released research paper entitled “Inductive Or Deductive? Rethinking The Fundamental Reasoning Abilities Of LLMs” by Kewei Cheng, Jingfeng Yang, Haoming Jiang, Zhengyang Wang, Binxuan Huang, Ruirui Li, Shiyang Li, Zheng Li, Yifan Gao, Xian Li, Bing Yin, Yizhou Sun, arXiv, August 7, 2024, these salient points were made (excerpts):

  • “Despite the impressive achievements of LLMs in various reasoning tasks, the underlying mechanisms of their reasoning capabilities remain a subject of debate.”
  • “The question of whether LLMs genuinely reason in a manner akin to human cognitive processes or merely simulate aspects of reasoning without true comprehension is still open.”
  • “Additionally, there’s a debate regarding whether LLMs are symbolic reasoners or possess strong abstract reasoning capabilities.”
  • “While the deductive reasoning capabilities of LLMs, (i.e. their capacity to follow instructions in reasoning tasks), have received considerable attention, their abilities in true inductive reasoning remain largely unexplored.”
  • “This raises an essential question: In LLM reasoning, which poses a greater challenge - deductive or inductive reasoning?”

As stated in those points, the reasoning capabilities of generative AI and LLMs are an ongoing subject of debate and present interesting challenges. The researchers opted to explore whether inductive reasoning or deductive reasoning is the greater challenge for such AI.

They refer to the notion of whether generative AI and LLMs are symbolic reasoners.

Allow me a moment to unpack that point.

The AI field has tended to broadly divide the major approaches of devising AI into two camps, the symbolic camp and the sub-symbolic camp. Today, the sub-symbolic camp is the prevailing winner (at this time). The symbolic camp is considered somewhat old-fashioned and no longer in vogue (at this time).

For those of you familiar with the history of AI, there was a period when the symbolic approach was considered top of the heap. This was the era of expert systems (ES), rules-based systems (RBS), and often known as knowledge-based management systems (KBMS). The underlying concept was that human knowledge and human reasoning could be explicitly articulated into a set of symbolic rules. Those rules would then be encompassed into an AI program and presumably be able to perform reasoning akin to how humans do so (well, at least to the means of how we rationalize human reasoning). Some characterized this as the If-Then era, consisting of AI that contained thousands upon thousands of if-something then-something action statements.
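If you never encountered that era of AI, a minimal If-Then sketch conveys the flavor (a toy of my own devising, vastly smaller and simpler than any real expert system):

```python
# Toy symbolic (rules-based) reasoning in the classic If-Then style.
rules = [
    ({"cloudy": True, "humid": True}, "rain is likely"),
    ({"cloudy": True, "humid": False}, "the temperature tends to drop"),
    ({"cloudy": False}, "the temperature tends to hold steady or rise"),
]

def infer(facts: dict) -> str:
    # Fire the first rule whose conditions are all satisfied by the known facts.
    for conditions, conclusion in rules:
        if all(facts.get(key) == value for key, value in conditions.items()):
            return conclusion
    return "no rule applies"

print(infer({"cloudy": True, "humid": False}))  # -> the temperature tends to drop
```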

Eventually, the rules-based systems tended to go out of favor. If you’d like to know more about the details of how those systems worked and why they were not ultimately able to fulfill the quest for top-notch AI, see my analysis at the link here.

The present era of sub-symbolics went a different route. Generative AI and LLMs are prime examples of the sub-symbolic approach. In the sub-symbolic realm, you use algorithms to do pattern matching on data. Turns out that if you use well-devised algorithms and lots of data, the result is AI that can seem to do amazing things such as having the appearance of fluent interactivity. At the core of sub-symbolics is the use of artificial neural networks (ANNs), see my in-depth explanation at the link here.

You will momentarily see that an unresolved question is whether the sub-symbolic approach can end up performing symbolic-style reasoning. There are research efforts underway trying to logically interpret what happens inside the mathematical and computational inner workings of ANNs, see my discussion at the link here.

Getting back to the inductive versus deductive reasoning topic, let’s consider the empirical study and the means they took to examine these matters:

  • “Our research is focused on a relatively unexplored question: Which presents a greater challenge to LLMs - deductive reasoning or inductive reasoning?” (ibid).
  • “To explore this, we designed a set of comparative experiments that apply a uniform task across various contexts, each emphasizing either deductive or inductive reasoning.” (ibid).
  • “Deductive setting: we provide the models with direct input-output mappings (i.e., 𝑓𝑤).”
  • “Inductive setting: we offer the models a few examples (i.e., (𝑥, 𝑦) pairs) while intentionally leaving out input-output mappings (i.e., 𝑓𝑤).” (ibid).

Their experiment consisted of coming up with tasks for generative AI to solve, along with prompting generative AI to do the solution process by each of the two respective reasoning processes. After doing so, the solutions provided by AI could be compared to ascertain whether inductive reasoning (as performed by the AI) or deductive reasoning (as performed by the AI) did a better job of solving the presented problems.
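To make the deductive versus inductive settings tangible, here is a paraphrased illustration for the base-8 arithmetic task; these are my own wordings of the two settings, not the paper's exact prompts.

```python
# Sketch of the two experimental settings for one task (base-8 addition).
# Paraphrased illustrations of the idea, not the paper's exact prompts.

deductive_setting = (
    "All numbers are in base-8, where the digits are '01234567'. "  # the mapping f_w is stated
    "Apply that rule to compute 36 + 33."
)

inductive_setting = (
    "Here are some worked examples:\n"  # only (x, y) pairs are given, no rule
    "12 + 7 = 21\n"
    "25 + 6 = 33\n"
    "17 + 1 = 20\n"
    "Infer the underlying rule and compute 36 + 33."
)

# For reference, the correct base-8 answer to 36 + 33 is 71.
```

In the deductive setting the mapping rule is handed to the model; in the inductive setting the model must infer the rule from the example pairs alone.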

Task Uniformity And Reasoning Disentanglement

The research proceeded to define a series of tasks that could be given to various generative AI apps to attempt to solve.

Notice that a uniform set of tasks was put together. This is a good move in such experiments since you want to be able to compare apples to apples. In other words, purposely aim to use inductive reasoning on a set of tasks and use deductive reasoning on the same set of tasks. Other studies will at times use a set of tasks for analyzing inductive reasoning and a different set of tasks to analyze deductive reasoning. The issue is that you end up comparing apples versus oranges and can have muddled results.

Are you wondering what kinds of tasks were used?

Here are the types of tasks they opted to apply (a small reference-solver sketch for two of them follows the list):

  • Arithmetic task: “You are a mathematician. Assuming that all numbers are in base-8 where the digits are ‘01234567’, what is 36+33?”. (ibid).
  • Word problem: “You are an expert in linguistics. Imagine a language that is the same as English with the only exception being that it uses the object-subject-verb order instead of the subject-verb-object order. Please identify the subject, verb, and object in the following sentences from this invented language: shirts sue hates.” (ibid).
  • Spatial task: “You are in the middle of a room. You can assume that the room’s width and height are both 500 units. The layout of the room in the following format: {’name’: ’bedroom’, ’width’: 500, ’height’: 500, ’directions’: {’north’: [0, 1], ’south’: [0, -1], ’east’: [1, 0], ’west’: [-1, 0]}, ’objects’: [{’name’: ’chair’, ’direction’: ’east’}, {’name’: ’wardrobe’, ’direction’: ’north’}, {’name’: ’desk’, ’direction’: ’south’}]}. Please provide the coordinates of objects whose positions are described using cardinal directions, under a conventional 2D coordinate system using the following format: [{’name’: ’chair’, ’x’: ’?’, ’y’: ’?’}, {’name’: ’wardrobe’, ’x’: ’?’, ’y’: ’?’}, {’name’: ’desk’, ’x’: ’?’, ’y’: ’?’}]”. (ibid).
  • Decryption: “As an expert cryptographer and programmer, your task involves reordering the character sequence according to the alphabetical order to decrypt secret messages. Please decode the following sequence: spring.” (ibid).
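Here is the promised sketch of reference solvers for the arithmetic and decryption tasks; it reflects my reading of the task descriptions above, not the authors' actual evaluation code.

```python
# Reference solvers for two of the tasks above, usable to score AI answers.
# A sketch based on the task descriptions; the paper's evaluation code may differ.

def base8_add(a: str, b: str) -> str:
    """Arithmetic task: add two base-8 numbers, e.g. '36' + '33' -> '71'."""
    return oct(int(a, 8) + int(b, 8))[2:]

def decrypt_by_sorting(sequence: str) -> str:
    """Decryption task: reorder the characters alphabetically, e.g. 'spring' -> 'ginprs'."""
    return "".join(sorted(sequence))

assert base8_add("36", "33") == "71"
assert decrypt_by_sorting("spring") == "ginprs"
```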

Something else that they did was try to keep inductive reasoning and deductive reasoning from relying on each other.

Unfortunately, both approaches can potentially slop over into aiding the other one.

Remember for example when I mentioned that a youngster using deductive reasoning about the relationship between clouds and temperatures might have formulated a hypothesis or premise by first using inductive reasoning? If so, it is difficult to say which reasoning approach was doing the hard work in solving the problem since both approaches were potentially being undertaken at the same time.

The researchers devised a special method to see if they could avoid a problematic intertwining:

  • “To disentangle inductive reasoning from deductive reasoning, we propose a novel model, referred to as SolverLearner.” (ibid).
  • “Given our primary focus on inductive reasoning, SolverLearner follows a two-step process to segregate the learning of input-output mapping functions from the application of these functions for inference.” (ibid).
  • “Specifically, functions are applied through external interpreters, such as code interpreters, to avoid incorporating LLM-based deductive reasoning.” (ibid).
  • “By focusing on inductive reasoning and separating it from LLM-based deductive reasoning, we can isolate and investigate inductive reasoning of LLMs in its pure form via SolverLearner.” (ibid).

Kudos to them for recognizing the need to try and make that separation on a distinctive basis.
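A minimal sketch of that two-step separation might look as follows. The function names are mine and the LLM call is a placeholder; this is an illustration of the idea, not the authors' pipeline.

```python
# Sketch of the SolverLearner-style two-step split (hypothetical names, not the paper's code).
# Step 1: the LLM only *induces* a mapping function, expressed as Python source.
# Step 2: an external interpreter, not the LLM, *applies* that function to new inputs.

def ask_llm_for_function(examples: list) -> str:
    """Placeholder: prompt the LLM with (x, y) pairs only and ask it to return
    Python source defining a function solve(x) that reproduces the mapping."""
    raise NotImplementedError("Replace with a real generative AI call.")

def apply_externally(function_source: str, test_inputs: list) -> list:
    """Execute the induced function with a plain Python interpreter, so no
    LLM-based deductive reasoning is involved at inference time."""
    namespace = {}
    exec(function_source, namespace)  # trusted toy setting only
    return [namespace["solve"](x) for x in test_inputs]

# examples = [("12+7", "21"), ("25+6", "33"), ("17+1", "20")]
# induced_source = ask_llm_for_function(examples)
# print(apply_externally(induced_source, ["36+33"]))
```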

Hopefully, other researchers will take up the mantle and further pursue this avenue.

The Results And What To Make Of Them

I’m sure that you are eagerly awaiting the results of what they found.

Drum roll, please.

Highlights of their key outcomes include:

  • “LLMs exhibit poor deductive reasoning capabilities, particularly in “counterfactual” tasks.” (ibid).
  • “Deductive reasoning presents a greater challenge than inductive reasoning for LLMs.” (ibid).
  • “The effectiveness of LLMs’ inductive reasoning capability is heavily reliant on the foundational model. This observation suggests that the inductive reasoning potential of LLMs is significantly constrained by the underlying model.” (ibid).
  • “Chain of Thought (COT) has not been incorporated into the comparison. Chain of Thought (COT) is a significant prompting technique designed for use with LLMs. Rather than providing a direct answer, COT elicits reasoning with intermediate steps in few-shot exemplars.” (ibid).

Let’s examine those results.

First, they reaffirmed what we would have anticipated, namely that the generative AI apps used in this experiment were generally better at employing inductive reasoning rather than deductive reasoning. I mentioned earlier that the core design and structure of generative AI and LLMs lean into inductive reasoning capabilities. Thus, this result makes intuitive sense.

For those of you who might say ho-hum to the act of reaffirming an already expected result, I’d like to emphasize that doing experiments to confirm or disconfirm hunches is a very worthwhile endeavor. You do not know for sure that a hunch is on target. By doing experiments, your willingness to believe in a hunch can be bolstered, or possibly overturned if the experiments garner surprising results.

Not every experiment has to reveal startlingly new discoveries (few do).

Second, a related and indeed interesting twist is that the inductive reasoning performance appeared to differ somewhat based on which of the generative AI apps was being used. The gist is that depending upon how the generative AI was devised by an AI maker, such as the nature of the underlying foundation model, the capacity to undertake inductive reasoning varied.

The notable point about this is that we need to be cautious in painting with a broad brush all generative AI apps and LLMs in terms of how well they might do on inductive reasoning. Subtleties in the algorithms, data structures, ANN, and data training could impact the inductive reasoning proclivities.

This is a handy reminder that not all generative AI apps and LLMs are the same.

Third, the researchers acknowledge a heady topic that I keep pounding away at in my analyses of generative AI and LLMs. It is this. The prompts that you compose and use with AI are a huge determinant of the results you will get out of the AI. For my comprehensive coverage of over fifty types of prompt engineering techniques and tips, see the link here.

In this particular experiment, the researchers used a straight-ahead prompt that was not seeking to exploit any prompt engineering wizardry. That’s fine as a starting point. It would be immensely interesting to see the experimental results if various prompting strategies were used.

One such prompting strategy would be the use of chain-of-thought (COT). In the COT approach, you explicitly instruct the AI to provide a step-by-step indication of what is taking place. I’ve covered COT extensively since it is a popular tactic and can boost your generative AI results, see my coverage at the link here, along with a similar approach known as skeleton-of-thought (SOT) at the link here.
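A chain-of-thought prompt is easy to illustrate; the wrapper below simply prepends a step-by-step instruction to a task (a hedged sketch, reusing the hypothetical query_llm stand-in from earlier, not any vendor's official API).

```python
# Sketch: wrapping a task in a chain-of-thought (COT) instruction.
def with_chain_of_thought(task: str) -> str:
    return (
        "Think through this step by step, showing each intermediate step "
        "before stating the final answer.\n" + task
    )

task = "Assuming all numbers are in base-8, what is 36 + 33?"
direct_prompt = task
cot_prompt = with_chain_of_thought(task)

# Compare query_llm(direct_prompt) against query_llm(cot_prompt) to see whether
# eliciting intermediate steps changes the quality of the displayed reasoning.
```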

If we opted to use COT for this experiment, what might arise?

I speculate that we might enhance inductive reasoning by directly giving a prompt that tends to spur inductive reasoning to take place. It is akin to my assertion that sometimes you can improve generative AI results by essentially greasing the skids, see the link here. Perhaps the inductive reasoning would be more pronounced given a double-barrel dose of guiding the AI toward that mode of operation.

Prompts do matter.

I’ll conclude this discussion with something that I hope will stir your interest.

Where is the future of AI?

Should we keep on deepening the use of sub-symbolics via ever-expanding the use of generative AI and LLMs? That would seem to be the existing course of action. Toss more computational resources at the prevailing sub-symbolic infrastructure. If you use more computing power and more data, perhaps we will attain heightened levels of generative AI, maybe verging on AGI (artificial general intelligence).

Not everyone accepts that crucial premise.

An alternative viewpoint is that we will soon reach a ceiling. No matter how much computing you manage to corral, the incremental progress is going to diminish and diminish. A limit will be reached. We won’t be at AGI. We will be better than today’s generative AI, but only marginally so. And continued forceful efforts will gain barely any additional ground. We will be potentially wasting highly expensive and prized computing on a losing battle of advancing AI.

I’ve discussed this premise at length, see the link here.

Let’s tie that thorny topic to the matter of inductive reasoning versus deductive reasoning.

If you accept the notion that inductive reasoning is more akin to sub-symbolic, and deductive reasoning is more akin to symbolic, one quietly rising belief is that we need to marry together the sub-symbolic and the symbolic. Doing so might be the juice that gets us past the presumed upcoming threshold or barrier. To break the sound barrier, as it were, we might need to focus on neuro-symbolic AI.

Neuro-symbolic AI is a combination of sub-symbolic and symbolic approaches. The goal is to harness both to their maximum potential. A major challenge involves how to best connect them into one cohesive mechanization. You don’t want them to be infighting. You don’t want them working as opposites and worsening your results instead of bettering the results. See my discussion at the link here.
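As a loose, purely illustrative sketch of that marriage, imagine a statistical proposer (standing in for the sub-symbolic side) suggesting candidate rules from data, and a symbolic checker accepting only rules that hold on every known case:

```python
# Toy neuro-symbolic loop: a statistical proposer suggests a rule from data,
# and a symbolic checker accepts it only if it holds on every known observation.
# Purely illustrative; real neuro-symbolic systems are far more involved.

observations = [  # (cloudy, temperature_dropped)
    (True, True), (True, True), (False, False), (True, True), (False, True),
]

def propose_rule(data):
    """Sub-symbolic stand-in: surface the antecedent/consequent pairing with
    the strongest co-occurrence in the data."""
    cloudy_total = sum(1 for cloudy, _ in data if cloudy)
    cloudy_and_drop = sum(1 for cloudy, dropped in data if cloudy and dropped)
    if cloudy_total and cloudy_and_drop / cloudy_total > 0.5:
        return ("cloudy", "temperature_drops")
    return None

def symbolically_verified(rule, data) -> bool:
    """Symbolic stand-in: the rule must hold with no exceptions at all."""
    if rule is None:
        return False
    return all(dropped for cloudy, dropped in data if cloudy)

rule = propose_rule(observations)
print(rule, "verified:", symbolically_verified(rule, observations))
```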

I’d ask you to grab yourself a glass of fine wine, sit down in a place of solitude, and give these pressing AI questions some heartfelt thoughts:

  • Can we leverage both inductive reasoning and deductive reasoning as brethren that work hand-in-hand within AI?
  • Can we include other reasoning approaches into the mix, spurring multi-reasoning capacities?
  • Can we determine whether AI is working directly via those reasoning methods versus outwardly appearing to do so but not actively internally doing so?
  • Can we reuse whatever is learned while attempting to reverse engineer the brain and mind, such that the way that we devise AI can be enhanced or possibly even usefully overhauled?

That should keep your mind going for a while.

If you can find a fifth grader who can definitively answer those vexing and course-changing questions, make sure to have them write down their answers. It would be history in the making. You would have an AI prodigy in your midst.

Meanwhile, let’s all keep our noses to the grindstone and see what progress we can make on these mind-bending considerations. Join me in doing so, thanks.

Lance Eliot

