How many participants do I need for qualitative research?

For those new to the qualitative research space, there’s one question that’s usually pretty tough to figure out: how many participants should you include in a study? Regardless of whether it’s research as part of the discovery phase for a new product, or an in-depth canvass of the users of an existing service, researchers can often find it difficult to agree on the numbers. So is there an easy answer? Let’s find out.

Here, we’ll look into the right number of participants for qualitative research studies. If you want to know about participants for quantitative research, read Nielsen Norman Group’s article.

Getting the numbers right

So you need to run a series of user interviews or usability tests and aren’t sure exactly how many people you should reach out to. It can be a tricky situation – especially for those without much experience. Do you test a small selection of 1 or 2 people to make the recruitment process easier? Or, do you go big and test with a series of 10 people over the course of a month? The answer lies somewhere in between.

It’s often a good idea (for qualitative research methods like interviews and usability tests) to start with 5 participants and then scale up by a further 5 based on how complicated the subject matter is. You may also find it helpful to add additional participants if you’re new to user research or you’re working in a new area.

What you’re actually looking for here is what’s known as saturation.

Understanding saturation

Whether it’s qualitative research as part of a master’s thesis or as research for a new online dating app, saturation is the best metric you can use to identify when you’ve hit the right number of participants.

In a nutshell, saturation is when you’ve reached the point where adding further participants doesn’t give you any further insights. It’s true that you may still pick up on the occasional interesting detail, but all of your big revelations and learnings have come and gone. A good measure is to sit down after each session with a participant and analyze the number of new insights you’ve noted down.
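To make that per-session check concrete, here is a minimal sketch in Python of how you might tally new insights after each interview and flag when they stop appearing. The insight tags and the two-quiet-sessions threshold are illustrative assumptions, not a rule.

```python
# Track how many genuinely new insights each session adds, and flag when
# consecutive sessions stop producing anything new (a rough signal that
# you may be approaching saturation).

# Hypothetical insight tags noted after each interview session.
sessions = [
    {"navigation confusing", "wants saved filters", "pricing unclear"},
    {"pricing unclear", "checkout too long"},
    {"wants saved filters", "checkout too long"},
    {"checkout too long"},
]

def saturation_point(sessions, quiet_sessions_needed=2):
    """Return the 1-based session after which no new insights have appeared
    for `quiet_sessions_needed` consecutive sessions, or None."""
    seen, quiet = set(), 0
    for i, insights in enumerate(sessions, start=1):
        new = insights - seen          # tags we haven't heard before
        seen |= insights
        quiet = 0 if new else quiet + 1
        print(f"Session {i}: {len(new)} new insight(s)")
        if quiet >= quiet_sessions_needed:
            return i
    return None

print("Likely saturated" if saturation_point(sessions) else "Keep interviewing")
```

The numbers here are only illustrative and the real judgment is qualitative, but charting new insights per session makes the flattening curve easy to spot.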

Interestingly, in a paper titled How Many Interviews Are Enough?, authors Greg Guest, Arwen Bunce and Laura Johnson noted that saturation usually occurs with around 12 participants in homogeneous groups (meaning people in the same role at an organization, for example). However, carrying out ethnographic research on a larger domain with a diverse set of participants will almost certainly require a larger sample.

Ensuring you’ve hit the right number of participants

How do you know when you’ve reached saturation point? You have to keep conducting interviews or usability tests until you’re no longer uncovering new insights or concepts.

While this may seem to run counter to the idea of just gathering as much data from as many people as possible, there’s a strong case for focusing on a smaller group of participants. In The Logic of Small Samples in Interview-Based Qualitative Research, authors Mira Crouch and Heather McKenzie note that using fewer than 20 participants during a qualitative research study will result in better data. Why? With a smaller group, it’s easier for you (the researcher) to build close relationships with your participants, which in turn leads to more natural conversations and better data.

There’s also a school of thought that you should interview 5 or so people per persona. For example, if you’re working in a company that has well-defined personas, you might use those as the basis for your study and interview 5 people for each persona. This may be worth considering, and it’s particularly important when your product has very distinct user groups (e.g. students and staff, or teachers and parents).

How your domain affects sample size

The scope of the topic you’re researching will change the amount of information you’ll need to gather before you’ve hit the saturation point. Your topic is also commonly referred to as the domain.

If you’re working in quite a confined domain, for example, a single screen of a mobile app or a very specific scenario, you’ll likely find interviews with 5 participants to be perfectly fine. Moving into more complicated domains, like the entire checkout process for an online shopping app, will push up your sample size.

As Mitchel Seaman notes: “Exploring a big issue like young peoples’ opinions about healthcare coverage, a broad emotional issue like postmarital sexuality, or a poorly-understood domain for your team like mobile device use in another country can drastically increase the number of interviews you’ll want to conduct.”

In-person or remote

Does the location of your participants change the number you need for qualitative user research? Well, not really – but there are other factors to consider.

  • Budget: If you choose to conduct remote interviews/usability tests, you’ll likely find you’ve got lower costs as you won’t need to travel to your participants or have them travel to you. This also affects…
  • Participant access: Remote qualitative research can be a lifesaver when it comes to participant access. No longer are you confined to the people you have physical access to — instead you can reach out to anyone you’d like.
  • Quality: On the other hand, remote research does have its downsides. For one, you’ll likely find you’re not able to build the same kinds of relationships over the internet or phone as those in person, which in turn means you never quite get the same level of insights.

Is there value in outsourcing recruitment?

Recruitment is understandably an intensive logistical exercise with many moving parts. If you’ve ever had to recruit people for a study before, you’ll understand the need for long lead times (to ensure you have enough participants for the project) and the countless long email chains as you discuss suitable times.

Outsourcing your participant recruitment is just one way to lighten the logistical load during your research. Instead of having to go out and look for participants, you have them essentially delivered to you in the right number and with the right attributes.

We’ve got one such service at Optimal Workshop, which means it’s the perfect accompaniment if you’re also using our platform of UX tools. Read more about that here.

So that’s really most of what there is to know about participant recruitment in a qualitative research context. As we said at the start, while it can appear quite tricky to figure out exactly how many people you need to recruit, it’s actually not all that difficult in reality.

Overall, the number of participants you need for your qualitative research depends on your project and domain, among other factors. It’s important to keep saturation in mind, as well as whether you’re running sessions in person or remotely. You also need to get the most you can out of what’s available to you. Remember: some research is better than none!


InterQ Research

What’s in a Number? Understanding the Right Sample Size for Qualitative Research

May 3, 2019

By Julia Schaefer

Unlike in quantitative research, numbers matter less when doing qualitative research.

It’s about quality, not quantity. So what’s in a number?

When thinking about sample size, it’s really important to ensure that you understand your target and have recruited the right people for the study. Whether your company is targeting moms from the Midwest with household incomes of $70k+, or teens who use Facebook for more than 8 hours a week, it’s crucial to understand the goals and objectives of the study and how the right target can help answer your essential research questions.

Determining the Right Sample Size For Qualitative Research Tip #1: Right Size for Qualitative Research

A high-quality panel includes much more than members pulled from a general population. The right respondents for the study will have met all the criteria identified from prior quantitative research studies and will check the boxes the client has identified through their own research. Only participants who match the audience specifications and background relevance expressed by the client should be actively recruited.

Determining the Right Sample Size For Qualitative Research Tip #2: No Two Studies are Alike

Choosing an appropriate study design is an important factor to consider when determining which sample size to use. There are various methods that can be used to gather insightful data, but not all methods may be applicable to your study and your project goal. In-depth interviews , focus groups , and ethnographic research are the most common methods used in qualitative market research. Each method can provide unique information and certain methods are more relevant than others. The types of questions being studied play an equally important role in deciding on a sample size.

Determining the Right Sample Size For Qualitative Research Tip #3:  Principle of Saturation and Diminishing Returns

Understanding which qualitative method to use is very important. Your study should have a sample size large enough to uncover a variety of opinions, but the sample should be capped at the point of saturation.

Saturation occurs when adding more participants to the study does not yield additional perspectives or information. One can say there is a point of diminishing returns with larger samples: they produce more data but don’t necessarily produce more information. A sample size should be large enough to sufficiently describe the phenomenon of interest and address the research question at hand; beyond that, a larger sample simply risks repetitive and redundant data.

The objective of qualitative research is to reduce discovery failure, while quantitative research aims to reduce estimation error. Because qualitative research works to obtain diverse opinions from a sample on a client’s product/service/project, saturated data does not add to the project findings. As part of the analysis framework, one respondent’s opinion is enough to generate a code.

The Magic Number? Between 15-30

Based on research conducted on this issue, if you are building similar segments within the population, InterQ’s recommendation for in-depth interviews is a sample size of 15–30. In some cases, a minimum of 10 is sufficient, assuming there has been integrity in the recruiting process. Provided the recruiting remains rigorous, studies have noted that a sample size as small as 10 can be extremely fruitful and still yield strong results.


Chapter 1. Introduction

“Science is in danger, and for that reason it is becoming dangerous” -Pierre Bourdieu, Science of Science and Reflexivity

Why an Open Access Textbook on Qualitative Research Methods?

I have been teaching qualitative research methods to both undergraduates and graduate students for many years.  Although there are some excellent textbooks out there, they are often costly, and none of them, to my mind, properly introduces qualitative research methods to the beginning student (whether undergraduate or graduate student).  In contrast, this open-access textbook is designed as a (free) true introduction to the subject, with helpful, practical pointers on how to conduct research and how to access more advanced instruction.  

Textbooks are typically arranged in one of two ways: (1) by technique (each chapter covers one method used in qualitative research); or (2) by process (chapters advance from research design through publication).  But both of these approaches are necessary for the beginner student.  This textbook will have sections dedicated to the process as well as the techniques of qualitative research.  This is a true “comprehensive” book for the beginning student.  In addition to covering techniques of data collection and data analysis, it provides a road map of how to get started and how to keep going and where to go for advanced instruction.  It covers aspects of research design and research communication as well as methods employed.  Along the way, it includes examples from many different disciplines in the social sciences.

The primary goal has been to create a useful, accessible, engaging textbook for use across many disciplines.  And, let’s face it.  Textbooks can be boring.  I hope readers find this to be a little different.  I have tried to write in a practical and forthright manner, with many lively examples and references to good and intellectually creative qualitative research.  Woven throughout the text are short textual asides (in colored textboxes) by professional (academic) qualitative researchers in various disciplines.  These short accounts by practitioners should help inspire students.  So, let’s begin!

What is Research?

When we use the word research , what exactly do we mean by that?  This is one of those words that everyone thinks they understand, but it is worth beginning this textbook with a short explanation.  We use the term to refer to “empirical research,” which is actually a historically specific approach to understanding the world around us.  Think about how you know things about the world. [1] You might know your mother loves you because she’s told you she does.  Or because that is what “mothers” do by tradition.  Or you might know because you’ve looked for evidence that she does, like taking care of you when you are sick or reading to you in bed or working two jobs so you can have the things you need to do OK in life.  Maybe it seems churlish to look for evidence; you just take it “on faith” that you are loved.

Only one of the above comes close to what we mean by research.  Empirical research is research (investigation) based on evidence.  Conclusions can then be drawn from observable data.  This observable data can also be “tested” or checked.  If the data cannot be tested, that is a good indication that we are not doing research.  Note that we can never “prove” conclusively, through observable data, that our mothers love us.  We might have some “disconfirming evidence” (that time she didn’t show up to your graduation, for example) that could push you to question an original hypothesis , but no amount of “confirming evidence” will ever allow us to say with 100% certainty, “my mother loves me.”  Faith and tradition and authority work differently.  Our knowledge can be 100% certain using each of those alternative methods of knowledge, but our certainty in those cases will not be based on facts or evidence.

For many periods of history, those in power have been nervous about “science” because it uses evidence and facts as the primary source of understanding the world, and facts can be at odds with what power or authority or tradition want you to believe.  That is why I say that scientific empirical research is a historically specific approach to understanding the world.  You are in college or university now partly to learn how to engage in this historically specific approach.

In the sixteenth and seventeenth centuries in Europe, there was a newfound respect for empirical research, some of which was seriously challenging to the established church.  Using observations and testing them, scientists found that the earth was not at the center of the universe, for example, but rather that it was but one planet of many which circled the sun. [2]   Over the next two centuries, the sciences of astronomy, physics, biology, and chemistry emerged and became disciplines taught in universities.  All used the scientific method of observation and testing to advance knowledge.  Knowledge about people and social institutions, however, was still left to faith, tradition, and authority.  Historians and philosophers and poets wrote about the human condition, but none of them used research to do so. [3]

It was not until the nineteenth century that “social science” really emerged, using the scientific method (empirical observation) to understand people and social institutions.  New fields of sociology, economics, political science, and anthropology emerged.  The first sociologists, people like Auguste Comte and Karl Marx, sought specifically to apply the scientific method of research to understand society, Engels famously claiming that Marx had done for the social world what Darwin did for the natural world, tracing its laws of development.  Today we tend to take for granted the naturalness of science here, but it is actually a pretty recent and radical development.

To return to the question, “does your mother love you?”  Well, this is actually not really how a researcher would frame the question, as it is too specific to your case.  It doesn’t tell us much about the world at large, even if it does tell us something about you and your relationship with your mother.  A social science researcher might ask, “do mothers love their children?”  Or maybe they would be more interested in how this loving relationship might change over time (e.g., “do mothers love their children more now than they did in the 18th century when so many children died before reaching adulthood?”) or perhaps they might be interested in measuring quality of love across cultures or time periods, or even establishing “what love looks like” using the mother/child relationship as a site of exploration.  All of these make good research questions because we can use observable data to answer them.

What is Qualitative Research?

“All we know is how to learn. How to study, how to listen, how to talk, how to tell.  If we don’t tell the world, we don’t know the world.  We’re lost in it, we die.” -Ursula LeGuin, The Telling

At its simplest, qualitative research is research about the social world that does not use numbers in its analyses.  All those who fear statistics can breathe a sigh of relief – there are no mathematical formulae or regression models in this book! But this definition is less about what qualitative research can be and more about what it is not.  To be honest, any simple statement will fail to capture the power and depth of qualitative research.  One way of contrasting qualitative research to quantitative research is to note that the focus of qualitative research is less about explaining and predicting relationships between variables and more about understanding the social world.  To use our mother love example, the question about “what love looks like” is a good question for the qualitative researcher while all questions measuring love or comparing incidences of love (both of which require measurement) are good questions for quantitative researchers. Patton writes,

Qualitative data describe.  They take us, as readers, into the time and place of the observation so that we know what it was like to have been there.  They capture and communicate someone else’s experience of the world in his or her own words.  Qualitative data tell a story. ( Patton 2002:47 )

Qualitative researchers are asking different questions about the world than their quantitative colleagues.  Even when researchers are employed in “mixed methods” research (both quantitative and qualitative), they are using different methods to address different questions of the study.  I do a lot of research about first-generation and working-class college students.  Where a quantitative researcher might ask, how many first-generation college students graduate from college within four years? Or does first-generation college status predict high student debt loads?  A qualitative researcher might ask, how does the college experience differ for first-generation college students?  What is it like to carry a lot of debt, and how does this impact the ability to complete college on time?  Both sets of questions are important, but they can only be answered using specific tools tailored to those questions.  For the former, you need large numbers to make adequate comparisons.  For the latter, you need to talk to people, find out what they are thinking and feeling, and try to inhabit their shoes for a little while so you can make sense of their experiences and beliefs.

Examples of Qualitative Research

You have probably seen examples of qualitative research before, but you might not have paid particular attention to how they were produced or realized that the accounts you were reading were the result of hours, months, even years of research “in the field.”  A good qualitative researcher will present the product of their hours of work in such a way that it seems natural, even obvious, to the reader.  Because we are trying to convey “what it is like” answers, qualitative research is often presented as stories – stories about how people live their lives, go to work, raise their children, interact with one another.  In some ways, this can seem like reading particularly insightful novels.  But, unlike novels, there are very specific rules and guidelines that qualitative researchers follow to ensure that the “story” they are telling is accurate, a truthful rendition of what life is like for the people being studied.  Most of this textbook will be spent conveying those rules and guidelines.  Let’s take a look, first, however, at three examples of what the end product looks like.  I have chosen these three examples to showcase very different approaches to qualitative research, and I will return to these three examples throughout the book.  They were all published as whole books (not chapters or articles), and they are worth the long read, if you have the time.  I will also provide some information on how these books came to be and the length of time it takes to get them into book version.  It is important you know about this process, and the rest of this textbook will help explain why it takes so long to conduct good qualitative research!

Example 1 : The End Game (ethnography + interviews)

Corey Abramson is a sociologist who teaches at the University of Arizona.   In 2015 he published The End Game: How Inequality Shapes our Final Years (2015). This book was based on the research he did for his dissertation at the University of California-Berkeley in 2012.  Actually, the dissertation was completed in 2012, but the work that produced it took several years.  The dissertation was entitled, “This is How We Live, This is How We Die: Social Stratification, Aging, and Health in Urban America” (2012).  You can see how the book version, which was written for a more general audience, has a more engaging sound to it, but that the dissertation version, which is what academic faculty read and evaluate, has a more descriptive title.  You can read the title and know that this is a study about aging and health and that the focus is going to be inequality and that the context (place) is going to be “urban America.”  It’s a study about “how” people do something – in this case, how they deal with aging and death.  This is the very first sentence of the dissertation, “From our first breath in the hospital to the day we die, we live in a society characterized by unequal opportunities for maintaining health and taking care of ourselves when ill.  These disparities reflect persistent racial, socio-economic, and gender-based inequalities and contribute to their persistence over time” (1).  What follows is a truthful account of how that is so.

Corey Abramson spent three years conducting his research in four different urban neighborhoods.  We call the type of research he conducted “comparative ethnographic” because he designed his study to compare groups of seniors as they went about their everyday business.  It’s comparative because he is comparing different groups (based on race, class, gender) and ethnographic because he is studying the culture/way of life of a group. [4]   He had an educated guess, rooted in what previous research had shown and what social theory would suggest, that people’s experiences of aging differ by race, class, and gender.  So, he set up a research design that would allow him to observe differences.  He chose two primarily middle-class neighborhoods (one was racially diverse and the other was predominantly White) and two primarily poor neighborhoods (one was racially diverse and the other was predominantly African American).  He hung out in senior centers and other places seniors congregated, watched them as they took the bus to get prescriptions filled, sat in doctor’s offices with them, and listened to their conversations with each other.  He also conducted more formal conversations, what we call in-depth interviews, with sixty seniors from each of the four neighborhoods.  As with a lot of fieldwork, as he got closer to the people involved, he both expanded and deepened his reach –

By the end of the project, I expanded my pool of general observations to include various settings frequented by seniors: apartment building common rooms, doctors’ offices, emergency rooms, pharmacies, senior centers, bars, parks, corner stores, shopping centers, pool halls, hair salons, coffee shops, and discount stores. Over the course of the three years of fieldwork, I observed hundreds of elders, and developed close relationships with a number of them. ( 2012:10 )

When Abramson rewrote the dissertation for a general audience and published his book in 2015, it got a lot of attention.  It is a beautifully written book and it provided insight into a common human experience that we surprisingly know very little about.  It won the Outstanding Publication Award by the American Sociological Association Section on Aging and the Life Course and was featured in the New York Times .  The book was about aging, and specifically how inequality shapes the aging process, but it was also about much more than that.  It helped show how inequality affects people’s everyday lives.  For example, by observing the difficulties the poor had in setting up appointments and getting to them using public transportation and then being made to wait to see a doctor, sometimes in standing-room-only situations, when they are unwell, and then being treated dismissively by hospital staff, Abramson allowed readers to feel the material reality of being poor in the US.  Comparing these examples with seniors with adequate supplemental insurance who have the resources to hire car services or have others assist them in arranging care when they need it, jolts the reader to understand and appreciate the difference money makes in the lives and circumstances of us all, and in a way that is different than simply reading a statistic (“80% of the poor do not keep regular doctor’s appointments”) does.  Qualitative research can reach into spaces and places that often go unexamined and then reports back to the rest of us what it is like in those spaces and places.

Example 2: Racing for Innocence (Interviews + Content Analysis + Fictional Stories)

Jennifer Pierce is a Professor of American Studies at the University of Minnesota.  Trained as a sociologist, she has written a number of books about gender, race, and power.  Her very first book, Gender Trials: Emotional Lives in Contemporary Law Firms, published in 1995, is a brilliant look at gender dynamics within two law firms.  Pierce was a participant observer, working as a paralegal, and she observed how female lawyers and female paralegals struggled to obtain parity with their male colleagues.

Fifteen years later, she reexamined the context of the law firm to include an examination of racial dynamics, particularly how elite white men working in these spaces created and maintained a culture that made it difficult for both female attorneys and attorneys of color to thrive. Her book, Racing for Innocence: Whiteness, Gender, and the Backlash Against Affirmative Action, published in 2012, is an interesting and creative blending of interviews with attorneys, content analyses of popular films during this period, and fictional accounts of racial discrimination and sexual harassment.  The law firm she chose to study had come under an affirmative action order and was in the process of implementing equitable policies and programs.  She wanted to understand how recipients of white privilege (the elite white male attorneys) come to deny the role they play in reproducing inequality.  Through interviews with attorneys who were present both before and during the affirmative action order, she creates a historical record of the “bad behavior” that necessitated new policies and procedures, but also, and more importantly, probes the participants’ understanding of this behavior.  It should come as no surprise that most (but not all) of the white male attorneys saw little need for change, and that almost everyone else had accounts that were different if not sometimes downright harrowing.

I’ve used Pierce’s book in my qualitative research methods courses as an example of an interesting blend of techniques and presentation styles.  My students often have a very difficult time with the fictional accounts she includes.  But they serve an important communicative purpose here.  They are her attempts at presenting “both sides” to an objective reality – something happens (Pierce writes this something so it is very clear what it is), and the two participants to the thing that happened have very different understandings of what this means.  By including these stories, Pierce presents one of her key findings – people remember things differently and these different memories tend to support their own ideological positions.  I wonder what Pierce would have written had she studied the murder of George Floyd or the storming of the US Capitol on January 6 or any number of other historic events whose observers and participants record very different happenings.

This is not to say that qualitative researchers write fictional accounts.  In fact, the use of fiction in our work remains controversial.  When used, it must be clearly identified as a presentation device, as Pierce did.  I include Racing for Innocence here as an example of the multiple uses of methods and techniques and the way that these work together to produce better understandings by us, the readers, of what Pierce studied.  We readers come away with a better grasp of how and why advantaged people understate their own involvement in situations and structures that advantage them.  This is normal human behavior , in other words.  This case may have been about elite white men in law firms, but the general insights here can be transposed to other settings.  Indeed, Pierce argues that more research needs to be done about the role elites play in the reproduction of inequality in the workplace in general.

Example 3: Amplified Advantage (Mixed Methods: Survey + Interviews + Focus Groups + Archives)

The final example comes from my own work with college students, particularly the ways in which class background affects the experience of college and outcomes for graduates.  I include it here as an example of mixed methods, and for the use of supplementary archival research.  I’ve done a lot of research over the years on first-generation, low-income, and working-class college students.  I am curious (and skeptical) about the possibility of social mobility today, particularly with the rising cost of college and growing inequality in general.  As one of the few people in my family to go to college, I didn’t grow up with a lot of examples of what college was like or how to make the most of it.  And when I entered graduate school, I realized with dismay that there were very few people like me there.  I worried about becoming too different from my family and friends back home.  And I wasn’t at all sure that I would ever be able to pay back the huge load of debt I was taking on.  And so I wrote my dissertation and first two books about working-class college students.  These books focused on experiences in college and the difficulties of navigating between family and school (Hurst 2010a, 2012).  But even after all that research, I kept coming back to wondering if working-class students who made it through college had an equal chance at finding good jobs and happy lives.

What happens to students after college?  Do working-class students fare as well as their peers?  I knew from my own experience that barriers continued through graduate school and beyond, and that my debtload was higher than that of my peers, constraining some of the choices I made when I graduated.  To answer these questions, I designed a study of students attending small liberal arts colleges, the type of college that tried to equalize the experience of students by requiring all students to live on campus and offering small classes with lots of interaction with faculty.  These private colleges tend to have more money and resources so they can provide financial aid to low-income students.  They also attract some very wealthy students.  Because they enroll students across the class spectrum, I would be able to draw comparisons.  I ended up spending about four years collecting data, both a survey of more than 2000 students (which formed the basis for quantitative analyses) and qualitative data collection (interviews, focus groups, archival research, and participant observation).  This is what we call a “mixed methods” approach because we use both quantitative and qualitative data.  The survey gave me a large enough number of students that I could make comparisons of the how many kind, and to be able to say with some authority that there were in fact significant differences in experience and outcome by class (e.g., wealthier students earned more money and had little debt; working-class students often found jobs that were not in their chosen careers and were very affected by debt, upper-middle-class students were more likely to go to graduate school).  But the survey analyses could not explain why these differences existed.  For that, I needed to talk to people and ask them about their motivations and aspirations.  I needed to understand their perceptions of the world, and it is very hard to do this through a survey.

By interviewing students and recent graduates, I was able to discern particular patterns and pathways through college and beyond.  Specifically, I identified three versions of gameplay.  Upper-middle-class students, whose parents were themselves professionals (academics, lawyers, managers of non-profits), saw college as the first stage of their education and took classes and declared majors that would prepare them for graduate school.  They also spent a lot of time building their resumes, taking advantage of opportunities to help professors with their research, or study abroad.  This helped them gain admission to highly-ranked graduate schools and interesting jobs in the public sector.  In contrast, upper-class students, whose parents were wealthy and more likely to be engaged in business (as CEOs or other high-level directors), prioritized building social capital.  They did this by joining fraternities and sororities and playing club sports.  This helped them when they graduated as they called on friends and parents of friends to find them well-paying jobs.  Finally, low-income, first-generation, and working-class students were often adrift.  They took the classes that were recommended to them but without the knowledge of how to connect them to life beyond college.  They spent time working and studying rather than partying or building their resumes.  All three sets of students thought they were “doing college” the right way, the way that one was supposed to do college.   But these three versions of gameplay led to distinct outcomes that advantaged some students over others.  I titled my work “Amplified Advantage” to highlight this process.

These three examples, Corey Abramson’s The End Game, Jennifer Pierce’s Racing for Innocence, and my own Amplified Advantage, demonstrate the range of approaches and tools available to the qualitative researcher.  They also help explain why qualitative research is so important.  Numbers can tell us some things about the world, but they cannot get at the hearts and minds, motivations and beliefs of the people who make up the social worlds we inhabit.  For that, we need tools that allow us to listen and make sense of what people tell us and show us.  That is what good qualitative research offers us.

How Is This Book Organized?

This textbook is organized as a comprehensive introduction to the use of qualitative research methods.  The first half covers general topics (e.g., approaches to qualitative research, ethics) and research design (necessary steps for building a successful qualitative research study).  The second half reviews various data collection and data analysis techniques.  Of course, building a successful qualitative research study requires some knowledge of data collection and data analysis so the chapters in the first half and the chapters in the second half should be read in conversation with each other.  That said, each chapter can be read on its own for assistance with a particular narrow topic.  In addition to the chapters, a helpful glossary can be found in the back of the book.  Rummage around in the text as needed.

Chapter Descriptions

Chapter 2 provides an overview of the Research Design Process.  How does one begin a study? What is an appropriate research question?  How is the study to be done – with what methods ?  Involving what people and sites?  Although qualitative research studies can and often do change and develop over the course of data collection, it is important to have a good idea of what the aims and goals of your study are at the outset and a good plan of how to achieve those aims and goals.  Chapter 2 provides a road map of the process.

Chapter 3 describes and explains various ways of knowing the (social) world.  What is it possible for us to know about how other people think or why they behave the way they do?  What does it mean to say something is a “fact” or that it is “well-known” and understood?  Qualitative researchers are particularly interested in these questions because of the types of research questions we are interested in answering (the how questions rather than the how many questions of quantitative research).  Qualitative researchers have adopted various epistemological approaches.  Chapter 3 will explore these approaches, highlighting interpretivist approaches that acknowledge the subjective aspect of reality – in other words, reality and knowledge are not objective but rather influenced by (interpreted through) people.

Chapter 4 focuses on the practical matter of developing a research question and finding the right approach to data collection.  In any given study (think of Cory Abramson’s study of aging, for example), there may be years of collected data, thousands of observations , hundreds of pages of notes to read and review and make sense of.  If all you had was a general interest area (“aging”), it would be very difficult, nearly impossible, to make sense of all of that data.  The research question provides a helpful lens to refine and clarify (and simplify) everything you find and collect.  For that reason, it is important to pull out that lens (articulate the research question) before you get started.  In the case of the aging study, Cory Abramson was interested in how inequalities affected understandings and responses to aging.  It is for this reason he designed a study that would allow him to compare different groups of seniors (some middle-class, some poor).  Inevitably, he saw much more in the three years in the field than what made it into his book (or dissertation), but he was able to narrow down the complexity of the social world to provide us with this rich account linked to the original research question.  Developing a good research question is thus crucial to effective design and a successful outcome.  Chapter 4 will provide pointers on how to do this.  Chapter 4 also provides an overview of general approaches taken to doing qualitative research and various “traditions of inquiry.”

Chapter 5 explores sampling.  After you have developed a research question and have a general idea of how you will collect data (Observations?  Interviews?), how do you go about actually finding people and sites to study?  Although there is no “correct number” of people to interview, the sample should follow the research question and research design.  Unlike quantitative research, qualitative research involves nonprobability sampling.  Chapter 5 explains why this is so and what qualities instead make a good sample for qualitative research.

Chapter 6 addresses the importance of reflexivity in qualitative research.  Related to epistemological issues of how we know anything about the social world, qualitative researchers understand that we the researchers can never be truly neutral or outside the study we are conducting.  As observers, we see things that make sense to us and may entirely miss what is either too obvious to note or too different to comprehend.  As interviewers, as much as we would like to ask questions neutrally and remain in the background, interviews are a form of conversation, and the persons we interview are responding to us .  Therefore, it is important to reflect upon our social positions and the knowledges and expectations we bring to our work and to work through any blind spots that we may have.  Chapter 6 provides some examples of reflexivity in practice and exercises for thinking through one’s own biases.

Chapter 7 is a very important chapter and should not be overlooked.  As a practical matter, it should also be read closely with chapters 6 and 8.  Because qualitative researchers deal with people and the social world, it is imperative they develop and adhere to a strong ethical code for conducting research in a way that does not harm.  There are legal requirements and guidelines for doing so (see chapter 8), but these requirements should not be considered synonymous with the ethical code required of us.   Each researcher must constantly interrogate every aspect of their research, from research question to design to sample through analysis and presentation, to ensure that a minimum of harm (ideally, zero harm) is caused.  Because each research project is unique, the standards of care for each study are unique.  Part of being a professional researcher is carrying this code in one’s heart, being constantly attentive to what is required under particular circumstances.  Chapter 7 provides various research scenarios and asks readers to weigh in on the suitability and appropriateness of the research.  If done in a class setting, it will become obvious fairly quickly that there are often no absolutely correct answers, as different people find different aspects of the scenarios of greatest importance.  Minimizing the harm in one area may require possible harm in another.  Being attentive to all the ethical aspects of one’s research and making the best judgments one can, clearly and consciously, is an integral part of being a good researcher.

Chapter 8 , best to be read in conjunction with chapter 7, explains the role and importance of Institutional Review Boards (IRBs) .  Under federal guidelines, an IRB is an appropriately constituted group that has been formally designated to review and monitor research involving human subjects .  Every institution that receives funding from the federal government has an IRB.  IRBs have the authority to approve, require modifications to (to secure approval), or disapprove research.  This group review serves an important role in the protection of the rights and welfare of human research subjects.  Chapter 8 reviews the history of IRBs and the work they do but also argues that IRBs’ review of qualitative research is often both over-inclusive and under-inclusive.  Some aspects of qualitative research are not well understood by IRBs, given that they were developed to prevent abuses in biomedical research.  Thus, it is important not to rely on IRBs to identify all the potential ethical issues that emerge in our research (see chapter 7).

Chapter 9 provides help for getting started on formulating a research question based on gaps in the pre-existing literature.  Research is conducted as part of a community, even if particular studies are done by single individuals (or small teams).  What any of us finds and reports back becomes part of a much larger body of knowledge.  Thus, it is important that we look at the larger body of knowledge before we actually start our bit to see how we can best contribute.  When I first began interviewing working-class college students, there was only one other similar study I could find, and it hadn’t been published (it was a dissertation of students from poor backgrounds).  But there had been a lot published by professors who had grown up working class and made it through college despite the odds.  These accounts by “working-class academics” became an important inspiration for my study and helped me frame the questions I asked the students I interviewed.  Chapter 9 will provide some pointers on how to search for relevant literature and how to use this to refine your research question.

Chapter 10 serves as a bridge between the two parts of the textbook, by introducing techniques of data collection.  Qualitative research is often characterized by the form of data collection – for example, an ethnographic study is one that employs primarily observational data collection for the purpose of documenting and presenting a particular culture or ethnos.  Techniques can be effectively combined, depending on the research question and the aims and goals of the study.   Chapter 10 provides a general overview of all the various techniques and how they can be combined.

The second part of the textbook moves into the doing part of qualitative research once the research question has been articulated and the study designed.  Chapters 11 through 17 cover various data collection techniques and approaches.  Chapters 18 and 19 provide a very simple overview of basic data analysis.  Chapter 20 covers communication of the data to various audiences, and in various formats.

Chapter 11 begins our overview of data collection techniques with a focus on interviewing , the true heart of qualitative research.  This technique can serve as the primary and exclusive form of data collection, or it can be used to supplement other forms (observation, archival).  An interview is distinct from a survey, where questions are asked in a specific order and often with a range of predetermined responses available.  Interviews can be conversational and unstructured or, more conventionally, semistructured , where a general set of interview questions “guides” the conversation.  Chapter 11 covers the basics of interviews: how to create interview guides, how many people to interview, where to conduct the interview, what to watch out for (how to prepare against things going wrong), and how to get the most out of your interviews.

Chapter 12 covers an important variant of interviewing, the focus group.  Focus groups are semistructured interviews with a group of people moderated by a facilitator (the researcher or researcher’s assistant).  Focus groups explicitly use group interaction to assist in the data collection.  They are best used to collect data on a specific topic that is non-personal and shared among the group.  For example, asking a group of college students about a common experience such as taking classes by remote delivery during the pandemic year of 2020.  Chapter 12 covers the basics of focus groups: when to use them, how to create interview guides for them, and how to run them effectively.

Chapter 13 moves away from interviewing to the second major form of data collection unique to qualitative researchers – observation .  Qualitative research that employs observation can best be understood as falling on a continuum of “fly on the wall” observation (e.g., observing how strangers interact in a doctor’s waiting room) to “participant” observation, where the researcher is also an active participant of the activity being observed.  For example, an activist in the Black Lives Matter movement might want to study the movement, using her inside position to gain access to observe key meetings and interactions.  Chapter  13 covers the basics of participant observation studies: advantages and disadvantages, gaining access, ethical concerns related to insider/outsider status and entanglement, and recording techniques.

Chapter 14 takes a closer look at “deep ethnography” – immersion in the field of a particularly long duration for the purpose of gaining a deeper understanding and appreciation of a particular culture or social world.  Clifford Geertz called this “deep hanging out.”  Whereas participant observation is often combined with semistructured interview techniques, deep ethnography’s commitment to “living the life” or experiencing the situation as it really is demands more conversational and natural interactions with people.  These interactions and conversations may take place over months or even years.  As can be expected, there are some costs to this technique, as well as some very large rewards when done competently.  Chapter 14 provides some examples of deep ethnographies that will inspire some beginning researchers and intimidate others.

Chapter 15 moves in the opposite direction of deep ethnography, a technique that is the least positivist of all those discussed here, to mixed methods, a set of techniques that is arguably the most positivist.  A mixed methods approach combines both qualitative data collection and quantitative data collection, commonly by combining a survey that is analyzed statistically (e.g., cross-tabs or regression analyses of large number probability samples) with semi-structured interviews.  Although it is somewhat unconventional to discuss mixed methods in textbooks on qualitative research, I think it is important to recognize this often-employed approach here.  There are several advantages and some disadvantages to taking this route.  Chapter 15 will describe those advantages and disadvantages and provide some particular guidance on how to design a mixed methods study for maximum effectiveness.

Chapter 16 covers data collection that does not involve live human subjects at all – archival and historical research (chapter 17 will also cover data that does not involve interacting with human subjects).  Sometimes people are unavailable to us, either because they do not wish to be interviewed or observed (as is the case with many “elites”) or because they are too far away, in both place and time.  Fortunately, humans leave many traces and we can often answer questions we have by examining those traces.  Special collections and archives can be goldmines for social science research.  This chapter will explain how to access these places, for what purposes, and how to begin to make sense of what you find.

Chapter 17 covers another data collection area that does not involve face-to-face interaction with humans: content analysis .  Although content analysis may be understood more properly as a data analysis technique, the term is often used for the entire approach, which will be the case here.  Content analysis involves interpreting meaning from a body of text.  This body of text might be something found in historical records (see chapter 16) or something collected by the researcher, as in the case of comment posts on a popular blog post.  I once used the stories told by student loan debtors on the website studentloanjustice.org as the content I analyzed.  Content analysis is particularly useful when attempting to define and understand prevalent stories or communication about a topic of interest.  In other words, when we are less interested in what particular people (our defined sample) are doing or believing and more interested in what general narratives exist about a particular topic or issue.  This chapter will explore different approaches to content analysis and provide helpful tips on how to collect data, how to turn that data into codes for analysis, and how to go about presenting what is found through analysis.

Where chapter 17 has pushed us towards data analysis, chapters 18 and 19 are all about what to do with the data collected, whether that data be in the form of interview transcripts or fieldnotes from observations.  Chapter 18 introduces the basics of coding , the iterative process of assigning meaning to the data in order to both simplify and identify patterns.  What is a code and how does it work?  What are the different ways of coding data, and when should you use them?  What is a codebook, and why do you need one?  What does the process of data analysis look like?

Chapter 19 goes further into detail on codes and how to use them, particularly the later stages of coding in which our codes are refined, simplified, combined, and organized.  These later rounds of coding are essential to getting the most out of the data we’ve collected.  As students are often overwhelmed with the amount of data (a corpus of interview transcripts typically runs into the hundreds of pages; fieldnotes can easily top that), this chapter will also address time management and provide suggestions for dealing with chaos and reminders that feeling overwhelmed at the analysis stage is part of the process.  By the end of the chapter, you should understand how “findings” are actually found.

The book concludes with a chapter dedicated to the effective presentation of data results.  Chapter 20 covers the many ways that researchers communicate their studies to various audiences (academic, personal, political), what elements must be included in these various publications, and the hallmarks of excellent qualitative research that various audiences will be expecting.  Because qualitative researchers are motivated by understanding and conveying meaning , effective communication is not only an essential skill but a fundamental facet of the entire research project.  Ethnographers must be able to convey a certain sense of verisimilitude , the appearance of true reality.  Those employing interviews must faithfully depict the key meanings of the people they interviewed in a way that rings true to those people, even if the end result surprises them.  And all researchers must strive for clarity in their publications so that various audiences can understand what was found and why it is important.

The book concludes with a short chapter ( chapter 21 ) discussing the value of qualitative research. At the very end of this book, you will find a glossary of terms. I recommend you make frequent use of the glossary and add to each entry as you find examples. Although the entries are meant to be simple and clear, you may also want to paraphrase the definition—make it “make sense” to you, in other words. In addition to the standard reference list (all works cited here), you will find various recommendations for further reading at the end of many chapters. Some of these recommendations will be examples of excellent qualitative research, indicated with an asterisk (*) at the end of the entry. As they say, a picture is worth a thousand words. A good example of qualitative research can teach you more about conducting research than any textbook can (this one included). I highly recommend you select one to three examples from these lists and read them along with the textbook.

A final note on the choice of examples – you will note that many of the examples used in the text come from research on college students.  This is for two reasons.  First, as most of my research falls in this area, I am most familiar with this literature and have contacts with those who do research here and can call upon them to share their stories with you.  Second, and more importantly, my hope is that this textbook reaches a wide audience of beginning researchers who study widely and deeply across the range of what can be known about the social world (from marine resources management to public policy to nursing to political science to sexuality studies and beyond).  It is sometimes difficult to find examples that speak to all those research interests, however. A focus on college students is something that all readers can understand and, hopefully, appreciate, as we are all now or have been at some point a college student.

Recommended Reading: Other Qualitative Research Textbooks

I’ve included a brief list of some of my favorite qualitative research textbooks and guidebooks if you need more than what you will find in this introductory text.  For each, I’ve also indicated whether it is for “beginning” or “advanced” (graduate-level) readers.  Many of these books have several editions that do not vary significantly; the edition recommended is simply the one I have used in teaching and the one whose page numbers any specific references in the text correspond to.

Barbour, Rosaline. 2014. Introducing Qualitative Research: A Student’s Guide. Thousand Oaks, CA: SAGE.  A good introduction to qualitative research, with abundant examples (often from the discipline of health care) and clear definitions.  Includes quick summaries at the end of each chapter.  However, some US students might find the British context distracting, and the book can be a bit advanced in places.  Beginning.

Bloomberg, Linda Dale, and Marie F. Volpe. 2012. Completing Your Qualitative Dissertation . 2nd ed. Thousand Oaks, CA: SAGE.  Specifically designed to guide graduate students through the research process. Advanced .

Creswell, John W., and Cheryl Poth. 2018. Qualitative Inquiry and Research Design: Choosing among Five Traditions. 4th ed. Thousand Oaks, CA: SAGE.  This is a classic and one of the go-to books I used myself as a graduate student.  One of the best things about this text is its clear presentation of five distinct traditions in qualitative research.  Despite the title, this reasonably sized book is about more than research design, covering both data analysis and how to write about qualitative research.  Advanced.

Lareau, Annette. 2021. Listening to People: A Practical Guide to Interviewing, Participant Observation, Data Analysis, and Writing It All Up .  Chicago: University of Chicago Press. A readable and personal account of conducting qualitative research by an eminent sociologist, with a heavy emphasis on the kinds of participant-observation research conducted by the author.  Despite its reader-friendliness, this is really a book targeted to graduate students learning the craft.  Advanced .

Lune, Howard, and Bruce L. Berg. 2018. Qualitative Research Methods for the Social Sciences. 9th ed. Pearson.  Although a good introduction to qualitative methods, the authors favor symbolic interactionist and dramaturgical approaches, which limits its appeal primarily to sociologists.  Beginning.

Marshall, Catherine, and Gretchen B. Rossman. 2016. Designing Qualitative Research. 6th ed. Thousand Oaks, CA: SAGE.  A very readable and accessible guide to research design by two educational scholars.  Although the presentation is sometimes fairly dry, personal vignettes and illustrations enliven the text.  Beginning.

Maxwell, Joseph A. 2013. Qualitative Research Design: An Interactive Approach .  3rd ed. Thousand Oaks, CA: SAGE. A short and accessible introduction to qualitative research design, particularly helpful for graduate students contemplating theses and dissertations. This has been a standard textbook in my graduate-level courses for years.  Advanced .

Patton, Michael Quinn. 2002. Qualitative Research and Evaluation Methods . Thousand Oaks, CA: SAGE.  This is a comprehensive text that served as my “go-to” reference when I was a graduate student.  It is particularly helpful for those involved in program evaluation and other forms of evaluation studies and uses examples from a wide range of disciplines.  Advanced .

Rubin, Ashley T. 2021. Rocking Qualitative Social Science: An Irreverent Guide to Rigorous Research. Stanford, CA: Stanford University Press.  A delightful and personal read.  Rubin uses rock climbing as an extended metaphor for learning how to conduct qualitative research.  A bit slanted toward ethnographic and archival methods of data collection, with frequent examples from her own studies in criminology.  Beginning.

Weis, Lois, and Michelle Fine. 2000. Speed Bumps: A Student-Friendly Guide to Qualitative Research . New York: Teachers College Press.  Readable and accessibly written in a quasi-conversational style.  Particularly strong in its discussion of ethical issues throughout the qualitative research process.  Not comprehensive, however, and very much tied to ethnographic research.  Although designed for graduate students, this is a recommended read for students of all levels.  Beginning .

Patton’s Ten Suggestions for Doing Qualitative Research

The following ten suggestions were made by Michael Quinn Patton in his massive textbook Qualitative Research and Evaluation Methods. This book is highly recommended for those of you who want more than an introduction to qualitative methods. It is the book I relied on heavily when I was a graduate student, although it is much easier to “dip into” when necessary than to read through as a whole. In the book, Patton is asked for “just one bit of advice” for a graduate student considering using qualitative research methods for their dissertation.  Here are his top ten responses, in short form, heavily paraphrased, and with additional comments and emphases from me:

  • Make sure that a qualitative approach fits the research question. The following are the kinds of questions that call out for qualitative methods or where qualitative methods are particularly appropriate: questions about people’s experiences or how they make sense of those experiences; studying a person in their natural environment; researching a phenomenon so unknown that it would be impossible to study it with standardized instruments or other forms of quantitative data collection.
  • Study qualitative research by going to the original sources for the design and analysis appropriate to the particular approach you want to take (e.g., read Glaser and Strauss if you are using grounded theory ).
  • Find a dissertation adviser who understands or at least who will support your use of qualitative research methods. You are asking for trouble if your entire committee is populated by quantitative researchers, even if they are all very knowledgeable about the subject or focus of your study (maybe even more so if they are!)
  • Really work on design. Doing qualitative research effectively takes a lot of planning.  Even if things are more flexible than in quantitative research, a good design is absolutely essential when starting out.
  • Practice data collection techniques, particularly interviewing and observing. There is definitely a set of learned skills here!  Do not expect your first interview to be perfect.  You will continue to grow as a researcher the more interviews you conduct, and you will probably come to understand yourself a bit more in the process, too.  This is not easy, despite what others who don’t work with qualitative methods may assume (and tell you!)
  • Have a plan for analysis before you begin data collection. This is often a requirement in IRB protocols , although you can get away with writing something fairly simple.  And even if you are taking an approach, such as grounded theory, that pushes you to remain fairly open-minded during the data collection process, you still want to know what you will be doing with all the data collected – creating a codebook? Writing analytical memos? Comparing cases?  Having a plan in hand will also help prevent you from collecting too much extraneous data.
  • Be prepared to confront controversies both within the qualitative research community and between qualitative research and quantitative research. Don’t be naïve about this – qualitative research, particularly some approaches, will be derided by many more “positivist” researchers and audiences.  For example, is an “n” of 1 really sufficient?  Yes!  But not everyone will agree.
  • Do not make the mistake of using qualitative research methods because someone told you it was easier, or because you are intimidated by the math required of statistical analyses. Qualitative research is difficult in its own way (and many would claim much more time-consuming than quantitative research).  Do it because you are convinced it is right for your goals, aims, and research questions.
  • Find a good support network. This could be a research mentor, or it could be a group of friends or colleagues who are also using qualitative research, or it could be just someone who will listen to you work through all of the issues you will confront out in the field and during the writing process.  Even though qualitative research often involves human subjects, it can be pretty lonely.  A lot of times you will feel like you are working without a net.  You have to create one for yourself.  Take care of yourself.
  • And, finally, in the words of Patton, “Prepare to be changed. Looking deeply at other people’s lives will force you to look deeply at yourself.”
Notes

  • We will actually spend an entire chapter (chapter 3) looking at this question in much more detail!
  • Note that this might have been news to Europeans at the time, but many other societies around the world had also come to this conclusion through observation.  There is often a tendency to equate “the scientific revolution” with the European world in which it took place, but this is somewhat misleading.
  • Historians are a special case here.  Historians have scrupulously and rigorously investigated the social world, but not for the purpose of understanding general laws about how things work, which is the point of scientific empirical research.  History is often referred to as an idiographic field of study, meaning that it studies things that happened or are happening in themselves and not for general observations or conclusions.
  • Don’t worry, we’ll spend more time later in this book unpacking the meaning of ethnography and other terms that are important here.  Note the available glossary.

An approach to research that is “multimethod in focus, involving an interpretative, naturalistic approach to its subject matter.  This means that qualitative researchers study things in their natural settings, attempting to make sense of, or interpret, phenomena in terms of the meanings people bring to them.  Qualitative research involves the studied use and collection of a variety of empirical materials – case study, personal experience, introspective, life story, interview, observational, historical, interactional, and visual texts – that describe routine and problematic moments and meanings in individuals’ lives." ( Denzin and Lincoln 2005:2 ). Contrast with quantitative research .

In contrast to methodology, methods are more simply the practices and tools used to collect and analyze data.  Examples of common methods in qualitative research are interviews , observations , and documentary analysis .  One’s methodology should connect to one’s choice of methods, of course, but they are distinguishable terms.  See also methodology .

A proposed explanation for an observation, phenomenon, or scientific problem that can be tested by further investigation.  The positing of a hypothesis is often the first step in quantitative research but not in qualitative research.  Even when qualitative researchers offer possible explanations in advance of conducting research, they will tend to not use the word “hypothesis” as it conjures up the kind of positivist research they are not conducting.

The foundational question to be addressed by the research study.  This will form the anchor of the research design, collection, and analysis.  Note that in qualitative research, the research question may, and probably will, alter or develop during the course of the research.

An approach to research that collects and analyzes numerical data for the purpose of finding patterns and averages, making predictions, testing causal relationships, and generalizing results to wider populations.  Contrast with qualitative research .

Data collection that takes place in real-world settings, referred to as “the field;” a key component of much Grounded Theory and ethnographic research.  Patton ( 2002 ) calls fieldwork “the central activity of qualitative inquiry” where “‘going into the field’ means having direct and personal contact with people under study in their own environments – getting close to people and situations being studied to personally understand the realities of minutiae of daily life” (48).

The people who are the subjects of a qualitative study.  In interview-based studies, they may be the respondents to the interviewer; for purposes of IRBs, they are often referred to as the human subjects of the research.

The branch of philosophy concerned with knowledge.  For researchers, it is important to recognize and adopt one of the many distinguishing epistemological perspectives as part of our understanding of what questions research can address or fully answer.  See, e.g., constructivism , subjectivism, and  objectivism .

An approach that refutes the possibility of neutrality in social science research.  All research is “guided by a set of beliefs and feelings about the world and how it should be understood and studied” (Denzin and Lincoln 2005: 13).  In contrast to positivism , interpretivism recognizes the social constructedness of reality, and researchers adopting this approach focus on capturing interpretations and understandings people have about the world rather than “the world” as it is (which is a chimera).

The cluster of data-collection tools and techniques that involve observing interactions between people, the behaviors and practices of individuals (sometimes in contrast to what they say about how they act and behave), and cultures in context.  Observational methods are the key tools employed by ethnographers and Grounded Theory researchers.

Research based on data collected and analyzed by the researcher (in contrast to secondary “library” research).

The process of selecting people or other units of analysis to represent a larger population. In quantitative research, this representation is taken quite literally, as statistically representative.  In qualitative research, in contrast, sample selection is often made based on potential to generate insight about a particular topic or phenomenon.

A method of data collection in which the researcher asks the participant questions; the answers to these questions are often recorded and transcribed verbatim. There are many different kinds of interviews - see also semistructured interview , structured interview , and unstructured interview .

The specific group of individuals that you will collect data from.  Contrast population.

The practice of being conscious of and reflective upon one’s own social location and presence when conducting research.  Because qualitative research often requires interaction with live humans, failing to take into account how one’s presence, prior expectations, and social location affect the data collected and how they are analyzed may limit the reliability of the findings.  This remains true even when dealing with historical archives and other content.  Who we are matters when asking questions about how people experience the world because we, too, are a part of that world.

The science and practice of right conduct; in research, it is also the delineation of moral obligations towards research participants, communities to which we belong, and communities in which we conduct our research.

An administrative body established to protect the rights and welfare of human research subjects recruited to participate in research activities conducted under the auspices of the institution with which it is affiliated. The IRB is charged with the responsibility of reviewing all research involving human participants. The IRB is concerned with protecting the welfare, rights, and privacy of human subjects. The IRB has the authority to approve, disapprove, monitor, and require modifications in all research activities that fall within its jurisdiction as specified by both the federal regulations and institutional policy.

Research, according to US federal guidelines, that involves “a living individual about whom an investigator (whether professional or student) conducting research:  (1) Obtains information or biospecimens through intervention or interaction with the individual, and uses, studies, or analyzes the information or biospecimens; or  (2) Obtains, uses, studies, analyzes, or generates identifiable private information or identifiable biospecimens.”

One of the primary methodological traditions of inquiry in qualitative research, ethnography is the study of a group or group culture, largely through observational fieldwork supplemented by interviews. It is a form of fieldwork that may include participant-observation data collection. See chapter 14 for a discussion of deep ethnography. 

A form of interview that follows a standard guide of questions asked, although the order of the questions may change to match the particular needs of each individual interview subject, and probing “follow-up” questions are often added during the course of the interview.  The semi-structured interview is the primary form of interviewing used by qualitative researchers in the social sciences.  It is sometimes referred to as an “in-depth” interview.  See also interview and  interview guide .

A method of observational data collection taking place in a natural setting; a form of fieldwork .  The term encompasses a continuum of relative participation by the researcher (from full participant to “fly-on-the-wall” observer).  This is also sometimes referred to as ethnography , although the latter is characterized by a greater focus on the culture under observation.

A research design that employs both quantitative and qualitative methods, as in the case of a survey supplemented by interviews.

An epistemological perspective that posits the existence of reality through sensory experience, similar to empiricism, but goes further in denying any non-sensory basis of thought or consciousness.  In the social sciences, the term has roots in the proto-sociologist Auguste Comte, who believed he could discern “laws” of society similar to the laws of natural science (e.g., gravity).  The term has come to mean the kinds of measurable and verifiable science conducted by quantitative researchers and is thus used pejoratively by some qualitative researchers interested in interpretation, consciousness, and human understanding.  Calling someone a “positivist” is often intended as an insult.  See also empiricism and objectivism.

A place or collection containing records, documents, or other materials of historical interest; most universities have an archive of material related to the university’s history, as well as other “special collections” that may be of interest to members of the community.

A method of both data collection and data analysis in which a given content (textual, visual, graphic) is examined systematically and rigorously to identify meanings, themes, patterns and assumptions.  Qualitative content analysis (QCA) is concerned with gathering and interpreting an existing body of material.    

A word or short phrase that symbolically assigns a summative, salient, essence-capturing, and/or evocative attribute for a portion of language-based or visual data (Saldaña 2021:5).

Usually a verbatim written record of an interview or focus group discussion.

The primary form of data for fieldwork , participant observation , and ethnography .  These notes, taken by the researcher either during the course of fieldwork or at day’s end, should include as many details as possible on what was observed and what was said.  They should include clear identifiers of date, time, setting, and names (or identifying characteristics) of participants.

The process of labeling and organizing qualitative data to identify different themes and the relationships between them; a way of simplifying data to allow better management and retrieval of key themes and illustrative passages.  See coding frame and  codebook.

A methodological tradition of inquiry and approach to analyzing qualitative data in which theories emerge from a rigorous and systematic process of induction.  This approach was pioneered by the sociologists Glaser and Strauss (1967).  The elements of theory generated from comparative analysis of data are, first, conceptual categories and their properties and, second, hypotheses or generalized relations among the categories and their properties – “The constant comparing of many groups draws the [researcher’s] attention to their many similarities and differences.  Considering these leads [the researcher] to generate abstract categories and their properties, which, since they emerge from the data, will clearly be important to a theory explaining the kind of behavior under observation.” (36).

A detailed description of any proposed research that involves human subjects for review by IRB.  The protocol serves as the recipe for the conduct of the research activity.  It includes the scientific rationale to justify the conduct of the study, the information necessary to conduct the study, the plan for managing and analyzing the data, and a discussion of the research ethical issues relevant to the research.  Protocols for qualitative research often include interview guides, all documents related to recruitment, informed consent forms, very clear guidelines on the safekeeping of materials collected, and plans for de-identifying transcripts or other data that include personal identifying information.

Introduction to Qualitative Research Methods Copyright © 2023 by Allison Hurst is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License , except where otherwise noted.

Research article (open access), published 21 November 2018

Characterising and justifying sample size sufficiency in interview-based studies: systematic analysis of qualitative health research over a 15-year period

  • Konstantina Vasileiou (ORCID: orcid.org/0000-0001-5047-3920)
  • Julie Barnett
  • Susan Thorpe
  • Terry Young

BMC Medical Research Methodology, volume 18, article number 148 (2018)

Choosing a suitable sample size in qualitative research is an area of conceptual debate and practical uncertainty. That sample size principles, guidelines and tools have been developed to enable researchers to set, and justify the acceptability of, their sample size is an indication that the issue constitutes an important marker of the quality of qualitative research. Nevertheless, research shows that sample size sufficiency reporting is often poor, if not absent, across a range of disciplinary fields.

A systematic analysis of single-interview-per-participant designs within three health-related journals from the disciplines of psychology, sociology and medicine, over a 15-year period, was conducted to examine whether and how sample sizes were justified and how sample size was characterised and discussed by authors. Data pertinent to sample size were extracted and analysed using qualitative and quantitative analytic techniques.

Our findings demonstrate that provision of sample size justifications in qualitative health research is limited; is not contingent on the number of interviews; and relates to the journal of publication. Defence of sample size was most frequently supported across all three journals with reference to the principle of saturation and to pragmatic considerations. Qualitative sample sizes were predominantly – and often without justification – characterised as insufficient (i.e., ‘small’) and discussed in the context of study limitations. Sample size insufficiency was seen to threaten the validity and generalizability of studies’ results, with the latter being frequently conceived in nomothetic terms.

Conclusions

We recommend, firstly, that qualitative health researchers be more transparent about evaluations of their sample size sufficiency, situating these within broader and more encompassing assessments of data adequacy . Secondly, we invite researchers critically to consider how saturation parameters found in prior methodological studies and sample size community norms might best inform, and apply to, their own project and encourage that data adequacy is best appraised with reference to features that are intrinsic to the study at hand. Finally, those reviewing papers have a vital role in supporting and encouraging transparent study-specific reporting.

Sample adequacy in qualitative inquiry pertains to the appropriateness of the sample composition and size . It is an important consideration in evaluations of the quality and trustworthiness of much qualitative research [ 1 ] and is implicated – particularly for research that is situated within a post-positivist tradition and retains a degree of commitment to realist ontological premises – in appraisals of validity and generalizability [ 2 , 3 , 4 , 5 ].

Samples in qualitative research tend to be small in order to support the depth of case-oriented analysis that is fundamental to this mode of inquiry [ 5 ]. Additionally, qualitative samples are purposive, that is, selected by virtue of their capacity to provide richly-textured information, relevant to the phenomenon under investigation. As a result, purposive sampling [ 6 , 7 ] – as opposed to probability sampling employed in quantitative research – selects ‘information-rich’ cases [ 8 ]. Indeed, recent research demonstrates the greater efficiency of purposive sampling compared to random sampling in qualitative studies [ 9 ], supporting related assertions long put forward by qualitative methodologists.

Sample size in qualitative research has been the subject of enduring discussions [ 4 , 10 , 11 ]. Whilst the quantitative research community has established relatively straightforward statistics-based rules to set sample sizes precisely, the intricacies of qualitative sample size determination and assessment arise from the methodological, theoretical, epistemological, and ideological pluralism that characterises qualitative inquiry (for a discussion focused on the discipline of psychology see [ 12 ]). This militates against clear-cut guidelines that can be invariably applied. Despite these challenges, various conceptual developments have sought to address this issue through guidance and principles [ 4 , 10 , 11 , 13 , 14 , 15 , 16 , 17 , 18 , 19 , 20 ], and, more recently, an evidence-based approach to sample size determination has sought to ground the discussion empirically [ 21 , 22 , 23 , 24 , 25 , 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 , 34 , 35 ].

Focusing on single-interview-per-participant qualitative designs, the present study aims to further contribute to the dialogue of sample size in qualitative research by offering empirical evidence around justification practices associated with sample size. We next review the existing conceptual and empirical literature on sample size determination.

Sample size in qualitative research: Conceptual developments and empirical investigations

Qualitative research experts argue that there is no straightforward answer to the question of ‘how many’ and that sample size is contingent on a number of factors relating to epistemological, methodological and practical issues [ 36 ]. Sandelowski [ 4 ] recommends that qualitative sample sizes are large enough to allow the unfolding of a ‘new and richly textured understanding’ of the phenomenon under study, but small enough so that the ‘deep, case-oriented analysis’ (p. 183) of qualitative data is not precluded. Morse [ 11 ] posits that the more useable data are collected from each person, the fewer participants are needed. She invites researchers to take into account parameters, such as the scope of study, the nature of topic (i.e. complexity, accessibility), the quality of data, and the study design. Indeed, the level of structure of questions in qualitative interviewing has been found to influence the richness of data generated [ 37 ], and so, requires attention; empirical research shows that open questions, which are asked later on in the interview, tend to produce richer data [ 37 ].

Beyond such guidance, specific numerical recommendations have also been proffered, often based on experts’ experience of qualitative research. For example, Green and Thorogood [ 38 ] maintain that the experience of most qualitative researchers conducting an interview-based study with a fairly specific research question is that little new information is generated after interviewing 20 people or so belonging to one analytically relevant participant ‘category’ (pp. 102–104). Ritchie et al. [ 39 ] suggest that studies employing individual interviews conduct no more than 50 interviews so that researchers are able to manage the complexity of the analytic task. Similarly, Britten [ 40 ] notes that large interview studies will often comprise 50 to 60 people. Experts have also offered numerical guidelines tailored to different theoretical and methodological traditions and specific research approaches, e.g. grounded theory, phenomenology [ 11 , 41 ]. More recently, a quantitative tool was proposed [ 42 ] to support a priori sample size determination based on estimates of the prevalence of themes in the population. Nevertheless, this more formulaic approach raised criticisms relating to assumptions about the conceptual [ 43 ] and ontological status of ‘themes’ [ 44 ] and the linearity ascribed to the processes of sampling, data collection and data analysis [ 45 ].

In terms of principles, Lincoln and Guba [ 17 ] proposed that sample size determination be guided by the criterion of informational redundancy , that is, sampling can be terminated when no new information is elicited by sampling more units. Following the logic of informational comprehensiveness, Malterud et al. [ 18 ] introduced the concept of information power as a pragmatic guiding principle, suggesting that the more information power the sample provides, the smaller the sample size needs to be, and vice versa.

Undoubtedly, the most widely used principle for determining sample size and evaluating its sufficiency is that of saturation . The notion of saturation originates in grounded theory [ 15 ] – a qualitative methodological approach explicitly concerned with empirically-derived theory development – and is inextricably linked to theoretical sampling. Theoretical sampling describes an iterative process of data collection, data analysis and theory development whereby data collection is governed by emerging theory rather than predefined characteristics of the population. Grounded theory saturation (often called theoretical saturation) concerns the theoretical categories – as opposed to data – that are being developed and becomes evident when ‘gathering fresh data no longer sparks new theoretical insights, nor reveals new properties of your core theoretical categories’ [ 46 p. 113]. Saturation in grounded theory, therefore, does not equate to the more common focus on data repetition and moves beyond a singular focus on sample size as the justification of sampling adequacy [ 46 , 47 ]. Sample size in grounded theory cannot be determined a priori as it is contingent on the evolving theoretical categories.

Saturation – often under the terms of ‘data’ or ‘thematic’ saturation – has diffused into several qualitative communities beyond its origins in grounded theory. Alongside the expansion of its meaning, being variously equated with ‘no new data’, ‘no new themes’, and ‘no new codes’, saturation has emerged as the ‘gold standard’ in qualitative inquiry [ 2 , 26 ]. Nevertheless, and as Morse [ 48 ] asserts, whilst saturation is the most frequently invoked ‘guarantee of qualitative rigor’, ‘it is the one we know least about’ (p. 587). Certainly researchers caution that saturation is less applicable to, or appropriate for, particular types of qualitative research (e.g. conversation analysis, [ 49 ]; phenomenological research, [ 50 ]) whilst others reject the concept altogether [ 19 , 51 ].

Methodological studies in this area aim to provide guidance about saturation and develop a practical application of processes that ‘operationalise’ and evidence saturation. Guest, Bunce, and Johnson [ 26 ] analysed 60 interviews and found that saturation of themes was reached by the twelfth interview. They noted that their sample was relatively homogeneous, their research aims focused, so studies of more heterogeneous samples and with a broader scope would be likely to need a larger size to achieve saturation. Extending the enquiry to multi-site, cross-cultural research, Hagaman and Wutich [ 28 ] showed that sample sizes of 20 to 40 interviews were required to achieve data saturation of meta-themes that cut across research sites. In a theory-driven content analysis, Francis et al. [ 25 ] reached data saturation at the 17th interview for all their pre-determined theoretical constructs. The authors further proposed two main principles upon which specification of saturation be based: (a) researchers should a priori specify an initial analysis sample (e.g. 10 interviews) which will be used for the first round of analysis and (b) a stopping criterion , that is, a number of interviews (e.g. 3) that needs to be further conducted, the analysis of which will not yield any new themes or ideas. For greater transparency, Francis et al. [ 25 ] recommend that researchers present cumulative frequency graphs supporting their judgment that saturation was achieved. A comparative method for themes saturation (CoMeTS) has also been suggested [ 23 ] whereby the findings of each new interview are compared with those that have already emerged and if it does not yield any new theme, the ‘saturated terrain’ is assumed to have been established. Because the order in which interviews are analysed can influence saturation thresholds depending on the richness of the data, Constantinou et al. [ 23 ] recommend reordering and re-analysing interviews to confirm saturation. Hennink, Kaiser and Marconi’s [ 29 ] methodological study sheds further light on the problem of specifying and demonstrating saturation. Their analysis of interview data showed that code saturation (i.e. the point at which no additional issues are identified) was achieved at 9 interviews, but meaning saturation (i.e. the point at which no further dimensions, nuances, or insights of issues are identified) required 16–24 interviews. Although breadth can be achieved relatively soon, especially for high-prevalence and concrete codes, depth requires additional data, especially for codes of a more conceptual nature.
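To make Francis et al.'s proposal concrete, the following sketch operationalises the two principles (an initial analysis sample plus a stopping criterion) as a simple rule over coded interviews. It is a hypothetical illustration only: the function, its parameters, and the toy data are ours, not code from any of the studies cited above, and it assumes, for simplicity, that each interview has already been reduced to a set of theme labels.

```python
def saturation_point(interviews, initial_sample=10, stopping_criterion=3):
    """Return the 1-based index of the interview at which saturation is declared,
    or None if it is never reached.

    `interviews` is a list of sets, each holding the theme labels coded in one
    interview (a simplifying assumption made purely for illustration).
    """
    seen_themes = set()
    run_without_new = 0
    for i, themes in enumerate(interviews, start=1):
        new_themes = themes - seen_themes
        seen_themes |= themes
        run_without_new = 0 if new_themes else run_without_new + 1
        # Only start applying the stopping criterion after the initial analysis sample.
        if i > initial_sample and run_without_new >= stopping_criterion:
            return i
    return None


# Toy example: no new themes appear after the 11th interview.
coded = [{"access", "cost"}, {"cost", "trust"}, {"trust"}, {"stigma"},
         {"access"}, {"cost"}, {"stigma", "family"}, {"family"},
         {"trust"}, {"access"}, {"waiting"}, {"cost"}, {"trust"}, {"access"}]
print(saturation_point(coded))  # -> 14
```

Under these assumptions, saturation is declared at the first interview beyond the initial sample that completes a run of three consecutive interviews contributing no new themes, which is one plausible reading of the stopping criterion described above.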

Critiquing the concept of saturation, Nelson [ 19 ] proposes five conceptual depth criteria in grounded theory projects to assess the robustness of the developing theory: (a) theoretical concepts should be supported by a wide range of evidence drawn from the data; (b) be demonstrably part of a network of inter-connected concepts; (c) demonstrate subtlety; (d) resonate with existing literature; and (e) can be successfully submitted to tests of external validity.

Other work has sought to examine practices of sample size reporting and sufficiency assessment across a range of disciplinary fields and research domains, from nutrition [ 34 ] and health education [ 32 ], to education and the health sciences [ 22 , 27 ], information systems [ 30 ], organisation and workplace studies [ 33 ], human computer interaction [ 21 ], and accounting studies [ 24 ]. Others investigated PhD qualitative studies [ 31 ] and grounded theory studies [ 35 ]. Incomplete and imprecise sample size reporting is commonly pinpointed by these investigations whilst assessment and justifications of sample size sufficiency are even more sporadic.

Sobal [ 34 ] examined the sample size of qualitative studies published in the Journal of Nutrition Education over a period of 30 years. Studies that employed individual interviews ( n  = 30) had an average sample size of 45 individuals and none of these explicitly reported whether their sample size sought and/or attained saturation. A minority of articles discussed how sample-related limitations (with the latter most often concerning the type of sample, rather than the size) limited generalizability. A further systematic analysis [ 32 ] of health education research over 20 years demonstrated that interview-based studies averaged 104 participants (range 2 to 720 interviewees). However, 40% did not report the number of participants. An examination of 83 qualitative interview studies in leading information systems journals [ 30 ] indicated little defence of sample sizes on the basis of recommendations by qualitative methodologists, prior relevant work, or the criterion of saturation. Rather, sample size seemed to correlate with factors such as the journal of publication or the region of study (US vs Europe vs Asia). These results led the authors to call for more rigor in determining and reporting sample size in qualitative information systems research and to recommend optimal sample size ranges for grounded theory (i.e. 20–30 interviews) and single case (i.e. 15–30 interviews) projects.

Similarly, fewer than 10% of articles in organisation and workplace studies provided a sample size justification relating to existing recommendations by methodologists, prior relevant work, or saturation [ 33 ], whilst only 17% of focus group studies in health-related journals provided an explanation of sample size (i.e. number of focus groups), with saturation being the most frequently invoked argument, followed by published sample size recommendations and practical reasons [ 22 ]. The notion of saturation was also invoked by 11 out of the 51 most highly cited studies that Guetterman [ 27 ] reviewed in the fields of education and health sciences, of which six were grounded theory studies, four phenomenological and one a narrative inquiry. Finally, analysing 641 interview-based articles in accounting, Dai et al. [ 24 ] called for more rigor since a significant minority of studies did not report precise sample size.

Despite increasing attention to rigor in qualitative research (e.g. [ 52 ]) and more extensive methodological and analytical disclosures that seek to validate qualitative work [ 24 ], sample size reporting and sufficiency assessment remain inconsistent and partial, if not absent, across a range of research domains.

Objectives of the present study

The present study sought to enrich existing systematic analyses of the customs and practices of sample size reporting and justification by focusing on qualitative research relating to health. Additionally, this study attempted to expand previous empirical investigations by examining how qualitative sample sizes are characterised and discussed in academic narratives. Qualitative health research is an inter-disciplinary field that, due to its affiliation with the medical sciences, often faces views and positions reflective of a quantitative ethos. Thus qualitative health research constitutes an emblematic case that may help to unfold underlying philosophical and methodological differences across the scientific community that are crystallised in considerations of sample size. The present research, therefore, incorporates a comparative element on the basis of three different disciplines engaging with qualitative health research: medicine, psychology, and sociology. We chose to focus our analysis on single-interview-per-participant designs because this is not only a popular and widespread methodological choice in qualitative health research but also the method where consideration of sample size – defined as the number of interviewees – is particularly salient.

Study design

A structured search for articles reporting cross-sectional, interview-based qualitative studies was carried out and eligible reports were systematically reviewed and analysed employing both quantitative and qualitative analytic techniques.

We selected journals which (a) follow a peer review process, (b) are considered high quality and influential in their field as reflected in journal metrics, and (c) are receptive to, and publish, qualitative research (Additional File  1 presents the journals’ editorial positions in relation to qualitative research and sample considerations where available). Three health-related journals were chosen, each representing a different disciplinary field; the British Medical Journal (BMJ) representing medicine, the British Journal of Health Psychology (BJHP) representing psychology, and the Sociology of Health & Illness (SHI) representing sociology.

Search strategy to identify studies

Employing the search function of each individual journal, we used the terms ‘interview*’ AND ‘qualitative’ and limited the results to articles published between 1 January 2003 and 22 September 2017 (i.e. a 15-year review period).

Eligibility criteria

To be eligible for inclusion in the review, the article had to report a cross-sectional study design. Longitudinal studies were thus excluded whilst studies conducted within a broader research programme (e.g. interview studies nested in a trial, as part of a broader ethnography, as part of a longitudinal research) were included if they reported only single-time qualitative interviews. The method of data collection had to be individual, synchronous qualitative interviews (i.e. group interviews, structured interviews and e-mail interviews over a period of time were excluded), and the data had to be analysed qualitatively (i.e. studies that quantified their qualitative data were excluded). Mixed method studies and articles reporting more than one qualitative method of data collection (e.g. individual interviews and focus groups) were excluded. Figure  1 , a PRISMA flow diagram [ 53 ], shows the number of: articles obtained from the searches and screened; papers assessed for eligibility; and articles included in the review (Additional File  2 provides the full list of articles included in the review and their unique identifying code – e.g. BMJ01, BJHP02, SHI03). One review author (KV) assessed the eligibility of all papers identified from the searches. When in doubt, discussions about retaining or excluding articles were held between KV and JB in regular meetings, and decisions were jointly made.

Figure 1. PRISMA flow diagram.

Data extraction and analysis

A data extraction form was developed (see Additional File  3 ) recording three areas of information: (a) information about the article (e.g. authors, title, journal, year of publication etc.); (b) information about the aims of the study, the sample size and any justification for this, the participant characteristics, the sampling technique and any sample-related observations or comments made by the authors; and (c) information about the method or technique(s) of data analysis, the number of researchers involved in the analysis, the potential use of software, and any discussion around epistemological considerations. The Abstract, Methods and Discussion (and/or Conclusion) sections of each article were examined by one author (KV) who extracted all the relevant information. This was directly copied from the articles and, when appropriate, comments, notes and initial thoughts were written down.

To examine the kinds of sample size justifications provided by articles, an inductive content analysis [ 54 ] was initially conducted. On the basis of this analysis, the categories that expressed qualitatively different sample size justifications were developed.

We also extracted or coded quantitative data regarding the following aspects:

  • Journal and year of publication
  • Number of interviews
  • Number of participants
  • Presence of sample size justification(s) (Yes/No)
  • Presence of a particular sample size justification category (Yes/No), and
  • Number of sample size justifications provided

Descriptive and inferential statistical analyses were used to explore these data.
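To illustrate how the extracted variables listed above could be organised for this kind of descriptive exploration, here is a minimal sketch. The records, field names, and values are invented for illustration only and do not reproduce the authors' extraction form or data.

```python
import pandas as pd

# Hypothetical extraction records mirroring the coded variables listed above.
records = [
    {"journal": "BMJ",  "year": 2007, "n_interviews": 21, "n_participants": 21,
     "justified": True,  "justification_categories": ["saturation"]},
    {"journal": "BJHP", "year": 2012, "n_interviews": 13, "n_participants": 13,
     "justified": True,  "justification_categories": ["saturation", "guidelines"]},
    {"journal": "SHI",  "year": 2010, "n_interviews": 50, "n_participants": 50,
     "justified": False, "justification_categories": []},
]
df = pd.DataFrame(records)
df["n_justifications"] = df["justification_categories"].apply(len)

# Descriptive statistics of sample size (number of interviews) per journal.
print(df.groupby("journal")["n_interviews"].describe())
# Proportion of articles providing any sample size justification, per journal.
print(df.groupby("journal")["justified"].mean())
```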

A thematic analysis [ 55 ] was then performed on all scientific narratives that discussed or commented on the sample size of the study. These narratives were evident both in papers that justified their sample size and those that did not. To identify these narratives, in addition to the methods sections, the discussion sections of the reviewed articles were also examined and relevant data were extracted and analysed.

In total, 214 articles – 21 in the BMJ, 53 in the BJHP and 140 in the SHI – were eligible for inclusion in the review. Table  1 provides basic information about the sample sizes – measured in number of interviews – of the studies reviewed across the three journals. Figure  2 depicts the number of eligible articles published each year per journal.

Figure 2. Number of eligible articles published each year per journal. Note: the publication of qualitative studies in the BMJ was significantly reduced from 2012 onwards, which appears to coincide with the initiation of BMJ Open, to which qualitative studies were possibly directed.

Pairwise comparisons following a significant Kruskal-Wallis test indicated that the studies published in the BJHP had significantly ( p  < .001) smaller sample sizes than those published either in the BMJ or the SHI. Sample sizes of BMJ and SHI articles did not differ significantly from each other.
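For readers unfamiliar with these tests, the sketch below shows how such an omnibus comparison and its pairwise follow-ups might be run. The sample size vectors are fabricated placeholders, not the reviewed data, and the code is not the authors' analysis script.

```python
from scipy import stats

# Hypothetical numbers of interviews per study, grouped by journal (placeholder values).
bmj  = [21, 30, 25, 36, 18, 40, 28]
bjhp = [10, 13, 12, 15, 9, 11, 14]
shi  = [25, 50, 30, 45, 20, 35, 60]

# Omnibus Kruskal-Wallis test comparing sample size distributions across the three journals.
h, p = stats.kruskal(bmj, bjhp, shi)
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")

# Pairwise follow-up comparisons (Mann-Whitney U); in practice a correction
# for multiple comparisons would also be applied.
for name, group in [("BMJ", bmj), ("SHI", shi)]:
    u, p = stats.mannwhitneyu(bjhp, group, alternative="two-sided")
    print(f"BJHP vs {name}: U = {u:.1f}, p = {p:.4f}")
```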

Sample size justifications: Results from the quantitative and qualitative content analysis

Ten (47.6%) of the 21 BMJ studies, 26 (49.1%) of the 53 BJHP papers and 24 (17.1%) of the 140 SHI articles provided some sort of sample size justification. As shown in Table  2 , the majority of articles which justified their sample size provided one justification (70% of articles); fourteen studies (25%) provided two distinct justifications; one study (1.7%) gave three justifications and two studies (3.3%) expressed four distinct justifications.

There was no association between the number of interviews (i.e. sample size) conducted and the provision of a justification (rpb = .054, p  = .433). Within journals, Mann-Whitney tests indicated that sample sizes of ‘justifying’ and ‘non-justifying’ articles in the BMJ and SHI did not differ significantly from each other. In the BJHP, ‘justifying’ articles ( Mean rank  = 31.3) had significantly larger sample sizes than ‘non-justifying’ studies ( Mean rank  = 22.7; U = 237.000, p  < .05).

There was a significant association between the journal a paper was published in and the provision of a justification (χ 2 (2) = 23.83, p  < .001). BJHP studies provided a sample size justification significantly more often than would be expected ( z  = 2.9); SHI studies significantly less often ( z  = − 2.4). If an article was published in the BJHP, the odds of providing a justification were 4.8 times higher than if published in the SHI. Similarly if published in the BMJ, the odds of a study justifying its sample size were 4.5 times higher than in the SHI.
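The chi-square statistic and the odds ratios reported here can be re-derived from the counts given above (10 of 21 BMJ, 26 of 53 BJHP, and 24 of 140 SHI articles providing a justification). The sketch below simply illustrates that arithmetic; it is not the authors' analysis code, and the odds ratios it prints (roughly 4.4 and 4.7) differ slightly from the reported 4.5 and 4.8, presumably because of rounding or model-based estimation.

```python
from scipy.stats import chi2_contingency

# Rows: justified / not justified; columns: BMJ, BJHP, SHI (counts taken from the text above).
justified     = [10, 26, 24]
not_justified = [21 - 10, 53 - 26, 140 - 24]

chi2, p, dof, expected = chi2_contingency([justified, not_justified])
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}")  # chi2(2) is approximately 23.8, as reported above

# Odds of providing a justification in each journal, and odds ratios relative to the SHI.
odds = {j: y / n for j, y, n in zip(["BMJ", "BJHP", "SHI"], justified, not_justified)}
print({j: round(o / odds["SHI"], 1) for j, o in odds.items()})  # roughly 4.4, 4.7, 1.0
```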

The qualitative content analysis of the scientific narratives identified eleven different sample size justifications. These are described below and illustrated with excerpts from relevant articles. By way of a summary, the frequency with which these were deployed across the three journals is indicated in Table  3 .

Saturation

Saturation was the most commonly invoked principle (55.4% of all justifications) deployed by studies across all three journals to justify the sufficiency of their sample size. In the BMJ, two studies claimed that they achieved data saturation (BMJ17; BMJ18) and one article referred descriptively to achieving saturation without explicitly using the term (BMJ13). Interestingly, BMJ13 included data in the analysis beyond the point of saturation in search of ‘unusual/deviant observations’ and with a view to establishing findings consistency.

Thirty three women were approached to take part in the interview study. Twenty seven agreed and 21 (aged 21–64, median 40) were interviewed before data saturation was reached (one tape failure meant that 20 interviews were available for analysis). (BMJ17).

No new topics were identified following analysis of approximately two thirds of the interviews; however, all interviews were coded in order to develop a better understanding of how characteristic the views and reported behaviours were, and also to collect further examples of unusual/deviant observations. (BMJ13).

Two articles reported pre-determining their sample size with a view to achieving data saturation (BMJ08 – see extract in section In line with existing research ; BMJ15 – see extract in section Pragmatic considerations ) without further specifying if this was achieved. One paper claimed theoretical saturation (BMJ06) conceived as being when “no further recurring themes emerging from the analysis” whilst another study argued that although the analytic categories were highly saturated, it was not possible to determine whether theoretical saturation had been achieved (BMJ04). One article (BMJ18) cited a reference to support its position on saturation.

In the BJHP, six articles claimed that they achieved data saturation (BJHP21; BJHP32; BJHP39; BJHP48; BJHP49; BJHP52) and one article stated that, given their sample size and the guidelines for achieving data saturation, it anticipated that saturation would be attained (BJHP50).

Recruitment continued until data saturation was reached, defined as the point at which no new themes emerged. (BJHP48).

It has previously been recommended that qualitative studies require a minimum sample size of at least 12 to reach data saturation (Clarke & Braun, 2013; Fugard & Potts, 2014; Guest, Bunce, & Johnson, 2006) Therefore, a sample of 13 was deemed sufficient for the qualitative analysis and scale of this study. (BJHP50).

Two studies argued that they achieved thematic saturation (BJHP28 – see extract in section Sample size guidelines ; BJHP31) and one (BJHP30) article, explicitly concerned with theory development and deploying theoretical sampling, claimed both theoretical and data saturation.

The final sample size was determined by thematic saturation, the point at which new data appears to no longer contribute to the findings due to repetition of themes and comments by participants (Morse, 1995). At this point, data generation was terminated. (BJHP31).

Five studies argued that they achieved (BJHP05; BJHP33; BJHP40; BJHP13 – see extract in section Pragmatic considerations ) or anticipated (BJHP46) saturation without any further specification of the term. BJHP17 referred descriptively to a state of achieved saturation without specifically using the term. Saturation of coding, but not saturation of themes, was claimed to have been reached by one article (BJHP18). Two articles explicitly stated that they did not achieve saturation, instead citing a level of theme completeness (BJHP27) or the replication of themes (BJHP53) as arguments for the sufficiency of their sample size.

Furthermore, data collection ceased on pragmatic grounds rather than at the point when saturation point was reached. Despite this, although nuances within sub-themes were still emerging towards the end of data analysis, the themes themselves were being replicated indicating a level of completeness. (BJHP27).

Finally, one article criticised and explicitly renounced the notion of data saturation claiming that, on the contrary, the criterion of theoretical sufficiency determined its sample size (BJHP16).

According to the original Grounded Theory texts, data collection should continue until there are no new discoveries ( i.e. , ‘data saturation’; Glaser & Strauss, 1967). However, recent revisions of this process have discussed how it is rare that data collection is an exhaustive process and researchers should rely on how well their data are able to create a sufficient theoretical account or ‘theoretical sufficiency’ (Dey, 1999). For this study, it was decided that theoretical sufficiency would guide recruitment, rather than looking for data saturation. (BJHP16).

Ten out of the 20 BJHP articles that employed the argument of saturation used one or more citations relating to this principle.

In the SHI, one article (SHI01) claimed that it achieved category saturation based on authors’ judgment.

This number was not fixed in advance, but was guided by the sampling strategy and the judgement, based on the analysis of the data, of the point at which ‘category saturation’ was achieved. (SHI01).

Three articles described a state of achieved saturation without using the term or specifying what sort of saturation they had achieved (i.e. data, theoretical, thematic saturation) (SHI04; SHI13; SHI30) whilst another four articles explicitly stated that they achieved saturation (SHI100; SHI125; SHI136; SHI137). Two papers stated that they achieved data saturation (SHI73 – see extract in section Sample size guidelines ; SHI113), two claimed theoretical saturation (SHI78; SHI115) and two referred to achieving thematic saturation (SHI87; SHI139) or to saturated themes (SHI29; SHI50).

Recruitment and analysis ceased once theoretical saturation was reached in the categories described below (Lincoln and Guba 1985). (SHI115).

The respondents’ quotes drawn on below were chosen as representative, and illustrate saturated themes. (SHI50).

One article stated that thematic saturation was anticipated with its sample size (SHI94). Briefly referring to the difficulty in pinpointing achievement of theoretical saturation, SHI32 (see extract in section Richness and volume of data ) defended the sufficiency of its sample size on the basis of “the high degree of consensus [that] had begun to emerge among those interviewed”, suggesting that information from interviews was being replicated. Finally, SHI112 (see extract in section Further sampling to check findings consistency ) argued that it achieved saturation of discursive patterns . Seven of the 19 SHI articles cited references to support their position on saturation (see Additional File  4 for the full list of citations used by articles to support their position on saturation across the three journals).

Overall, it is clear that the concept of saturation encompassed a wide range of variants expressed in terms such as saturation, data saturation, thematic saturation, theoretical saturation, category saturation, saturation of coding, saturation of discursive themes, theme completeness. It is noteworthy, however, that although these various claims were sometimes supported with reference to the literature, they were not evidenced in relation to the study at hand.

Pragmatic considerations

The determination of sample size on the basis of pragmatic considerations was the second most frequently invoked argument (9.6% of all justifications) appearing in all three journals. In the BMJ, one article (BMJ15) appealed to pragmatic reasons, relating to time constraints and the difficulty to access certain study populations, to justify the determination of its sample size.

On the basis of the researchers’ previous experience and the literature, [30, 31] we estimated that recruitment of 15–20 patients at each site would achieve data saturation when data from each site were analysed separately. We set a target of seven to 10 caregivers per site because of time constraints and the anticipated difficulty of accessing caregivers at some home based care services. This gave a target sample of 75–100 patients and 35–50 caregivers overall. (BMJ15).

In the BJHP, four articles mentioned pragmatic considerations relating to time or financial constraints (BJHP27 – see extract in section Saturation ; BJHP53), the participant response rate (BJHP13), and the fixed (and thus limited) size of the participant pool from which interviewees were sampled (BJHP18).

We had aimed to continue interviewing until we had reached saturation, a point whereby further data collection would yield no further themes. In practice, the number of individuals volunteering to participate dictated when recruitment into the study ceased (15 young people, 15 parents). Nonetheless, by the last few interviews, significant repetition of concepts was occurring, suggesting ample sampling. (BJHP13).

Finally, three SHI articles explained their sample size with reference to practical aspects: time constraints and project manageability (SHI56), limited availability of respondents and project resources (SHI131), and time constraints (SHI113).

The size of the sample was largely determined by the availability of respondents and resources to complete the study. Its composition reflected, as far as practicable, our interest in how contextual factors (for example, gender relations and ethnicity) mediated the illness experience. (SHI131).

Qualities of the analysis

This sample size justification (8.4% of all justifications) was mainly employed by BJHP articles and referred to an intensive, idiographic and/or latently focused analysis, i.e. that moved beyond description. More specifically, six articles defended their sample size on the basis of an intensive analysis of transcripts and/or the idiographic focus of the study/analysis. Four of these papers (BJHP02; BJHP19; BJHP24; BJHP47) adopted an Interpretative Phenomenological Analysis (IPA) approach.

The current study employed a sample of 10 in keeping with the aim of exploring each participant’s account (Smith et al. , 1999). (BJHP19).

BJHP47 explicitly renounced the notion of saturation within an IPA approach. The other two BJHP articles conducted thematic analysis (BJHP34; BJHP38). The level of analysis – i.e. latent as opposed to a more superficial descriptive analysis – was also invoked as a justification by BJHP38, alongside the argument of an intensive analysis of individual transcripts.

The resulting sample size was at the lower end of the range of sample sizes employed in thematic analysis (Braun & Clarke, 2013). This was in order to enable significant reflection, dialogue, and time on each transcript and was in line with the more latent level of analysis employed, to identify underlying ideas, rather than a more superficial descriptive analysis (Braun & Clarke, 2006). (BJHP38).

Finally, one BMJ paper (BMJ21) defended its sample size with reference to the complexity of the analytic task.

We stopped recruitment when we reached 30–35 interviews, owing to the depth and duration of interviews, richness of data, and complexity of the analytical task. (BMJ21).

Meet sampling requirements

Meeting sampling requirements (7.2% of all justifications) was another argument employed by two BMJ and four SHI articles to explain their sample size. Achieving maximum variation sampling in terms of specific interviewee characteristics determined and explained the sample size of two BMJ studies (BMJ02; BMJ16 – see extract in section Meet research design requirements ).

Recruitment continued until sampling frame requirements were met for diversity in age, sex, ethnicity, frequency of attendance, and health status. (BMJ02).

Regarding the SHI articles, two papers explained their numbers on the basis of their sampling strategy (SHI01 – see extract in section Saturation; SHI23), whilst sampling requirements that would help attain sample heterogeneity in terms of a particular characteristic of interest were cited by one paper (SHI127).

The combination of matching the recruitment sites for the quantitative research and the additional purposive criteria led to 104 phase 2 interviews (Internet (OLC): 21; Internet (FTF): 20; Gyms (FTF): 23; HIV testing (FTF): 20; HIV treatment (FTF): 20). (SHI23). Of the fifty interviews conducted, thirty were translated from Spanish into English. These thirty, from which we draw our findings, were chosen for translation based on heterogeneity in depressive symptomology and educational attainment. (SHI127).

Finally, the pre-determination of sample size on the basis of sampling requirements was stated by one article though this was not used to justify the number of interviews (SHI10).

Sample size guidelines

Five BJHP articles (BJHP28; BJHP38 – see extract in section Qualities of the analysis ; BJHP46; BJHP47; BJHP50 – see extract in section Saturation ) and one SHI paper (SHI73) relied on citing existing sample size guidelines or norms within research traditions to determine and subsequently defend their sample size (7.2% of all justifications).

Sample size guidelines suggested a range between 20 and 30 interviews to be adequate (Creswell, 1998). Interviewer and note taker agreed that thematic saturation, the point at which no new concepts emerge from subsequent interviews (Patton, 2002), was achieved following completion of 20 interviews. (BJHP28). Interviewing continued until we deemed data saturation to have been reached (the point at which no new themes were emerging). Researchers have proposed 30 as an approximate or working number of interviews at which one could expect to be reaching theoretical saturation when using a semi-structured interview approach (Morse 2000), although this can vary depending on the heterogeneity of respondents interviewed and complexity of the issues explored. (SHI73).

In line with existing research

Sample sizes of published literature in the area of the subject matter under investigation (3.5% of all justifications) were used by two BMJ articles as guidance and a precedent for determining and defending their own sample size (BMJ08; BMJ15 – see extract in section Pragmatic considerations).

We drew participants from a list of prisoners who were scheduled for release each week, sampling them until we reached the target of 35 cases, with a view to achieving data saturation within the scope of the study and sufficient follow-up interviews and in line with recent studies [8–10]. (BMJ08).

Similarly, BJHP38 (see extract in section Qualities of the analysis ) claimed that its sample size was within the range of sample sizes of published studies that use its analytic approach.

Richness and volume of data

BMJ21 (see extract in section Qualities of the analysis ) and SHI32 referred to the richness, detailed nature, and volume of data collected (2.3% of all justifications) to justify the sufficiency of their sample size.

Although there were more potential interviewees from those contacted by postcode selection, it was decided to stop recruitment after the 10th interview and focus on analysis of this sample. The material collected was considerable and, given the focused nature of the study, extremely detailed. Moreover, a high degree of consensus had begun to emerge among those interviewed, and while it is always difficult to judge at what point ‘theoretical saturation’ has been reached, or how many interviews would be required to uncover exception(s), it was felt the number was sufficient to satisfy the aims of this small in-depth investigation (Strauss and Corbin 1990). (SHI32).

Meet research design requirements

Determining the sample size so that it was in line with, and served the requirements of, the research design that the study adopted (2.3% of all justifications) was another justification used by two BMJ papers (BMJ16; BMJ08 – see extract in section In line with existing research).

We aimed for diverse, maximum variation samples [20] totalling 80 respondents from different social backgrounds and ethnic groups and those bereaved due to different types of suicide and traumatic death. We could have interviewed a smaller sample at different points in time (a qualitative longitudinal study) but chose instead to seek a broad range of experiences by interviewing those bereaved many years ago and others bereaved more recently; those bereaved in different circumstances and with different relations to the deceased; and people who lived in different parts of the UK; with different support systems and coroners’ procedures (see Tables 1 and 2 for more details). (BMJ16).

Researchers’ previous experience

The researchers’ previous experience (possibly referring to experience with qualitative research) was invoked by BMJ15 (see extract in section Pragmatic considerations ) as a justification for the determination of sample size.

Nature of study

One BJHP paper argued that the sample size was appropriate for the exploratory nature of the study (BJHP38).

A sample of eight participants was deemed appropriate because of the exploratory nature of this research and the focus on identifying underlying ideas about the topic. (BJHP38).

Further sampling to check findings consistency

Finally, SHI112 argued that once it had achieved saturation of discursive patterns, further sampling was decided and conducted to check for consistency of the findings.

Within each of the age-stratified groups, interviews were randomly sampled until saturation of discursive patterns was achieved. This resulted in a sample of 67 interviews. Once this sample had been analysed, one further interview from each age-stratified group was randomly chosen to check for consistency of the findings. Using this approach it was possible to more carefully explore children’s discourse about the ‘I’, agency, relationality and power in the thematic areas, revealing the subtle discursive variations described in this article. (SHI112).

Thematic analysis of passages discussing sample size

This analysis resulted in two overarching thematic areas: the first concerned the variation in the characterisation of sample size sufficiency, and the second related to the perceived threats deriving from sample size insufficiency.

Characterisations of sample size sufficiency

The analysis showed that there were three main characterisations of the sample size in the articles that provided relevant comments and discussion: (a) the vast majority of these qualitative studies (n = 42) considered their sample size as ‘small’ and this was seen and discussed as a limitation; only two articles viewed their small sample size as desirable and appropriate; (b) a minority of articles (n = 4) proclaimed that their achieved sample size was ‘sufficient’; and (c) finally, a small group of studies (n = 5) characterised their sample size as ‘large’. Whilst achieving a ‘large’ sample size was sometimes viewed positively because it led to richer results, there were also occasions when a large sample size was problematic rather than desirable.

‘Small’ but why and for whom?

A number of articles which characterised their sample size as ‘small’ did so against an implicit or explicit quantitative framework of reference. Interestingly, three studies that claimed to have achieved data saturation or ‘theoretical sufficiency’ with their sample size nevertheless noted their ‘small’ sample size as a limitation in their discussion, raising the question of why, or for whom, the sample size was considered small given that the qualitative criterion of saturation had been satisfied.

The current study has a number of limitations. The sample size was small (n = 11) and, however, large enough for no new themes to emerge. (BJHP39). The study has two principal limitations. The first of these relates to the small number of respondents who took part in the study. (SHI73).

Other articles appeared to accept and acknowledge that their sample was flawed because of its small size (as well as other compositional ‘deficits’ e.g. non-representativeness, biases, self-selection) or anticipated that they might be criticized for their small sample size. It seemed that the imagined audience – perhaps reviewer or reader – was one inclined to hold the tenets of quantitative research, and certainly one to whom it was important to indicate the recognition that small samples were likely to be problematic. That one’s sample might be thought small was often construed as a limitation couched in a discourse of regret or apology.

Very occasionally, the articulation of the small size as a limitation was explicitly aligned against an espoused positivist framework and quantitative research.

This study has some limitations. Firstly, the 100 incidents sample represents a small number of the total number of serious incidents that occurs every year. 26 We sent out a nationwide invitation and do not know why more people did not volunteer for the study. Our lack of epidemiological knowledge about healthcare incidents, however, means that determining an appropriate sample size continues to be difficult. (BMJ20).

Indicative of an apparent oscillation of qualitative researchers between the different requirements and protocols demarcating the quantitative and qualitative worlds, there were a few instances of articles which briefly recognised their ‘small’ sample size as a limitation, but then defended their study on more qualitative grounds, such as their ability and success at capturing the complexity of experience and delving into the idiographic, and at generating particularly rich data.

This research, while limited in size, has sought to capture some of the complexity attached to men’s attitudes and experiences concerning incomes and material circumstances. (SHI35). Our numbers are small because negotiating access to social networks was slow and labour intensive, but our methods generated exceptionally rich data. (BMJ21). This study could be criticised for using a small and unrepresentative sample. Given that older adults have been ignored in the research concerning suntanning, fair-skinned older adults are the most likely to experience skin cancer, and women privilege appearance over health when it comes to sunbathing practices, our study offers depth and richness of data in a demographic group much in need of research attention. (SHI57).

‘Good enough’ sample sizes

Only four articles expressed some degree of confidence that their achieved sample size was sufficient. For example, SHI139, in line with the justification of thematic saturation that it offered, expressed trust in the sufficiency of its sample size despite the poor response rate. Similarly, BJHP04, which did not provide a sample size justification, argued that it targeted a larger sample size in order to eventually recruit a sufficient number of interviewees, given the anticipated low response rate.

Twenty-three people with type I diabetes from the target population of 133 ( i.e. 17.3%) consented to participate but four did not then respond to further contacts (total N = 19). The relatively low response rate was anticipated, due to the busy life-styles of young people in the age range, the geographical constraints, and the time required to participate in a semi-structured interview, so a larger target sample allowed a sufficient number of participants to be recruited. (BJHP04).

Two other articles (BJHP35; SHI32) linked the claimed sufficiency to the scope (i.e. ‘small, in-depth investigation’), aims and nature (i.e. ‘exploratory’) of their studies, thus anchoring their numbers to the particular context of their research. Nevertheless, claims of sample size sufficiency were sometimes undermined when they were juxtaposed with an acknowledgement that a larger sample size would be more scientifically productive.

Although our sample size was sufficient for this exploratory study, a more diverse sample including participants with lower socioeconomic status and more ethnic variation would be informative. A larger sample could also ensure inclusion of a more representative range of apps operating on a wider range of platforms. (BJHP35).

‘Large’ sample sizes - Promise or peril?

Three articles (BMJ13; BJHP05; BJHP48), which all provided the justification of saturation, characterised their sample size as ‘large’ and narrated this oversufficiency in positive terms, as it allowed richer data and findings and enhanced the potential for generalisation. The type of generalisation aspired to (BJHP48) was not, however, further specified.

This study used rich data provided by a relatively large sample of expert informants on an important but under-researched topic. (BMJ13). Qualitative research provides a unique opportunity to understand a clinical problem from the patient’s perspective. This study had a large diverse sample, recruited through a range of locations and used in-depth interviews which enhance the richness and generalizability of the results. (BJHP48).

And whilst a ‘large’ sample size was endorsed and valued by some qualitative researchers, within the psychological tradition of IPA, a ‘large’ sample size was counter-normative and therefore needed to be justified. Four BJHP studies, all adopting IPA, expressed the appropriateness or desirability of ‘small’ sample sizes (BJHP41; BJHP45) or hastened to explain why they included a larger than typical sample size (BJHP32; BJHP47). For example, BJHP32 below provides a rationale for how an IPA study can accommodate a large sample size and how this was indeed suitable for the purposes of the particular research. To strengthen the explanation for choosing a non-normative sample size, previous IPA research citing a similar sample size approach is used as a precedent.

Small scale IPA studies allow in-depth analysis which would not be possible with larger samples (Smith et al. , 2009). (BJHP41). Although IPA generally involves intense scrutiny of a small number of transcripts, it was decided to recruit a larger diverse sample as this is the first qualitative study of this population in the United Kingdom (as far as we know) and we wanted to gain an overview. Indeed, Smith, Flowers, and Larkin (2009) agree that IPA is suitable for larger groups. However, the emphasis changes from an in-depth individualistic analysis to one in which common themes from shared experiences of a group of people can be elicited and used to understand the network of relationships between themes that emerge from the interviews. This large-scale format of IPA has been used by other researchers in the field of false-positive research. Baillie, Smith, Hewison, and Mason (2000) conducted an IPA study, with 24 participants, of ultrasound screening for chromosomal abnormality; they found that this larger number of participants enabled them to produce a more refined and cohesive account. (BJHP32).

The IPA articles found in the BJHP were the only instances where a ‘small’ sample size was advocated and a ‘large’ sample size problematized and defended. These IPA studies illustrate that the characterisation of sample size sufficiency can be a function of researchers’ theoretical and epistemological commitments rather than the result of an ‘objective’ sample size assessment.

Threats from sample size insufficiency

As shown above, the majority of articles that commented on their sample size simultaneously characterised it as small and problematic. On those occasions that authors did not simply cite their ‘small’ sample size as a study limitation but rather continued and provided an account of how and why a small sample size was problematic, two important scientific qualities of the research seemed to be threatened: the generalizability and validity of results.

Generalizability

Those who characterised their sample as ‘small’ connected this to the limited potential for generalization of the results. Other features related to the sample – often some kind of compositional particularity – were also linked to limited potential for generalisation. Though it was not always explicitly articulated what form of generalisation the articles referred to (see BJHP09), generalisation was mostly conceived in nomothetic terms, that is, it concerned the potential to draw inferences from the sample to the broader study population (‘representational generalisation’ – see BJHP31) and less often to other populations or cultures.

It must be noted that samples are small and whilst in both groups the majority of those women eligible participated, generalizability cannot be assumed. (BJHP09). The study’s limitations should be acknowledged: Data are presented from interviews with a relatively small group of participants, and thus, the views are not necessarily generalizable to all patients and clinicians. In particular, patients were only recruited from secondary care services where COFP diagnoses are typically confirmed. The sample therefore is unlikely to represent the full spectrum of patients, particularly those who are not referred to, or who have been discharged from dental services. (BJHP31).

Without explicitly using the term generalisation, two SHI articles noted how their ‘small’ sample size imposed limits on ‘the extent that we can extrapolate from these participants’ accounts’ (SHI114) or to the possibility ‘to draw far-reaching conclusions from the results’ (SHI124).

Interestingly, only a minority of articles alluded to, or invoked, a type of generalisation that is aligned with qualitative research, that is, idiographic generalisation (i.e. generalisation that can be made from and about cases [ 5 ]). These articles, all published in the discipline of sociology, defended their findings in terms of the possibility of drawing logical and conceptual inferences to other contexts and of generating understanding that has the potential to advance knowledge, despite their ‘small’ size. One article (SHI139) clearly contrasted nomothetic (statistical) generalisation to idiographic generalisation, arguing that the lack of statistical generalizability does not nullify the ability of qualitative research to still be relevant beyond the sample studied.

Further, these data do not need to be statistically generalisable for us to draw inferences that may advance medicalisation analyses (Charmaz 2014). These data may be seen as an opportunity to generate further hypotheses and are a unique application of the medicalisation framework. (SHI139). Although a small-scale qualitative study related to school counselling, this analysis can be usefully regarded as a case study of the successful utilisation of mental health-related resources by adolescents. As many of the issues explored are of relevance to mental health stigma more generally, it may also provide insights into adult engagement in services. It shows how a sociological analysis, which uses positioning theory to examine how people negotiate, partially accept and simultaneously resist stigmatisation in relation to mental health concerns, can contribute to an elucidation of the social processes and narrative constructions which may maintain as well as bridge the mental health service gap. (SHI103).

Only one article (SHI30) used the term transferability to argue for the potential wider relevance of the results, which was thought to be more a product of the composition of the sample (i.e. a diverse sample) than of the sample size.

The second major concern that arose from a ‘small’ sample size pertained to the internal validity of findings (i.e. here the term is used to denote the ‘truth’ or credibility of research findings). Authors expressed uncertainty about the degree of confidence in particular aspects or patterns of their results, primarily those that concerned some form of differentiation on the basis of relevant participant characteristics.

The information source preferred seemed to vary according to parents’ education; however, the sample size is too small to draw conclusions about such patterns. (SHI80). Although our numbers were too small to demonstrate gender differences with any certainty, it does seem that the biomedical and erotic scripts may be more common in the accounts of men and the relational script more common in the accounts of women. (SHI81).

In other instances, articles expressed uncertainty about whether their results accounted for the full spectrum and variation of the phenomenon under investigation. In other words, a ‘small’ sample size (alongside compositional ‘deficits’ such as a not statistically representative sample) was seen to threaten the ‘content validity’ of the results which in turn led to constructions of the study conclusions as tentative.

Data collection ceased on pragmatic grounds rather than when no new information appeared to be obtained ( i.e. , saturation point). As such, care should be taken not to overstate the findings. Whilst the themes from the initial interviews seemed to be replicated in the later interviews, further interviews may have identified additional themes or provided more nuanced explanations. (BJHP53). …it should be acknowledged that this study was based on a small sample of self-selected couples in enduring marriages who were not broadly representative of the population. Thus, participants may not be representative of couples that experience postnatal PTSD. It is therefore unlikely that all the key themes have been identified and explored. For example, couples who were excluded from the study because the male partner declined to participate may have been experiencing greater interpersonal difficulties. (BJHP03).

In other instances, articles attempted to preserve a degree of credibility of their results, despite the recognition that the sample size was ‘small’. Clarity and sharpness of emerging themes and alignment with previous relevant work were the arguments employed to warrant the validity of the results.

This study focused on British Chinese carers of patients with affective disorders, using a qualitative methodology to synthesise the sociocultural representations of illness within this community. Despite the small sample size, clear themes emerged from the narratives that were sufficient for this exploratory investigation. (SHI98).

Discussion

The present study sought to examine how qualitative sample sizes in health-related research are characterised and justified. In line with previous studies [22, 30, 33, 34] the findings demonstrate that reporting of sample size sufficiency is limited; just over 50% of articles in the BMJ and BJHP and 82% in the SHI did not provide any sample size justification. Providing a sample size justification was not related to the number of interviews conducted, but it was associated with the journal that the article was published in, indicating the influence of disciplinary or publishing norms, also reported in prior research [30]. This lack of transparency about sample size sufficiency is problematic given that most qualitative researchers would agree that it is an important marker of quality [56, 57]. Moreover, with the rise of qualitative research in social sciences, efforts to synthesise existing evidence and assess its quality are obstructed by poor reporting [58, 59].

When authors justified their sample size, our findings indicate that sufficiency was mostly appraised with reference to features that were intrinsic to the study, in agreement with general advice on sample size determination [4, 11, 36]. The principle of saturation was the most commonly invoked argument [22], accounting for 55% of all justifications. A wide range of variants of saturation was evident, corroborating the proliferation of the meaning of the term [49] and reflecting different underlying conceptualisations or models of saturation [20]. Nevertheless, claims of saturation were never substantiated in relation to procedures conducted in the study itself, endorsing similar observations in the literature [25, 30, 47]. Claims of saturation were sometimes supported with citations of other literature, suggesting a removal of the concept away from the characteristics of the study at hand. Pragmatic considerations, such as resource constraints or participant response rate and availability, constituted the second most frequently used argument, accounting for approximately 10% of justifications; another 23% of justifications also represented intrinsic-to-the-study characteristics (i.e. qualities of the analysis, meeting sampling or research design requirements, richness and volume of the data obtained, nature of study, further sampling to check findings consistency).

Only 12% of mentions of sample size justification pertained to arguments that were external to the study at hand, in the form of existing sample size guidelines and prior research that sets precedents. Whilst community norms and prior research can establish useful rules of thumb for estimating sample sizes [60] – and reveal what sizes are more likely to be acceptable within research communities – researchers should avoid adopting these norms uncritically, especially when such guidelines [e.g. 30, 35] might be based on research that does not provide adequate evidence of sample size sufficiency. Similarly, whilst methodological research that seeks to demonstrate the achievement of saturation is invaluable, since it explicates the parameters upon which saturation is contingent and indicates when a research project is likely to require a smaller or a larger sample [e.g. 29], specific numbers at which saturation was achieved within these projects cannot be routinely extrapolated to other projects. We concur with existing views [11, 36] that the consideration of the characteristics of the study at hand, such as the epistemological and theoretical approach, the nature of the phenomenon under investigation, the aims and scope of the study, the quality and richness of data, or the researcher’s experience and skills in conducting qualitative research, should be the primary guide in determining sample size and assessing its sufficiency.

Moreover, although numbers in qualitative research are not unimportant [61], sample size should not be considered alone but should be embedded in the more encompassing examination of data adequacy [56, 57]. Erickson’s [62] dimensions of ‘evidentiary adequacy’ are useful here. He explains the concept in terms of adequate amounts of evidence, adequate variety in kinds of evidence, adequate interpretive status of evidence, adequate disconfirming evidence, and adequate discrepant case analysis. Not all dimensions will be relevant across all qualitative research designs, but this illustrates the thickness of the concept of data adequacy, taking it beyond sample size.

The present research also demonstrated that sample sizes were commonly seen as ‘small’ and insufficient and discussed as a limitation. Often unjustified (and in two cases incongruent with the articles’ own claims of saturation), these findings imply that sample size in qualitative health research is often adversely judged (or expected to be judged) against an implicit, yet omnipresent, quasi-quantitative standpoint. Indeed there were a few instances in our data where authors appeared, possibly in response to reviewers, to resist some sort of quantification of their results. This implicit reference point became more apparent when authors discussed the threats deriving from an insufficient sample size. Whilst the concerns about internal validity might be legitimate to the extent that qualitative research projects, which are broadly related to realism, are set to examine phenomena in sufficient breadth and depth, the concerns around generalizability revealed a conceptualisation that is not compatible with purposive sampling. The limited potential for generalisation, as a result of a small sample size, was often discussed in nomothetic, statistical terms. Only occasionally was analytic or idiographic generalisation invoked to warrant the value of the study’s findings [5, 17].

Strengths and limitations of the present study

We note, first, the limited number of health-related journals reviewed, so that only a ‘snapshot’ of qualitative health research has been captured. Examining additional disciplines (e.g. nursing sciences) as well as inter-disciplinary journals would add to the findings of this analysis. Nevertheless, our study is the first to provide some comparative insights on the basis of disciplines that are differently attached to the legacy of positivism, and it analysed literature published over a lengthy period of time (15 years). Guetterman [27] also examined health-related literature, but that analysis was restricted to the 26 most highly cited articles published over a period of five years, whilst Carlsen and Glenton’s [22] study concentrated on focus group health research. Moreover, although it was our intention to examine sample size justification in relation to the epistemological and theoretical positions of articles, this proved to be challenging, largely due to the absence of relevant information, or the difficulty of clearly discerning articles’ positions [63] and classifying them under specific approaches (e.g. studies often combined elements from different theoretical and epistemological traditions). We believe that such an analysis would yield useful insights as it links the methodological issue of sample size to the broader philosophical stance of the research. Despite these limitations, the analysis of the characterisation of sample size and of the threats seen to accrue from insufficient sample size enriches our understanding of sample size (in)sufficiency argumentation by linking it to other features of the research. As the peer-review process becomes increasingly public, future research could usefully examine how reporting around sample size sufficiency and data adequacy might be influenced by the interactions between authors and reviewers.

Conclusions

The past decade has seen a growing appetite in qualitative research for an evidence-based approach to sample size determination and to evaluations of the sufficiency of sample size. Despite the conceptual and methodological developments in the area, the findings of the present study confirm previous studies in concluding that appraisals of sample size sufficiency are either absent or poorly substantiated. To ensure and maintain high quality research that will encourage greater appreciation of qualitative work in health-related sciences [64], we argue that qualitative researchers should be more transparent and thorough in their evaluation of sample size as part of their appraisal of data adequacy. We would encourage the practice of appraising sample size sufficiency with close reference to the study at hand and would thus caution against responding to the growing methodological research in this area with a decontextualised application of sample size numerical guidelines, norms and principles. Although researchers might find that sample size community norms serve as useful rules of thumb, we recommend that methodological knowledge is used to critically consider how saturation and other parameters that affect sample size sufficiency pertain to the specifics of the particular project. Those reviewing papers have a vital role in encouraging transparent, study-specific reporting. The review process should support authors to exercise nuanced judgements in decisions about sample size determination in the context of the range of factors that influence sample size sufficiency and the specifics of a particular study. In light of the growing methodological evidence in the area, transparent presentation of such evidence-based judgement is crucial and in time should surely obviate the seemingly routine practice of citing the ‘small’ size of qualitative samples among the study limitations.

A non-parametric test of difference for independent samples was performed since the variable number of interviews violated assumptions of normality according to the standardized scores of skewness and kurtosis (BMJ: z skewness = 3.23, z kurtosis = 1.52; BJHP: z skewness = 4.73, z kurtosis = 4.85; SHI: z skewness = 12.04, z kurtosis = 21.72) and the Shapiro-Wilk test of normality ( p  < .001).
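To make the normality screen and the non-parametric comparison described above concrete, here is a minimal Python sketch using scipy. The interview counts are invented for illustration, and the Kruskal-Wallis test is an assumption on our part, since the passage above does not name the specific non-parametric test used.

```python
# Sketch of the normality screen and a non-parametric group comparison.
# Interview counts per article are hypothetical, grouped by journal.
from scipy import stats

interview_counts = {
    "BMJ":  [12, 18, 25, 30, 45, 36, 20, 15],
    "BJHP": [8, 10, 12, 15, 20, 11, 9, 14],
    "SHI":  [20, 25, 30, 40, 60, 95, 15, 22],
}

for journal, counts in interview_counts.items():
    z_skew, _ = stats.skewtest(counts)       # standardized skewness (z-score)
    z_kurt, _ = stats.kurtosistest(counts)   # standardized kurtosis (warns for small n)
    _, p_norm = stats.shapiro(counts)        # Shapiro-Wilk test of normality
    print(f"{journal}: z_skew={z_skew:.2f}, z_kurt={z_kurt:.2f}, Shapiro-Wilk p={p_norm:.3f}")

# If normality is violated, compare the three journals with a non-parametric
# test; Kruskal-Wallis is one common choice for three independent groups.
h_stat, p_value = stats.kruskal(*interview_counts.values())
print(f"Kruskal-Wallis H={h_stat:.2f}, p={p_value:.3f}")
```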

Abbreviations

BJHP: British Journal of Health Psychology

BMJ: British Medical Journal

IPA: Interpretative Phenomenological Analysis

SHI: Sociology of Health & Illness

Spencer L, Ritchie J, Lewis J, Dillon L. Quality in qualitative evaluation: a framework for assessing research evidence. National Centre for Social Research 2003 https://www.heacademy.ac.uk/system/files/166_policy_hub_a_quality_framework.pdf Accessed 11 May 2018.

Fusch PI, Ness LR. Are we there yet? Data saturation in qualitative research. Qual Rep. 2015;20(9):1408–16.


Robinson OC. Sampling in interview-based qualitative research: a theoretical and practical guide. Qual Res Psychol. 2014;11(1):25–41.


Sandelowski M. Sample size in qualitative research. Res Nurs Health. 1995;18(2):179–83.


Sandelowski M. One is the liveliest number: the case orientation of qualitative research. Res Nurs Health. 1996;19(6):525–9.

Luborsky MR, Rubinstein RL. Sampling in qualitative research: rationale, issues, and methods. Res Aging. 1995;17(1):89–113.

Marshall MN. Sampling for qualitative research. Fam Pract. 1996;13(6):522–6.

Patton MQ. Qualitative evaluation and research methods. 2nd ed. Newbury Park, CA: Sage; 1990.

van Rijnsoever FJ. (I Can’t get no) saturation: a simulation and guidelines for sample sizes in qualitative research. PLoS One. 2017;12(7):e0181689.

Morse JM. The significance of saturation. Qual Health Res. 1995;5(2):147–9.

Morse JM. Determining sample size. Qual Health Res. 2000;10(1):3–5.

Gergen KJ, Josselson R, Freeman M. The promises of qualitative inquiry. Am Psychol. 2015;70(1):1–9.

Borsci S, Macredie RD, Barnett J, Martin J, Kuljis J, Young T. Reviewing and extending the five-user assumption: a grounded procedure for interaction evaluation. ACM Trans Comput Hum Interact. 2013;20(5):29.

Borsci S, Macredie RD, Martin JL, Young T. How many testers are needed to assure the usability of medical devices? Expert Rev Med Devices. 2014;11(5):513–25.

Glaser BG, Strauss AL. The discovery of grounded theory: strategies for qualitative research. Chicago, IL: Aldine; 1967.

Kerr C, Nixon A, Wild D. Assessing and demonstrating data saturation in qualitative inquiry supporting patient-reported outcomes research. Expert Rev Pharmacoecon Outcomes Res. 2010;10(3):269–81.

Lincoln YS, Guba EG. Naturalistic inquiry. London: Sage; 1985.


Malterud K, Siersma VD, Guassora AD. Sample size in qualitative interview studies: guided by information power. Qual Health Res. 2015;26:1753–60.

Nelson J. Using conceptual depth criteria: addressing the challenge of reaching saturation in qualitative research. Qual Res. 2017;17(5):554–70.

Saunders B, Sim J, Kingstone T, Baker S, Waterfield J, Bartlam B, et al. Saturation in qualitative research: exploring its conceptualization and operationalization. Qual Quant. 2017. https://doi.org/10.1007/s11135-017-0574-8 .

Caine K. Local standards for sample size at CHI. In Proceedings of the 2016 CHI conference on human factors in computing systems. 2016;981–992. ACM.

Carlsen B, Glenton C. What about N? A methodological study of sample-size reporting in focus group studies. BMC Med Res Methodol. 2011;11(1):26.

Constantinou CS, Georgiou M, Perdikogianni M. A comparative method for themes saturation (CoMeTS) in qualitative interviews. Qual Res. 2017;17(5):571–88.

Dai NT, Free C, Gendron Y. Interview-based research in accounting 2000–2014: a review. November 2016. https://ssrn.com/abstract=2711022 or https://doi.org/10.2139/ssrn.2711022 . Accessed 17 May 2018.

Francis JJ, Johnston M, Robertson C, Glidewell L, Entwistle V, Eccles MP, et al. What is an adequate sample size? Operationalising data saturation for theory-based interview studies. Psychol Health. 2010;25(10):1229–45.

Guest G, Bunce A, Johnson L. How many interviews are enough? An experiment with data saturation and variability. Field Methods. 2006;18(1):59–82.

Guetterman TC. Descriptions of sampling practices within five approaches to qualitative research in education and the health sciences. Forum Qual Soc Res. 2015;16(2):25. http://nbn-resolving.de/urn:nbn:de:0114-fqs1502256 . Accessed 17 May 2018.

Hagaman AK, Wutich A. How many interviews are enough to identify metathemes in multisited and cross-cultural research? Another perspective on Guest, Bunce, and Johnson’s (2006) landmark study. Field Methods. 2017;29(1):23–41.

Hennink MM, Kaiser BN, Marconi VC. Code saturation versus meaning saturation: how many interviews are enough? Qual Health Res. 2017;27(4):591–608.

Marshall B, Cardon P, Poddar A, Fontenot R. Does sample size matter in qualitative research?: a review of qualitative interviews in IS research. J Comput Inform Syst. 2013;54(1):11–22.

Mason M. Sample size and saturation in PhD studies using qualitative interviews. Forum Qual Soc Res 2010;11(3):8. http://nbn-resolving.de/urn:nbn:de:0114-fqs100387 . Accessed 17 May 2018.

Safman RM, Sobal J. Qualitative sample extensiveness in health education research. Health Educ Behav. 2004;31(1):9–21.

Saunders MN, Townsend K. Reporting and justifying the number of interview participants in organization and workplace research. Br J Manag. 2016;27(4):836–52.

Sobal J. Sample extensiveness in qualitative nutrition education research. J Nutr Educ. 2001;33(4):184–92.

Thomson SB. Sample size and grounded theory. JOAAG. 2010;5(1). http://www.joaag.com/uploads/5_1__Research_Note_1_Thomson.pdf . Accessed 17 May 2018.

Baker SE, Edwards R. How many qualitative interviews is enough?: expert voices and early career reflections on sampling and cases in qualitative research. National Centre for Research Methods Review Paper. 2012; http://eprints.ncrm.ac.uk/2273/4/how_many_interviews.pdf . Accessed 17 May 2018.

Ogden J, Cornwell D. The role of topic, interviewee, and question in predicting rich interview data in the field of health research. Sociol Health Illn. 2010;32(7):1059–71.

Green J, Thorogood N. Qualitative methods for health research. London: Sage; 2004.

Ritchie J, Lewis J, Elam G. Designing and selecting samples. In: Ritchie J, Lewis J, editors. Qualitative research practice: a guide for social science students and researchers. London: Sage; 2003. p. 77–108.

Britten N. Qualitative research: qualitative interviews in medical research. BMJ. 1995;311(6999):251–3.

Creswell JW. Qualitative inquiry and research design: choosing among five approaches. 2nd ed. London: Sage; 2007.

Fugard AJ, Potts HW. Supporting thinking on sample sizes for thematic analyses: a quantitative tool. Int J Soc Res Methodol. 2015;18(6):669–84.

Emmel N. Themes, variables, and the limits to calculating sample size in qualitative research: a response to Fugard and Potts. Int J Soc Res Methodol. 2015;18(6):685–6.

Braun V, Clarke V. (Mis) conceptualising themes, thematic analysis, and other problems with Fugard and Potts’ (2015) sample-size tool for thematic analysis. Int J Soc Res Methodol. 2016;19(6):739–43.

Hammersley M. Sampling and thematic analysis: a response to Fugard and Potts. Int J Soc Res Methodol. 2015;18(6):687–8.

Charmaz K. Constructing grounded theory: a practical guide through qualitative analysis. London: Sage; 2006.

Bowen GA. Naturalistic inquiry and the saturation concept: a research note. Qual Res. 2008;8(1):137–52.

Morse JM. Data were saturated. Qual Health Res. 2015;25(5):587–8.

O’Reilly M, Parker N. ‘Unsatisfactory saturation’: a critical exploration of the notion of saturated sample sizes in qualitative research. Qual Res. 2013;13(2):190–7.

Manen M, Higgins I, Riet P. A conversation with Max van Manen on phenomenology in its original sense. Nurs Health Sci. 2016;18(1):4–7.

Dey I. Grounding grounded theory. San Francisco, CA: Academic Press; 1999.

Hays DG, Wood C, Dahl H, Kirk-Jenkins A. Methodological rigor in journal of counseling & development qualitative research articles: a 15-year review. J Couns Dev. 2016;94(2):172–83.

Moher D, Liberati A, Tetzlaff J, Altman DG, Prisma Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med 2009; 6(7): e1000097.

Hsieh HF, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res. 2005;15(9):1277–88.

Boyatzis RE. Transforming qualitative information: thematic analysis and code development. Thousand Oaks, CA: Sage; 1998.

Levitt HM, Motulsky SL, Wertz FJ, Morrow SL, Ponterotto JG. Recommendations for designing and reviewing qualitative research in psychology: promoting methodological integrity. Qual Psychol. 2017;4(1):2–22.

Morrow SL. Quality and trustworthiness in qualitative research in counseling psychology. J Couns Psychol. 2005;52(2):250–60.

Barroso J, Sandelowski M. Sample reporting in qualitative studies of women with HIV infection. Field Methods. 2003;15(4):386–404.

Glenton C, Carlsen B, Lewin S, Munthe-Kaas H, Colvin CJ, Tunçalp Ö, et al. Applying GRADE-CERQual to qualitative evidence synthesis findings—paper 5: how to assess adequacy of data. Implement Sci. 2018;13(Suppl 1):14.

Onwuegbuzie AJ, Leech NL. A call for qualitative power analyses. Qual Quant. 2007;41(1):105–21.

Sandelowski M. Real qualitative researchers do not count: the use of numbers in qualitative research. Res Nurs Health. 2001;24(3):230–40.

Erickson F. Qualitative methods in research on teaching. In: Wittrock M, editor. Handbook of research on teaching. 3rd ed. New York: Macmillan; 1986. p. 119–61.

Bradbury-Jones C, Taylor J, Herber O. How theory is used and articulated in qualitative research: development of a new typology. Soc Sci Med. 2014;120:135–41.

Greenhalgh T, Annandale E, Ashcroft R, Barlow J, Black N, Bleakley A, et al. An open letter to the BMJ editors on qualitative research. BMJ. 2016;352:i563.


Acknowledgments

We would like to thank Dr. Paula Smith and Katharine Lee for their comments on a previous draft of this paper as well as Natalie Ann Mitchell and Meron Teferra for assisting us with data extraction.

This research was initially conceived of and partly conducted with financial support from the Multidisciplinary Assessment of Technology Centre for Healthcare (MATCH) programme (EP/F063822/1 and EP/G012393/1). The research continued and was completed independent of any support. The funding body did not have any role in the study design, the collection, analysis and interpretation of the data, in the writing of the paper, and in the decision to submit the manuscript for publication. The views expressed are those of the authors alone.

Availability of data and materials

Supporting data can be accessed in the original publications. Additional File 2 lists all eligible studies that were included in the present analysis.

Author information

Authors and Affiliations

Department of Psychology, University of Bath, Building 10 West, Claverton Down, Bath, BA2 7AY, UK

Konstantina Vasileiou & Julie Barnett

School of Psychology, Newcastle University, Ridley Building 1, Queen Victoria Road, Newcastle upon Tyne, NE1 7RU, UK

Susan Thorpe

Department of Computer Science, Brunel University London, Wilfred Brown Building 108, Uxbridge, UB8 3PH, UK

Terry Young


Contributions

JB and TY conceived the study; KV, JB, and TY designed the study; KV identified the articles and extracted the data; KV and JB assessed eligibility of articles; KV, JB, ST, and TY contributed to the analysis of the data, discussed the findings and early drafts of the paper; KV developed the final manuscript; KV, JB, ST, and TY read and approved the manuscript.

Corresponding author

Correspondence to Konstantina Vasileiou .

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

Terry Young is an academic who undertakes research and occasional consultancy in the areas of health technology assessment, information systems, and service design. He is unaware of any direct conflict of interest with respect to this paper. All other authors have no competing interests to declare.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional Files

Additional File 1:

Editorial positions on qualitative research and sample considerations (where available). (DOCX 12 kb)

Additional File 2:

List of eligible articles included in the review ( N  = 214). (DOCX 38 kb)

Additional File 3:

Data Extraction Form. (DOCX 15 kb)

Additional File 4:

Citations used by articles to support their position on saturation. (DOCX 14 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.


About this article

Cite this article

Vasileiou, K., Barnett, J., Thorpe, S. et al. Characterising and justifying sample size sufficiency in interview-based studies: systematic analysis of qualitative health research over a 15-year period. BMC Med Res Methodol 18 , 148 (2018). https://doi.org/10.1186/s12874-018-0594-7


Received : 22 May 2018

Accepted : 29 October 2018

Published : 21 November 2018

DOI : https://doi.org/10.1186/s12874-018-0594-7

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

  • Sample size
  • Sample size justification
  • Sample size characterisation
  • Data adequacy
  • Qualitative health research
  • Qualitative interviews
  • Systematic analysis

BMC Medical Research Methodology

ISSN: 1471-2288


Qualitative study design: Sampling


As part of your research, you will need to identify "who" you need to recruit or work with to answer your research question/s. Often this population will be quite large (such as nurses or doctors across Victoria), or they may be difficult to access (such as people with mental health conditions). Sampling is a way that you can choose a smaller group of your population to research and then generalize the results of this across the larger population.

There are several ways that you can sample. Time, money, and difficulty or ease in reaching your target population will shape your sampling decisions. While there are no hard and fast rules around how many people you should involve in your research, some researchers estimate between 10 and 50 participants as being sufficient depending on your type of research and research question (Creswell & Creswell, 2018). Other study designs may require you to continue gathering data until you are no longer discovering new information ("theoretical saturation") or your data is sufficient to answer your question ("data saturation").
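One way to make the "no longer discovering new information" idea operational is to track how many previously unseen codes each additional interview contributes and stop once several interviews in a row add nothing new. The Python sketch below is purely illustrative: the coded interviews and the two-interview stopping threshold are assumptions, not rules from this guide.

```python
# Illustrative sketch of a saturation-style stopping rule: stop recruiting
# once several consecutive interviews contribute no previously unseen codes.
codes_per_interview = [
    {"cost", "access", "trust"},   # interview 1
    {"access", "stigma"},          # interview 2
    {"trust", "family"},           # interview 3
    {"cost", "stigma"},            # interview 4 (nothing new)
    {"family", "access"},          # interview 5 (nothing new)
]
STOP_AFTER = 2  # consecutive interviews adding no new codes (illustrative threshold)

seen, run_without_new = set(), 0
for i, codes in enumerate(codes_per_interview, start=1):
    new_codes = codes - seen
    seen |= codes
    run_without_new = 0 if new_codes else run_without_new + 1
    print(f"Interview {i}: {len(new_codes)} new code(s)")
    if run_without_new >= STOP_AFTER:
        print(f"No new codes for {STOP_AFTER} interviews in a row - "
              "treating this as saturation and stopping recruitment.")
        break
```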

Why is it important to think about sampling?

It is important to match your sample as far as possible to the broader population that you wish to generalise to. The extent to which your findings can be applied to settings or people outside of who you have researched ("generalisability") can be influenced by your sample and sampling approach. For example, if you have interviewed homeless people in hospital with mental health conditions, you may not be able to generalise the results of this to every person in Australia with a mental health condition, or every person who is homeless, or every person who is in hospital. Your sampling approach will vary depending on what you are researching, but you might use a non-probability or probability (or randomised) approach.

Non-Probability sampling approaches

Non-probability sampling is not randomised, meaning that some members of your population will have a higher chance of being included in your study than others. If you wanted to interview homeless people with mental health conditions in hospital and chose only homeless people with mental health conditions at your local hospital, this would be an example of convenience sampling; you have recruited participants who are close to hand. Other times, you may ask your participants if they can recommend other people who may be interested in the study: this is an example of snowball sampling. Lastly, you might want to ask Chief Executive Officers at rural hospitals how they support their staff's mental health; this is an example of purposive sampling.

Examples of non-probability sampling include:

  • Purposive (judgemental)
  • Convenience

Probability (Randomised) sampling

Probability sampling methods are also called randomised sampling. They are generally preferred in research as this approach means that every person in a population has a chance of being selected for research. Truly randomised sampling is very complex; even a simple random sample requires a random number generator to select participants from a sampling frame (a list of the accessible population). For example, if you were to do a probability sample of homeless people in hospital with a mental health condition, you would need to develop a table of all people matching these criteria, allocate each person a number, and then use a random number generator to find your sample pool. For this reason, while probability sampling is preferred, it may not be feasible to draw out a probability sample.
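As a concrete illustration of the random-number-generator step, the short Python sketch below draws a simple random sample from a hypothetical sampling frame; the frame, identifiers, and sample size are invented for the example.

```python
# Illustrative simple random sample drawn from a sampling frame.
import random

# Sampling frame: every person in the accessible population, each with an ID.
sampling_frame = [f"patient_{n:03d}" for n in range(1, 201)]  # 200 people

random.seed(42)                                # fixed seed for a reproducible draw
sample = random.sample(sampling_frame, k=20)   # 20 participants selected at random
print(sample)
```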

Things to remember:

  • Sampling involves selecting a small subsection of your population to generalise back to a larger population
  • Your sampling approach (probability or non-probability) will reflect how you will recruit your participants, and how generalisable your results are to the wider population
  • How many participants you include in your study will vary based on your research design, research question, and sampling approach

Further reading:

Babbie, E. (2008). The basics of social research (4th ed). Belmont: Thomson Wadsworth

Creswell, J.W. & Creswell, J.D. (2018). Research design: Qualitative, quantitative and mixed methods approaches (5th ed). Thousand Oaks: SAGE

Salkind, N.J. (2010) Encyclopedia of research design. Thousand Oaks: SAGE Publications

Vasileiou, K., Barnett, J., Thorpe, S., & Young, T. (2018). Characterising and justifying sample size sufficiency in interview-based studies: systematic analysis of qualitative health research over a 15-year period. BMC Medical Research Methodology, 18(148)



Qualitative vs Quantitative Research 101

A plain-language explanation (with examples).

By: Kerryn Warren (PhD, MSc, BSc) | June 2020

So, it’s time to decide what type of research approach you’re going to use – qualitative or quantitative . And, chances are, you want to choose the one that fills you with the least amount of dread. The engineers may be keen on quantitative methods because they loathe interacting with human beings and dealing with the “soft” stuff and are far more comfortable with numbers and algorithms. On the other side, the anthropologists are probably more keen on qualitative methods because they literally have the opposite fears.

Qualitative vs Quantitative Research Explained: Data & Analysis

However, when justifying your research, “being afraid” is not a good basis for decision making. Your methodology needs to be informed by your research aims and objectives , not your comfort zone. Plus, it’s quite common that the approach you feared (whether qualitative or quantitative) is actually not that big a deal. Research methods can be learnt (usually a lot faster than you think) and software reduces a lot of the complexity of both quantitative and qualitative data analysis. Conversely, choosing the wrong approach and trying to fit a square peg into a round hole is going to create a lot more pain.

In this post, I’ll explain the qualitative vs quantitative choice in straightforward, plain language with loads of examples. This won’t make you an expert in either, but it should give you a good enough “big picture” understanding so that you can make the right methodological decision for your research.

Qualitative vs Quantitative: Overview  

  • Qualitative analysis 101
  • Quantitative analysis 101
  • How to choose which one to use
  • Data collection and analysis for qualitative and quantitative research
  • The pros and cons of both qualitative and quantitative research
  • A quick word on mixed methods

Qualitative Research 101: The Basics

The bathwater is hot.

Let us unpack that a bit. What does that sentence mean? And is it useful?

The answer is: well, it depends. If you’re wanting to know the exact temperature of the bath, then you’re out of luck. But, if you’re wanting to know how someone perceives the temperature of the bathwater, then that sentence can tell you quite a bit if you wear your qualitative hat .

Many a husband and wife have never enjoyed a bath together because of their strongly held, relationship-destroying perceptions of water temperature (or, so I’m told). And while divorce rates due to differences in water-temperature perception would belong more comfortably in “quantitative research”, analyses of the inevitable arguments and disagreements around water temperature belong snugly in the domain of “qualitative research”. This is because qualitative research helps you understand people’s perceptions and experiences  by systematically coding and analysing the data .

With qualitative research, those heated disagreements (excuse the pun) may be analysed in several ways. From interviews to focus groups to direct observation (ideally outside the bathroom, of course). You, as the researcher, could be interested in how the disagreement unfolds, or the emotive language used in the exchange. You might not even be interested in the words at all, but in the body language of someone who has been forced one too many times into (what they believe) was scalding hot water during what should have been a romantic evening. All of these “softer” aspects can be better understood with qualitative research.

In this way, qualitative research can be incredibly rich and detailed, and is often used as a basis to formulate theories and identify patterns. In other words, it’s great for exploratory research (for example, where your objective is to explore what people think or feel), as opposed to confirmatory research (for example, where your objective is to test a hypothesis). Qualitative research is used to understand human perception, world view and the way we describe our experiences. It’s about exploring and understanding a broad question, often with very few preconceived ideas as to what we may find.

But that’s not the only way to analyse bathwater, of course…

Qualitative research helps you understand people's perceptions and experiences by systematically analysing the data.

Quantitative Research 101: The Basics

The bathwater is 45 degrees Celsius.

Now, what does this mean? How can this be used?

I was once told by someone to whom I am definitely not married that he takes regular cold showers. As a person who is terrified of anything that isn’t body temperature or above, this seemed outright ludicrous. But this raises a question: what is the perfect temperature for a bath? Or at least, what is the temperature of people’s baths more broadly? (Assuming, of course, that they are bathing in water that is ideal to them). To answer this question, you need to now put on your quantitative hat.

If we were to ask 100 people to measure the temperature of their bathwater over the course of a week, we could get the average temperature for each person. Say, for instance, that Jane averages at around 46.3°C. And Billy averages around 42°C. A couple of people may like the unnatural chill of 30°C on the average weekday. And there will be a few of those striving for the 48°C that is apparently the legal limit in England (now, there’s a useless fact for you).

With a quantitative approach, this data can be analysed in heaps of ways. We could, for example, analyse these numbers to find the average temperature, or look to see how much these temperatures vary. We could see if there are significant differences in ideal water temperature between the sexes, or if there is some relationship between ideal bath water temperature and age! We could pop this information onto colourful, vibrant graphs, and use fancy words like “significant”, “correlation” and “eigenvalues”. The opportunities for nerding out are endless…
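To make that concrete, here is a minimal sketch in Python of the kind of analysis described above: per-person averages, the overall spread, and a simple test for a difference between the sexes. The numbers, names and the choice of Welch's t-test are illustrative assumptions (nothing here comes from a real dataset), and it presumes pandas and scipy are available.

# Hypothetical bathwater readings (°C); the data and names are made up.
import pandas as pd
from scipy import stats

readings = pd.DataFrame({
    "person": ["Jane", "Jane", "Mary", "Mary", "Billy", "Billy", "Alex", "Alex"],
    "sex":    ["F",    "F",    "F",    "F",    "M",     "M",     "M",     "M"],
    "temp_c": [46.1,   46.5,   44.9,   45.3,   41.8,    42.2,    30.2,    29.8],
})

# Average temperature per person, plus the overall mean and variation
print(readings.groupby("person")["temp_c"].mean())
print("overall mean:", round(readings["temp_c"].mean(), 1),
      "std:", round(readings["temp_c"].std(), 1))

# Is there a difference between the sexes? (Welch's two-sample t-test)
women = readings.loc[readings["sex"] == "F", "temp_c"]
men = readings.loc[readings["sex"] == "M", "temp_c"]
t, p = stats.ttest_ind(women, men, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")

# A relationship with age could be checked the same way, e.g. with
# stats.pearsonr(ages, temps) once an age column is added.

The same few lines scale to hundreds of respondents, which is exactly why quantitative analysis lends itself to the counting and correlating described above.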

In this way, quantitative research often involves coming into your research with some level of understanding or expectation regarding the outcome, usually in the form of a hypothesis that you want to test. For example:

Hypothesis: Men prefer bathing in lower temperature water than women do.

This hypothesis can then be tested using statistical analysis. The data may suggest that the hypothesis is sound, or it may reveal that there are some nuances regarding people’s preferences. For example, men may enjoy a hotter bath on certain days.

So, as you can see, qualitative and quantitative research each have their own purpose and function. They are, quite simply, different tools for different jobs.


Qualitative vs Quantitative Research: Which one should you use?

And here I become annoyingly vague again. The answer: it depends. As I alluded to earlier, your choice of research approach depends on what you’re trying to achieve with your research. 

If you want to understand a situation with richness and depth, and you don’t have firm expectations regarding what you might find, you’ll likely adopt a qualitative research approach. In other words, if you’re starting on a clean slate and trying to build up a theory (which might later be tested), qualitative research probably makes sense for you.

On the other hand, if you need to test an already-theorised hypothesis, or want to measure and describe something numerically, a quantitative approach will probably be best. For example, you may want to quantitatively test a theory (or even just a hypothesis) that was developed using qualitative research.

Basically, this means that your research approach should be chosen based on your broader research aims, objectives and research questions. If your research is exploratory and you’re unsure what findings may emerge, qualitative research allows you to ask open-ended questions and lets people and subjects speak, in some ways, for themselves. Quantitative questions, on the other hand, don’t allow this: they’ll often be pre-categorised, or ask for a numeric response. Anything that requires measurement, using a scale, machine or… a thermometer… is going to need a quantitative method.

Let’s look at an example.

Say you want to ask people about their bath water temperature preferences. There are many ways you can do this, using a survey or a questionnaire – here are 3 potential options:

  • How do you feel about your spouse’s bath water temperature preference? (Qualitative. This open-ended question leaves a lot of space so that the respondent can rant in an adequate manner).
  • What is your preferred bath water temperature? (This one’s tricky because most people won’t know the number offhand, or won’t have a thermometer, but it is a quantitative question with a directly numerical answer).
  • Most people who have commented on your bath water temperature have said the following (choose most relevant): It’s too hot. It’s just right. It’s too cold. (Quantitative, because you can add up the number of people who responded in each way and compare them).

The answers provided can be used in a myriad of ways. Quantitative responses are easily summarised through counting or calculation, categorised and visualised, whereas qualitative responses need a lot of thought and must be re-packaged in a way that tries not to lose too much meaning.
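As a tiny illustration of how easily that third, closed question can be summarised, here is a hypothetical tally in Python (the response strings are placeholders, not real survey data):

# Count and percentage for each answer to the closed question above.
from collections import Counter

responses = ["It's too hot", "It's just right", "It's too hot",
             "It's too cold", "It's just right", "It's too hot"]

counts = Counter(responses)
for answer, n in counts.most_common():
    print(f"{answer}: {n} ({n / len(responses):.0%})")

The open-ended first question has no such shortcut: every answer has to be read, interpreted and coded before any pattern can be reported.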

Your research approach should be chosen based on your broader research aims, objectives and research questions.

Qualitative vs Quantitative Research: Data collection and analysis

The approach to collecting and analysing data differs quite a bit between qualitative and quantitative research.

A qualitative research approach often has a small sample size (i.e. a small number of people researched) since each respondent will provide you with pages and pages of information in the form of interview answers or observations. In our water perception analysis, it would be super tedious to watch the arguments of 50 couples unfold in front of us! But 6-10 would be manageable and would likely provide us with interesting insight into the great bathwater debate.

To sum it up, data collection in qualitative research involves relatively small sample sizes but rich and detailed data.

On the other side, quantitative research relies heavily on the ability to gather data from a large sample and use it to explain a far larger population (this is called “generalisability”). In our bathwater analysis, we would need data from hundreds of people for us to be able to make a universal statement (i.e. to generalise), and at least a few dozen to be able to identify a potential pattern. In terms of data collection, we’d probably use a more scalable tool such as an online survey to gather comparatively basic data.

So, compared to qualitative research, data collection for quantitative research involves large sample sizes but relatively basic data.

Both research approaches use analyses that allow you to explain, describe and compare the things that you are interested in. While qualitative research does this through an analysis of words, texts and explanations, quantitative research does this through reducing your data into numerical form or into graphs.

There are dozens of potential analyses that each approach can use. For example, qualitative analysis might look at the narration (the lamenting story of love lost through irreconcilable water toleration differences), or the content directly (the words of blame, heat and irritation used in an interview). Quantitative analysis may involve simple calculations for averages, or it might involve more sophisticated analysis that assesses the relationships between two or more variables (for example, personality type and likelihood to commit a hot water-induced crime). We discuss the many analysis options in other blog posts, so I won’t bore you with the details here.

Qualitative research often features small sample sizes, whereas quantitative research relies on large, representative samples.

Qualitative vs Quantitative Research: The pros & cons on both sides

Quantitative and qualitative research fundamentally ask different kinds of questions and often have different broader research intentions. As I said earlier, they are different tools for different jobs – so we can’t really pit them against each other. Regardless, they still each have their pros and cons.

Let’s start with qualitative “pros”

Qualitative research allows for richer, more insightful (and sometimes unexpected) results. This is often what’s needed when we want to dive deeper into a research question. When we want to find out what and how people are thinking and feeling, qualitative is the tool for the job. It’s also important when it comes to discovery and exploration, when you don’t quite know what you are looking for. Qualitative research adds meat to our understanding of the world and is what you’ll use when trying to develop theories.

Qualitative research can be used to explain previously observed phenomena, providing insights that are outside of the bounds of quantitative research, and explaining what is being or has been previously observed. For example, interviewing someone on their cold-bath-induced rage can help flesh out some of the finer (and often lost) details of a research area. We might, for example, learn that some respondents link their bath time experience to childhood memories where hot water was an out-of-reach luxury. This is something that would never get picked up using a quantitative approach.

There are also a bunch of practical pros to qualitative research. A small sample size means that the researcher can be more selective about who they are approaching. Linked to this is affordability. Unless you have to fork out for huge expenses to observe the hunting strategies of the Hadza in Tanzania, qualitative research often requires less sophisticated and less expensive equipment for data collection and analysis.


Qualitative research also has its “cons”:

A small sample size means that the observations made might not be broadly applicable. This makes it difficult to repeat a study and get similar results. For instance, what if the people you initially interviewed just happened to be those who are especially passionate about bathwater? What if one of your eight interviews was with someone so enraged by a previous experience of being run a cold bath that she dedicated an entire blog post to using this obscure and ridiculous example?

But sample size is only one caveat to this research. A researcher’s bias in analysing the data can have a profound effect on the interpretation of said data. In this way, the researcher themselves can limit their own research. For instance, what if they didn’t think to ask a very important or cornerstone question because of previously held prejudices against the person they are interviewing?

Adding to this, researcher inexperience is an additional limitation. Interviewing and observing are skills honed over time. If the qualitative researcher is not aware of their own biases and limitations, both in the data collection and analysis phases, this could make their research very difficult to replicate, and the theories or frameworks they use highly problematic.

Qualitative research takes a long time to collect and analyse data from a single source. This is often one of the reasons sample sizes are pretty small. That one-hour interview? You are probably going to need to listen to it half a dozen times, and read the transcript of it half a dozen more. Then take bits and pieces of the interview and reformulate and categorise them, along with the rest of the interviews.

Qualitative research can suffer from low generalisability and researcher bias, and can take a long time to execute well.

Now let’s turn to quantitative “pros”:

Even simple quantitative techniques can visually and descriptively support or reject assumptions or hypotheses. Want to know the percentage of women who are tired of cold water baths? Boom! Here is the percentage, and a pie chart. And the pie chart is a picture of a real pie in order to placate the hungry, angry mob of cold-water haters.

Quantitative research is respected as being objective and viable. This is useful for supporting or enforcing public opinion and national policy. And if the analytical route doesn’t work, the remainder of the pie can be thrown at politicians who try to enforce maximum bath water temperature standards. Clear, simple, and universally acknowledged. Adding to this, large sample sizes, calculations of significance and half-eaten pies don’t only tell you WHAT is happening in your data, but the likelihood that what you are seeing is real and repeatable in future research. This is an important cornerstone of the scientific method.

Quantitative research can be pretty fast. The method of data collection is faster on average: for instance, a quantitative survey is far quicker for the subject than a qualitative interview. The method of data analysis is also faster on average. In fact, if you are really fancy, you can code and automate your analyses as your data comes in! This means that you don’t necessarily have to worry about including a long analysis period in your research timeline.

Lastly – sometimes, not always, quantitative research may ensure a greater level of anonymity, which is an important ethical consideration. A survey may seem less personally invasive than an interview, for instance, and this could potentially also lead to greater honesty. Of course, this isn’t always the case. Without a sufficient sample size, respondents can still worry about anonymity – for example, a survey within a small department.

Quantitative research is typically considered to be more objective, quicker to execute and provides greater anonymity to respondents.

But there are also quantitative “cons”:

Quantitative research can be comparatively reductive – in other words, it can lead to an oversimplification of a situation. Because quantitative analysis often focuses on the averages and the general relationships between variables, it tends to ignore the outliers. Why is that one person having an ice bath once a week? With quantitative research, you might never know…

It requires large sample sizes to be used meaningfully. In order to claim that your data and results are meaningful regarding the population you are studying, you need to have a pretty chunky dataset. You need large numbers to achieve “statistical power” and “statistically significant” results – and often those large sample sizes are difficult to achieve, especially for budgetless or self-funded research such as a Master’s dissertation or thesis.

Quantitative techniques require a bit of practice and understanding (often more understanding than most people who use them have). And not just to do, but also to read and interpret what others have done, and spot the potential flaws in their research design (and your own). If you come from a statistics background, this won’t be a problem – but most students don’t have this luxury.

Finally, because of the assumption of objectivity (“it must be true because it’s numbers”), quantitative researchers are less likely to interrogate and be explicit about their own biases in their research. Sample selection, the kinds of questions asked, and the method of analysis are all incredibly important choices, but they tend not to be given as much attention by researchers, exactly because of the assumption of objectivity.

Quantitative research can be comparatively reductive - in other words, it can lead to an oversimplification of a situation.

Mixed methods: a happy medium?

Some of the richest research I’ve seen involved a mix of qualitative and quantitative research. Quantitative research allowed the researcher to paint a “bird’s-eye view” of the issue or topic, while qualitative research enabled a richer understanding. This is the essence of mixed-methods research – it tries to achieve the best of both worlds.

In practical terms, this can take place by having open-ended questions as part of your research survey. It can happen by adding a separate qualitative component (like several interviews) to your otherwise quantitative research (an initial survey, from which you could invite specific interviewees). Maybe it requires observations: some of which you expect to see, and can easily record, classify and quantify, and some of which are novel, and require deeper description.

A word of warning – just like with choosing a qualitative or quantitative research project, mixed methods should be chosen purposefully , where the research aims, objectives and research questions drive the method chosen. Don’t choose a mixed-methods approach just because you’re unsure of whether to use quantitative or qualitative research. Pulling off mixed methods research well is not an easy task, so approach with caution!

Recap: Qualitative vs Quantitative Research

So, just to recap what we have learned in this post about the great qual vs quant debate:

  • Qualitative research is ideal for research which is exploratory in nature (e.g. formulating a theory or hypothesis), whereas quantitative research lends itself to research which is more confirmatory (e.g. hypothesis testing)
  • Qualitative research uses data in the form of words, phrases, descriptions or ideas. It is time-consuming and therefore only has a small sample size.
  • Quantitative research uses data in the form of numbers and can be visualised in the form of graphs. It requires large sample sizes to be meaningful.
  • Your choice in methodology should have more to do with the kind of question you are asking than your fears or previously-held assumptions.
  • Mixed methods can be a happy medium, but should be used purposefully.
  • Bathwater temperature is a contentious and severely under-studied research topic.




What is the ideal Sample Size in Qualitative Research?

Presented by InterQ Research LLC

If we were to assemble a list of “most asked questions” that we receive from new clients, it’s this:

What is the ideal sample size in qualitative research? It’s a great question. A fantastic one. Because panel size does matter, though perhaps not as much as it does in quantitative research, where we’re aiming for a statistically meaningful number. Let’s explore this whole issue of panel size and what you should be looking for from participant panels when conducting qualitative research.

First off, look at quality versus quantity

Most likely, your company is looking for market research on a very specific audience type. B2B decision makers in human resources. Moms who live in the Midwest and have household incomes of $70k+. Teens who use Facebook more than 8 hours a week. Specificity is a great thing, and without fail, every client we work with has a good grasp on their audience type. In qualitative panels, therefore, our first objective is to ensure that we’re recruiting people who meet each and every criterion that we identify through quantitative research – and the criteria that our clients have pinpointed through their own research. Panel quality – having the right members in the panel – is so much more important than just pulling from a general population that falls within broad parameters. So first and foremost, we focus on recruiting the right respondents who match our audience specifications.

Study design in qualitative research

The type of qualitative study chosen is also one of the most important factors to consider when choosing sample size. In-depth interviews, focus groups, and ethnographic research are the most common methods used in qualitative market research, and the types of questions being studied are just as important a factor as the sample size chosen for these various methods. One of the most important principles to keep in mind – in all of these study designs – is the principle of saturation.

The objective of qualitative research (as compared to quantitative research) is to lessen discovery failure; in quantitative research, the objective is to reduce estimation error. Here’s where the principle of saturation comes in: with saturation, we say that the collection of new data isn’t giving the researcher any new additional insights into the issue being investigated. Qualitative research seeks to uncover diverse opinions from the sample, and one person’s opinion is enough to generate a code (part of the analysis framework). There is a point of diminishing returns with larger samples; more data does not necessarily lead to more information – it simply leads to the same information being repeated (saturation). The goal, therefore, is to have a sample size large enough that we’re able to uncover a range of opinions, but to cut the sample off at the number where we’re getting saturation and repetitive data.
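One practical way to operationalise this is to code each interview as it is completed and track how many genuinely new codes it contributes; once a couple of consecutive sessions add nothing, you have a reasonable signal that saturation has been reached. The sketch below illustrates that bookkeeping in Python; the codes, the two-empty-sessions threshold and the function name are illustrative assumptions, not a description of any particular firm’s procedure.

# Hypothetical sketch: spotting saturation by counting new codes per interview.
from typing import List, Set

def saturation_point(coded_interviews: List[Set[str]], empty_runs: int = 2) -> int:
    """Return the 1-based index of the last interview that added a new code
    before `empty_runs` consecutive interviews added nothing; if that never
    happens, return the total number of interviews."""
    seen: Set[str] = set()
    consecutive_empty = 0
    for i, codes in enumerate(coded_interviews, start=1):
        new_codes = codes - seen        # codes not seen in any earlier interview
        seen |= codes
        consecutive_empty = 0 if new_codes else consecutive_empty + 1
        if consecutive_empty >= empty_runs:
            return i - empty_runs       # last interview that still added something
    return len(coded_interviews)

# Each set holds the codes assigned to one interview transcript (made up here).
interviews = [
    {"price", "trust", "onboarding"},
    {"trust", "support", "speed"},
    {"price", "speed"},            # nothing new
    {"support", "onboarding"},     # nothing new -> saturation after interview 2
]
print(saturation_point(interviews))  # prints 2

In practice the judgement is never this mechanical – a single late interview can still surface something important – but keeping a simple running count of new codes makes the “cut off at saturation” decision explicit rather than a gut feeling.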

So … is there a magical number to aim for in qualitative research?

So now we’re back to our original question:

What is the ideal sample size in qualitative research?

We’ll answer it this time. Based on studies that have been done in academia on this very issue, 30 seems to be an ideal sample size for the most comprehensive view, but studies can have as few as 10 total participants and still yield extremely fruitful, and applicable, results. (This goes back to excellence in recruiting.)

Our general recommendation for in-depth interviews is a sample size of 30, if we’re building a study that includes similar segments within the population. A minimum size can be 10 – but again, this assumes population integrity in recruiting.



What is Qualitative in Qualitative Research

Open access | Published: 27 February 2019 | Volume 42, pages 139–160 (2019)


Patrik Aspers & Ugo Corte

What is qualitative research? If we look for a precise definition of qualitative research, and specifically for one that addresses its distinctive feature of being “qualitative,” the literature is meager. In this article we systematically search, identify and analyze a sample of 89 sources using or attempting to define the term “qualitative.” Then, drawing on ideas we find scattered across existing work, and based on Becker’s classic study of marijuana consumption, we formulate and illustrate a definition that tries to capture its core elements. We define qualitative research as an iterative process in which improved understanding to the scientific community is achieved by making new significant distinctions resulting from getting closer to the phenomenon studied. This formulation is developed as a tool to help improve research designs while stressing that a qualitative dimension is present in quantitative work as well. Additionally, it can facilitate teaching, communication between researchers, diminish the gap between qualitative and quantitative researchers, help to address critiques of qualitative methods, and be used as a standard of evaluation of qualitative research.


If we assume that there is something called qualitative research, what exactly is this qualitative feature? And how could we evaluate qualitative research as good or not? Is it fundamentally different from quantitative research? In practice, most active qualitative researchers working with empirical material intuitively know what is involved in doing qualitative research, yet perhaps surprisingly, a clear definition addressing its key feature is still missing.

To address the question of what is qualitative we turn to the accounts of “qualitative research” in textbooks and also in empirical work. In his classic, explorative, interview study of deviance Howard Becker ( 1963 ) asks ‘How does one become a marijuana user?’ In contrast to pre-dispositional and psychological-individualistic theories of deviant behavior, Becker’s inherently social explanation contends that becoming a user of this substance is the result of a three-phase sequential learning process. First, potential users need to learn how to smoke it properly to produce the “correct” effects. If not, they are likely to stop experimenting with it. Second, they need to discover the effects associated with it; in other words, to get “high,” individuals not only have to experience what the drug does, but also to become aware that those sensations are related to using it. Third, they require learning to savor the feelings related to its consumption – to develop an acquired taste. Becker, who played music himself, gets close to the phenomenon by observing, taking part, and by talking to people consuming the drug: “half of the fifty interviews were conducted with musicians, the other half covered a wide range of people, including laborers, machinists, and people in the professions” (Becker 1963 :56).

Another central aspect derived through the common-to-all-research interplay between induction and deduction (Becker 2017 ), is that during the course of his research Becker adds scientifically meaningful new distinctions in the form of three phases—distinctions, or findings if you will, that strongly affect the course of his research: its focus, the material that he collects, and which eventually impact his findings. Each phase typically unfolds through social interaction, and often with input from experienced users in “a sequence of social experiences during which the person acquires a conception of the meaning of the behavior, and perceptions and judgments of objects and situations, all of which make the activity possible and desirable” (Becker 1963 :235). In this study the increased understanding of smoking dope is a result of a combination of the meaning of the actors, and the conceptual distinctions that Becker introduces based on the views expressed by his respondents. Understanding is the result of research and is due to an iterative process in which data, concepts and evidence are connected with one another (Becker 2017 ).

Indeed, there are many definitions of qualitative research, but if we look for a definition that addresses its distinctive feature of being “qualitative,” the literature across the broad field of social science is meager. The main reason behind this article lies in the paradox, which, to put it bluntly, is that researchers act as if they know what it is, but they cannot formulate a coherent definition. Sociologists and others will of course continue to conduct good studies that show the relevance and value of qualitative research addressing scientific and practical problems in society. However, our paper is grounded in the idea that providing a clear definition will help us improve the work that we do. Among researchers who practice qualitative research there is clearly much knowledge. We suggest that a definition makes this knowledge more explicit. If the first rationale for writing this paper refers to the “internal” aim of improving qualitative research, the second refers to the increased “external” pressure that especially many qualitative researchers feel; pressure that comes both from society as well as from other scientific approaches. There is a strong core in qualitative research, and leading researchers tend to agree on what it is and how it is done. Our critique is not directed at the practice of qualitative research, but we do claim that the type of systematic work we do has not yet been done, and that it is useful to improve the field and its status in relation to quantitative research.

The literature on the “internal” aim of improving, or at least clarifying qualitative research is large, and we do not claim to be the first to notice the vagueness of the term “qualitative” (Strauss and Corbin 1998 ). Also, others have noted that there is no single definition of it (Long and Godfrey 2004 :182), that there are many different views on qualitative research (Denzin and Lincoln 2003 :11; Jovanović 2011 :3), and that more generally, we need to define its meaning (Best 2004 :54). Strauss and Corbin ( 1998 ), for example, as well as Nelson et al. (1992:2 cited in Denzin and Lincoln 2003 :11), and Flick ( 2007 :ix–x), have recognized that the term is problematic: “Actually, the term ‘qualitative research’ is confusing because it can mean different things to different people” (Strauss and Corbin 1998 :10–11). Hammersley has discussed the possibility of addressing the problem, but states that “the task of providing an account of the distinctive features of qualitative research is far from straightforward” ( 2013 :2). This confusion, as he has recently further argued (Hammersley 2018 ), is also salient in relation to ethnography where different philosophical and methodological approaches lead to a lack of agreement about what it means.

Others (e.g. Hammersley 2018 ; Fine and Hancock 2017 ) have also identified the threat to qualitative research that comes from external forces, seen from the point of view of “qualitative research.” This threat can be further divided into that which comes from inside academia, such as the critique voiced by “quantitative research,” and that which comes from outside of academia, including, for example, New Public Management. Hammersley ( 2018 ), zooming in on one type of qualitative research, ethnography, has argued that it is under threat. Similarly to Fine ( 2003 ), and before him Gans ( 1999 ), he writes that ethnography has acquired a range of meanings, and comes in many different versions, these often reflecting sharply divergent epistemological orientations. And already more than twenty years ago, while reviewing Denzin and Lincoln’s Handbook of Qualitative Methods , Fine argued:

While this increasing centrality [of qualitative research] might lead one to believe that consensual standards have developed, this belief would be misleading. As the methodology becomes more widely accepted, querulous challengers have raised fundamental questions that collectively have undercut the traditional models of how qualitative research is to be fashioned and presented (1995:417).

According to Hammersley, there are today “serious threats to the practice of ethnographic work, on almost any definition” ( 2018 :1). He lists five external threats: (1) that social research must be accountable and able to show its impact on society; (2) the current emphasis on “big data” and the emphasis on quantitative data and evidence; (3) the labor market pressure in academia that leaves less time for fieldwork (see also Fine and Hancock 2017 ); (4) problems of access to fields; and (5) the increased ethical scrutiny of projects, to which ethnography is particularly exposed. Hammersley discusses some more or less insufficient existing definitions of ethnography.

The current situation, as Hammersley and others note—and in relation not only to ethnography but also qualitative research in general, and as our empirical study shows—is not just unsatisfactory, it may even be harmful for the entire field of qualitative research, and does not help social science at large. We suggest that the lack of clarity of qualitative research is a real problem that must be addressed.

Towards a Definition of Qualitative Research

Seen in an historical light, what is today called qualitative, or sometimes ethnographic, interpretative research – or a number of other terms – has more or less always existed. At the time the founders of sociology – Simmel, Weber, Durkheim and, before them, Marx – were writing, and during the era of the Methodenstreit (“dispute about methods”) in which the German historical school emphasized scientific methods (cf. Swedberg 1990 ), we can at least speak of qualitative forerunners.

Perhaps the most extended discussion of what later became known as qualitative methods in a classic work is Bronisław Malinowski’s ( 1922 ) Argonauts of the Western Pacific , although even this study does not explicitly address the meaning of “qualitative.” In Weber’s ([1921–22] 1978) work we find a tension between scientific explanations that are based on observation and quantification and interpretative research (see also Lazarsfeld and Barton 1982 ).

If we look through major sociology journals like the American Sociological Review , American Journal of Sociology , or Social Forces we will not find the term qualitative sociology before the 1970s. And certainly before then much of what we consider qualitative classics in sociology, like Becker’s study ( 1963 ), had already been produced. Indeed, the Chicago School often combined qualitative and quantitative data within the same study (Fine 1995 ). Our point being that before a disciplinary self-awareness the term quantitative preceded qualitative, and the articulation of the former was a political move to claim scientific status (Denzin and Lincoln 2005 ). In the US, World War II seems to have sparked a critique of sociological work, including “qualitative work,” that did not follow the scientific canon (Rawls 2018 ), which was underpinned by a scientifically oriented and value-free philosophy of science. As a result the attempts and practice of integrating qualitative and quantitative sociology at Chicago lost ground to sociology that was more oriented to surveys and quantitative work at Columbia under Merton-Lazarsfeld. The quantitative tradition was also able to present textbooks (Lundberg 1951 ) that facilitated the use of this approach and its “methods.” The practices of the qualitative tradition, by and large, remained tacit or were part of the mentoring transferred from the renowned masters to their students.

This glimpse into history leads us back to the lack of a coherent account condensed in a definition of qualitative research. Many of the attempts to define the term do not meet the requirements of a proper definition: A definition should be clear, avoid tautology, demarcate its domain in relation to the environment, and ideally only use words in its definiens that themselves are not in need of definition (Hempel 1966 ). A definition can enhance precision and thus clarity by identifying the core of the phenomenon. Preferably, a definition should be short. The typical definition we have found, however, is an ostensive definition, which indicates what qualitative research is about without informing us about what it actually is:

Qualitative research is multimethod in focus, involving an interpretative, naturalistic approach to its subject matter. This means that qualitative researchers study things in their natural settings, attempting to make sense of, or interpret, phenomena in terms of the meanings people bring to them. Qualitative research involves the studied use and collection of a variety of empirical materials – case study, personal experience, introspective, life story, interview, observational, historical, interactional, and visual texts – that describe routine and problematic moments and meanings in individuals’ lives. (Denzin and Lincoln 2005 :2)

Flick claims that the label “qualitative research” is indeed used as an umbrella for a number of approaches ( 2007 :2–4; 2002 :6), and it is not difficult to identify research fitting this designation. Moreover, whatever it is, it has grown dramatically over the past five decades. In addition, courses have been developed, methods have flourished, arguments about its future have been advanced (for example, Denzin and Lincoln 1994) and criticized (for example, Snow and Morrill 1995 ), and dedicated journals and books have mushroomed. Most social scientists have a clear idea of research and how it differs from journalism, politics and other activities. But the question of what is qualitative in qualitative research is either eluded or eschewed.

We maintain that this lacuna hinders systematic knowledge production based on qualitative research. Paul Lazarsfeld noted the lack of “codification” as early as 1955 when he reviewed 100 qualitative studies in order to offer a codification of the practices (Lazarsfeld and Barton 1982 :239). Since then many texts on “qualitative research” and its methods have been published, including recent attempts (Goertz and Mahoney 2012 ) similar to Lazarsfeld’s. These studies have tried to extract what is qualitative by looking at the large number of empirical “qualitative” studies. Our novel strategy complements these endeavors by taking another approach and looking at the attempts to codify these practices in the form of a definition, as well as to a minor extent take Becker’s study as an exemplar of what qualitative researchers actually do, and what the characteristic of being ‘qualitative’ denotes and implies. We claim that qualitative researchers, if there is such a thing as “qualitative research,” should be able to codify their practices in a condensed, yet general way expressed in language.

Lingering problems of “generalizability” and “how many cases do I need” (Small 2009 ) are blocking advancement – in this line of work qualitative approaches are said to differ considerably from quantitative ones, while some of the former unsuccessfully mimic principles related to the latter (Small 2009 ). Additionally, quantitative researchers sometimes unfairly criticize the first based on their own quality criteria. Scholars like Goertz and Mahoney ( 2012 ) have successfully focused on the different norms and practices beyond what they argue are essentially two different cultures: those working with either qualitative or quantitative methods. Instead, similarly to Becker ( 2017 ) who has recently questioned the usefulness of the distinction between qualitative and quantitative research, we focus on similarities.

The current situation also impedes both students and researchers in focusing their studies and understanding each other’s work (Lazarsfeld and Barton 1982 :239). A third consequence is providing an opening for critiques by scholars operating within different traditions (Valsiner 2000 :101). A fourth issue is that the “implicit use of methods in qualitative research makes the field far less standardized than the quantitative paradigm” (Goertz and Mahoney 2012 :9). Relatedly, the National Science Foundation in the US organized two workshops in 2004 and 2005 to address the scientific foundations of qualitative research involving strategies to improve it and to develop standards of evaluation in qualitative research. However, a specific focus on its distinguishing feature of being “qualitative” while being implicitly acknowledged, was discussed only briefly (for example, Best 2004 ).

In 2014 a theme issue was published in this journal on “Methods, Materials, and Meanings: Designing Cultural Analysis,” discussing central issues in (cultural) qualitative research (Berezin 2014 ; Biernacki 2014 ; Glaeser 2014 ; Lamont and Swidler 2014 ; Spillman 2014). We agree with many of the arguments put forward, such as the risk of methodological tribalism, and that we should not waste energy on debating methods separated from research questions. Nonetheless, a clarification of the relation to what is called “quantitative research” is of utmost importance to avoid misunderstandings and misguided debates between “qualitative” and “quantitative” researchers. Our strategy means that researchers, “qualitative” or “quantitative” they may be, in their actual practice may combine qualitative work and quantitative work.

In this article we accomplish three tasks. First, we systematically survey the literature for meanings of qualitative research by looking at how researchers have defined it. Drawing upon existing knowledge we find that the different meanings and ideas of qualitative research are not yet coherently integrated into one satisfactory definition. Next, we advance our contribution by offering a definition of qualitative research and illustrate its meaning and use partially by expanding on the brief example introduced earlier related to Becker’s work ( 1963 ). We offer a systematic analysis of central themes of what researchers consider to be the core of “qualitative,” regardless of style of work. These themes – which we summarize in terms of four keywords: distinction, process, closeness, improved understanding – constitute part of our literature review, in which each one appears, sometimes with others, but never all in the same definition. They serve as the foundation of our contribution. Our categories are overlapping. Their use is primarily to organize the large amount of definitions we have identified and analyzed, and not necessarily to draw a clear distinction between them. Finally, we continue the elaboration discussed above on the advantages of a clear definition of qualitative research.

In a hermeneutic fashion we propose that there is something meaningful that deserves to be labelled “qualitative research” (Gadamer 1990 ). To approach the question “What is qualitative in qualitative research?” we have surveyed the literature. In conducting our survey we first traced the word’s etymology in dictionaries, encyclopedias, handbooks of the social sciences and of methods and textbooks, mainly in English, which is common to methodology courses. It should be noted that we have zoomed in on sociology and its literature. This discipline has been the site of the largest debate and development of methods that can be called “qualitative,” which suggests that this field should be examined in great detail.

In an ideal situation we should expect that one good definition, or at least some common ideas, would have emerged over the years. This common core of qualitative research should be so accepted that it would appear in at least some textbooks. Since this is not what we found, we decided to pursue an inductive approach to capture maximal variation in the field of qualitative research; we searched in a selection of handbooks, textbooks, book chapters, and books, to which we added the analysis of journal articles. Our sample comprises a total of 89 references.

In practice we focused on the discipline that has had a clear discussion of methods, namely sociology. We also conducted a broad search in the JSTOR database to identify scholarly sociology articles published between 1998 and 2017 in English with a focus on defining or explaining qualitative research. We specifically zoom in on this time frame because we would expect that this more mature period would have produced clear discussions on the meaning of qualitative research. To find these articles we combined a number of keywords to search the content and/or the title: qualitative (which was always included), definition, empirical, research, methodology, studies, fieldwork, interview and observation.

As a second phase of our research we searched within nine major sociological journals ( American Journal of Sociology , Sociological Theory , American Sociological Review , Contemporary Sociology , Sociological Forum , Sociological Theory , Qualitative Research , Qualitative Sociology and Qualitative Sociology Review ) for articles also published during the past 19 years (1998–2017) that had the term “qualitative” in the title and attempted to define qualitative research.

Lastly, we picked two additional journals, Qualitative Research and Qualitative Sociology , in which we could expect to find texts addressing the notion of “qualitative.” From Qualitative Research we chose Volume 14, Issue 6, December 2014, and from Qualitative Sociology we chose Volume 36, Issue 2, June 2017. Within each of these we selected the first article; then we picked the second article of three prior issues. Again we went back another three issues and investigated article number three. Finally we went back another three issues and perused article number four. These selection criteria were used to get a manageable sample for the analysis.

The coding process of the 89 references we gathered in our selected review began soon after the first round of material was gathered, and we reduced the complexity created by our maximum variation sampling (Snow and Anderson 1993 :22) to four different categories within which questions on the nature and properties of qualitative research were discussed. We call them: Qualitative and Quantitative Research, Qualitative Research, Fieldwork, and Grounded Theory. This – which may appear as an illogical grouping – merely reflects the “context” in which the matter of “qualitative” is discussed. If the selection process of the material – books and articles – was informed by pre-knowledge, we used an inductive strategy to code the material. When studying our material, we identified four central notions related to “qualitative” that appear in various combinations in the literature which indicate what is the core of qualitative research. We have labeled them: “distinctions”, “process,” “closeness,” and “improved understanding.” During the research process the categories and notions were improved, refined, changed, and reordered. The coding ended when a sense of saturation in the material arose. In the presentation below all quotations and references come from our empirical material of texts on qualitative research.

Analysis – What is Qualitative Research?

In this section we describe the four categories we identified in the coding, how they differently discuss qualitative research, as well as their overall content. Some salient quotations are selected to represent the type of text sorted under each of the four categories. What we present are examples from the literature.

Qualitative and Quantitative

This analytic category comprises quotations comparing qualitative and quantitative research, a distinction that is frequently used (Brown 2010 :231); in effect this is a conceptual pair that structures the discussion and that may be associated with opposing interests. While the general goal of quantitative and qualitative research is the same – to understand the world better – their methodologies and focus in certain respects differ substantially (Becker 1966 :55). Quantity refers to that property of something that can be determined by measurement. In a dictionary of Statistics and Methodology we find that “(a) When referring to *variables, ‘qualitative’ is another term for *categorical or *nominal. (b) When speaking of kinds of research, ‘qualitative’ refers to studies of subjects that are hard to quantify, such as art history. Qualitative research tends to be a residual category for almost any kind of non-quantitative research” (Stiles 1998:183). But it should be obvious that one could employ a quantitative approach when studying, for example, art history.

The same dictionary states that quantitative is “said of variables or research that can be handled numerically, usually (too sharply) contrasted with *qualitative variables and research” (Stiles 1998:184). From a qualitative perspective “quantitative research” is about numbers and counting, and from a quantitative perspective qualitative research is everything that is not about numbers. But this does not say much about what is “qualitative.” If we turn to encyclopedias we find that in the 1932 edition of the Encyclopedia of the Social Sciences there is no mention of “qualitative.” In the Encyclopedia from 1968 we can read:

Qualitative Analysis. For methods of obtaining, analyzing, and describing data, see [the various entries:] CONTENT ANALYSIS; COUNTED DATA; EVALUATION RESEARCH, FIELD WORK; GRAPHIC PRESENTATION; HISTORIOGRAPHY, especially the article on THE RHETORIC OF HISTORY; INTERVIEWING; OBSERVATION; PERSONALITY MEASUREMENT; PROJECTIVE METHODS; PSYCHOANALYSIS, article on EXPERIMENTAL METHODS; SURVEY ANALYSIS, TABULAR PRESENTATION; TYPOLOGIES. (Vol. 13:225)

Some, like Alford, divide researchers into methodologists or, in his words, “quantitative and qualitative specialists” (Alford 1998 :12). Qualitative research uses a variety of methods, such as intensive interviews or in-depth analysis of historical materials, and it is concerned with a comprehensive account of some event or unit (King et al. 1994 :4). Like quantitative research it can be utilized to study a variety of issues, but it tends to focus on meanings and motivations that underlie cultural symbols, personal experiences, phenomena and detailed understanding of processes in the social world. In short, qualitative research centers on understanding processes, experiences, and the meanings people assign to things (Kalof et al. 2008 :79).

Others simply say that qualitative methods are inherently unscientific (Jovanović 2011 :19). Hood, for instance, argues that words are intrinsically less precise than numbers, and that they are therefore more prone to subjective analysis, leading to biased results (Hood 2006 :219). Qualitative methodologies have raised concerns over the limitations of quantitative templates (Brady et al. 2004 :4). Scholars such as King et al. ( 1994 ), for instance, argue that non-statistical research can produce more reliable results if researchers pay attention to the rules of scientific inference commonly stated in quantitative research. Also, researchers such as Becker ( 1966 :59; 1970 :42–43) have asserted that, if conducted properly, qualitative research and in particular ethnographic field methods, can lead to more accurate results than quantitative studies, in particular, survey research and laboratory experiments.

Some researchers, such as Kalof, Dan, and Dietz ( 2008 :79) claim that the boundaries between the two approaches are becoming blurred, and Small ( 2009 ) argues that currently much qualitative research (especially in North America) tries unsuccessfully and unnecessarily to emulate quantitative standards. For others, qualitative research tends to be more humanistic and discursive (King et al. 1994 :4). Ragin ( 1994 ), and similarly also Becker, ( 1996 :53), Marchel and Owens ( 2007 :303) think that the main distinction between the two styles is overstated and does not rest on the simple dichotomy of “numbers versus words” (Ragin 1994 :xii). Some claim that quantitative data can be utilized to discover associations, but in order to unveil cause and effect a complex research design involving the use of qualitative approaches needs to be devised (Gilbert 2009 :35). Consequently, qualitative data are useful for understanding the nuances lying beyond those processes as they unfold (Gilbert 2009 :35). Others contend that qualitative research is particularly well suited both to identify causality and to uncover fine descriptive distinctions (Fine and Hallett 2014 ; Lichterman and Isaac Reed 2014 ; Katz 2015 ).

There are other ways to separate these two traditions, including normative statements about what qualitative research should be (that is, better or worse than quantitative approaches, concerned with scientific approaches to societal change or vice versa; Snow and Morrill 1995 ; Denzin and Lincoln 2005 ), or whether it should develop falsifiable statements (Best 2004 ).

We propose that quantitative research is largely concerned with pre-determined variables (Small 2008); the analysis concerns the relations between variables. These categories are primarily not questioned in the study, only their frequency or degree, or the correlations between them (cf. Franzosi 2016). If a researcher studies wage differences between women and men, he or she works with given categories: x number of men are compared with y number of women, with a certain wage attributed to each person. The idea is not to move beyond the given categories of wage, men and women; they are the starting point as well as the end point, and undergo no “qualitative change.” Qualitative research, in contrast, investigates relations between categories that are themselves subject to change in the research process. Returning to Becker’s study (1963), we see that he questioned pre-dispositional theories of deviant behavior, which worked with pre-determined variables such as an individual’s combination of personal qualities or emotional problems. His take, in contrast, was to understand marihuana consumption by developing “variables” as part of the investigation. He thereby presented new variables, or as we would say today, theoretical concepts, that are grounded in the empirical material.

Qualitative Research

This category contains quotations that refer to descriptions of qualitative research without making comparisons with quantitative research. Researchers such as Denzin and Lincoln, who have written a series of influential handbooks on qualitative methods (1994; Denzin and Lincoln 2003; 2005), citing Nelson et al. (1992:4), argue that because qualitative research is “interdisciplinary, transdisciplinary, and sometimes counterdisciplinary” it is difficult to derive one single definition of it (Jovanović 2011:3). According to them, in fact, “the field” is “many things at the same time,” involving contradictions and tensions over its focus, methods, and how to derive interpretations and findings (2003:11). Similarly, others, such as Flick (2007:ix–x), contend that agreeing on an accepted definition has become increasingly problematic, and that qualitative research may have matured into different identities. However, Best holds that “the proliferation of many sorts of activities under the label of qualitative sociology threatens to confuse our discussions” (2004:54). Atkinson’s position is more definite: “the current state of qualitative research and research methods is confused” (2005:3–4).

Qualitative research is about interpretation (Blumer 1969 ; Strauss and Corbin 1998 ; Denzin and Lincoln 2003 ), or Verstehen [understanding] (Frankfort-Nachmias and Nachmias 1996 ). It is “multi-method,” involving the collection and use of a variety of empirical materials (Denzin and Lincoln 1998; Silverman 2013 ) and approaches (Silverman 2005 ; Flick 2007 ). It focuses not only on the objective nature of behavior but also on its subjective meanings: individuals’ own accounts of their attitudes, motivations, behavior (McIntyre 2005 :127; Creswell 2009 ), events and situations (Bryman 1989) – what people say and do in specific places and institutions (Goodwin and Horowitz 2002 :35–36) in social and temporal contexts (Morrill and Fine 1997). For this reason, following Weber ([1921-22] 1978), it can be described as an interpretative science (McIntyre 2005 :127). But could quantitative research also be concerned with these questions? Also, as pointed out below, does all qualitative research focus on subjective meaning, as some scholars suggest?

Others also distinguish qualitative research by claiming that it collects data using a naturalistic approach (Denzin and Lincoln 2005:2; Creswell 2009), focusing on the meaning actors ascribe to their actions. But again, does all qualitative research need to be collected in situ? And does qualitative research have to be inherently concerned with meaning? Flick (2007), referring to Denzin and Lincoln (2005), mentions conversation analysis as an example of qualitative research that is not concerned with the meanings people bring to a situation, but rather with the formal organization of talk. Still others, such as Ragin (1994:85), note that qualitative research is often (especially early on in the project, we would add) less structured than other kinds of social research – a characteristic connected to its flexibility, and one that can lead to better as well as worse results. But is this not a feature of this type of research, rather than a defining description of its essence? Wouldn’t this comment also apply, albeit to varying degrees, to quantitative research?

In addition, Strauss (2003), along with others such as Alvesson and Kärreman (2011:10–76), argues that qualitative researchers struggle to capture and represent complex phenomena partially because they tend to collect a large amount of data. While his analysis is correct on some points – “It is necessary to do detailed, intensive, microscopic examination of the data in order to bring out the amazing complexity of what lies in, behind, and beyond those data” (Strauss 2003:10) – much of it concerns the supposed focus of qualitative research and its challenges, rather than exactly what it is about. But even in this instance it would be a weak case to argue that these are strictly the defining features of qualitative research. Some researchers seem to focus on the approach or the methods used, or even on the way material is analyzed. Several researchers stress the naturalistic assumption of investigating the world, suggesting that meaning and interpretation appear to be a core matter of qualitative research.

We can also see that in this category there is no consensus about specific qualitative methods nor about qualitative data. Many emphasize interpretation, but quantitative research, too, involves interpretation; the results of a regression analysis, for example, certainly have to be interpreted, and the form of meta-analysis that factor analysis provides indeed requires interpretation. However, there is no interpretation of quantitative raw data, i.e., numbers in tables. One common thread is that qualitative researchers have to get to grips with their data in order to understand what is being studied in great detail, irrespective of the type of empirical material that is being analyzed. This observation is connected to the fact that qualitative researchers routinely make several adjustments of focus and research design as their studies progress, in many cases until the very end of the project (Kalof et al. 2008). If you, like Becker, do not start out with a detailed theory, adjustments such as the emergence and refinement of research questions will occur during the research process. We have thus found a number of useful reflections about qualitative research scattered across different sources, but none of them effectively describe the defining characteristics of this approach.

Although qualitative research does not appear to be defined in terms of a specific method, it is certainly common that fieldwork – i.e., research in which the researcher spends considerable time in the field that is studied and uses the knowledge gained as data – is seen as emblematic of or even identical to qualitative research. But because we understand that fieldwork tends to focus primarily on the collection and analysis of qualitative data, we expected to find within it discussions on the meaning of “qualitative.” But, again, this was not the case.

Instead, we found material on the history of this approach (for example, Frankfort-Nachmias and Nachmias 1996 ; Atkinson et al. 2001), including how it has changed; for example, by adopting a more self-reflexive practice (Heyl 2001), as well as the different nomenclature that has been adopted, such as fieldwork, ethnography, qualitative research, naturalistic research, participant observation and so on (for example, Lofland et al. 2006 ; Gans 1999 ).

We retrieved definitions of ethnography, such as “the study of people acting in the natural courses of their daily lives,” involving a “resocialization of the researcher” (Emerson 1988:1) through intense immersion in others’ social worlds (see also examples in Hammersley 2018). This may be accomplished by direct observation and also participation (Neuman 2007:276), although others, such as Denzin (1970:185), have long recognized other types of observation, including non-participant (“fly on the wall”). In this category we have also isolated claims and opposing views, arguing that this type of research is distinguished primarily by where it is conducted (natural settings) (Hughes 1971:496), by how it is carried out (a variety of methods are applied) or, for some most importantly, by involving an active, empathetic immersion in those being studied (Emerson 1988:2). We also retrieved descriptions of the goals it attends to in relation to how it is taught (understanding the subjective meanings of the people studied, primarily developing theory, or contributing to social change) (see for example Corte and Irwin 2017; Frankfort-Nachmias and Nachmias 1996:281; Trier-Bieniek 2012:639) by collecting the richest possible data (Lofland et al. 2006) to derive “thick descriptions” (Geertz 1973), and/or to aim at theoretical statements of general scope and applicability (for example, Emerson 1988; Fine 2003). We have identified guidelines on how to evaluate it (for example Becker 1996; Lamont 2004) and have retrieved instructions on how it should be conducted (for example, Lofland et al. 2006). For instance, analysis should take place while the data gathering unfolds (Emerson 1988; Hammersley and Atkinson 2007; Lofland et al. 2006), observations should be of long duration (Becker 1970:54; Goffman 1989), and data should be of high quantity (Becker 1970:52–53); we also found other, questionable distinctions between fieldwork and other methods:

Field studies differ from other methods of research in that the researcher performs the task of selecting topics, decides what questions to ask, and forges interest in the course of the research itself. This is in sharp contrast to many ‘theory-driven’ and ‘hypothesis-testing’ methods. (Lofland and Lofland 1995:5)

But could not, for example, a strictly interview-based study be carried out with the same amount of flexibility, for instance through sequential interviewing (for example, Small 2009)? Once again, are quantitative approaches really as inflexible as some qualitative researchers think? Moreover, this category stresses the role of the actors’ meaning, which requires knowledge of and close interaction with people, their practices and their lifeworld.

It is clear that field studies – which are seen by some as the “gold standard” of qualitative research – are nonetheless only one way of doing qualitative research. There are other methods, but it is not clear why some are more qualitative than others, or why they are better or worse. Fieldwork is characterized by interaction with the field (the material) and understanding of the phenomenon that is being studied. In Becker’s case, he had general experience of fields in which marihuana was used, and on that basis he conducted interviews with actual users in several fields.

Grounded Theory

Another major category we identified in our sample is Grounded Theory. We found descriptions of it most clearly in Glaser and Strauss’ ([1967] 2010 ) original articulation, Strauss and Corbin ( 1998 ) and Charmaz ( 2006 ), as well as many other accounts of what it is for: generating and testing theory (Strauss 2003 :xi). We identified explanations of how this task can be accomplished – such as through two main procedures: constant comparison and theoretical sampling (Emerson 1998:96), and how using it has helped researchers to “think differently” (for example, Strauss and Corbin 1998 :1). We also read descriptions of its main traits, what it entails and fosters – for instance, an exceptional flexibility, an inductive approach (Strauss and Corbin 1998 :31–33; 1990; Esterberg 2002 :7), an ability to step back and critically analyze situations, recognize tendencies towards bias, think abstractly and be open to criticism, enhance sensitivity towards the words and actions of respondents, and develop a sense of absorption and devotion to the research process (Strauss and Corbin 1998 :5–6). Accordingly, we identified discussions of the value of triangulating different methods (both using and not using grounded theory), including quantitative ones, and theories to achieve theoretical development (most comprehensively in Denzin 1970 ; Strauss and Corbin 1998 ; Timmermans and Tavory 2012 ). We have also located arguments about how its practice helps to systematize data collection, analysis and presentation of results (Glaser and Strauss [1967] 2010 :16).

Grounded theory offers a systematic approach which requires researchers to get close to the field; closeness is a requirement of identifying questions and developing new concepts or making further distinctions with regard to old concepts. In contrast to other qualitative approaches, grounded theory emphasizes the detailed coding process, and the numerous fine-tuned distinctions that the researcher makes during the process. Within this category, too, we could not find a satisfying discussion of the meaning of qualitative research.

Defining Qualitative Research

In sum, our analysis shows that some notions reappear in the discussion of qualitative research, such as understanding, interpretation, “getting close” and making distinctions. These notions capture aspects of what we think is “qualitative.” However, a comprehensive definition that is useful and that can further develop the field is lacking, and not even a clear picture of its essential elements appears. In other words, no definition emerges from our data, and in our research process we have moved back and forth between our empirical data and the attempt to present a definition. Our concrete strategy, as stated above, is to relate qualitative and quantitative research, or more specifically, qualitative and quantitative work. We use an ideal-typical notion of quantitative research which relies on taken-for-granted, numbered variables. This means that the data consist of variables on different scales – ordinal, but frequently ratio and absolute scales – and that the relation of the numbers to the variables, i.e., the justification for assigning numbers to an object or phenomenon, is not questioned, though its validity may be. In this section we return to the notion of quality and try to clarify it while presenting our contribution.

Broadly, research refers to the activity performed by people trained to obtain knowledge through systematic procedures. Notions such as “objectivity” and “reflexivity,” “systematic,” “theory,” “evidence” and “openness” are here taken for granted in any type of research. Next, building on our empirical analysis we explain the four notions that we have identified as central to qualitative work: distinctions, process, closeness, and improved understanding. In discussing them, ultimately in relation to one another, we make their meaning even more precise. Our idea, in short, is that only when these ideas that we present separately for analytic purposes are brought together can we speak of qualitative research.

Distinctions

We believe that the possibility of making new distinctions is one of the defining characteristics of qualitative research. It clearly sets it apart from quantitative analysis, which works with taken-for-granted variables, although, as mentioned, meta-analytic techniques such as factor analysis may result in new variables. “Quality” refers essentially to distinctions, as already pointed out by Aristotle. He discusses the term “qualitative,” commenting: “By a quality I mean that in virtue of which things are said to be qualified somehow” (Aristotle 1984:14). Quality is about what something is or has, which means that the distinction from its environment is crucial. We see qualitative research as a process in which significant new distinctions are made to the scholarly community; to make distinctions is a key aspect of obtaining new knowledge; a point, as we will see, that also has implications for “quantitative research.” The notion of being “significant” is paramount. New distinctions by themselves are not enough; just adding concepts only increases complexity without furthering our knowledge. The significance of new distinctions is judged against the communal knowledge of the research community. To enable this discussion and these judgements, central elements of rational discussion are required (cf. Habermas [1981] 1987; Davidson [1988] 2001) to identify what is new and relevant scientific knowledge. Relatedly, Ragin alludes to the idea of new and useful knowledge at a more concrete level: “Qualitative methods are appropriate for in-depth examination of cases because they aid the identification of key features of cases. Most qualitative methods enhance data” (1994:79). When Becker (1963) studied deviant behavior and investigated how people became marihuana smokers, he made distinctions between the ways in which people learned how to smoke. This is a classic example of how the strategy of “getting close” to the material – for example the text, people or pictures that are subject to analysis – may enable researchers to obtain deeper insight and new knowledge by making distinctions, in this instance on the initial notion of learning how to smoke. Others have stressed the making of distinctions in relation to coding or theorizing. Emerson et al. (1995), for example, hold that “qualitative coding is a way of opening up avenues of inquiry,” meaning that the researcher identifies and develops concepts and analytic insights through close examination of and reflection on data (Emerson et al. 1995:151). Goodwin and Horowitz highlight making distinctions in relation to theory-building, writing: “Close engagement with their cases typically requires qualitative researchers to adapt existing theories or to make new conceptual distinctions or theoretical arguments to accommodate new data” (2002:37). In ideal-typical quantitative research only existing and, so to speak, given variables would be used. If this is the case, no new distinctions are made. But would not many “quantitative” researchers also make new distinctions?

Process

Process does not merely suggest that research takes time. It mainly implies that new qualitative knowledge results from a process that involves several phases, and above all iteration. Qualitative research is about oscillation between theory and evidence, between analysis and generating material, between first- and second-order constructs (Schütz 1962:59), between getting in contact with something, finding sources, becoming deeply familiar with a topic, and then distilling and communicating some of its essential features. The main point is that the categories that the researcher uses, and perhaps takes for granted at the beginning of the research process, usually undergo qualitative changes resulting from what is found. Becker describes how he tested hypotheses and let the jargon of the users develop into theoretical concepts. This happens over time while the study is being conducted, exemplifying what we mean by process.

In the research process, a pilot study may be used to get a first glimpse of, for example, the field, how to approach it, and what methods can be used, after which the method and theory are chosen or refined before the main study begins. Thus, the empirical material is often central from the start of the project and frequently leads to adjustments by the researcher. Likewise, during the main study categories are not fixed; the empirical material is seen in light of the theory used, but it is also given the opportunity to kick back, thereby resisting attempts to apply theoretical straitjackets (Becker 1970:43). In this process, coding and analysis are interwoven, and thus are often important steps for getting closer to the phenomenon and deciding what to focus on next. Becker began his research by interviewing musicians close to him, then asking them to refer him to other musicians, and later on doubling his original sample of about 25 to include individuals in other professions (Becker 1973:46). Additionally, he made use of some participant observation, documents, and interviews with opiate users made available to him by colleagues. As his inductive theory of deviance evolved, Becker expanded his sample in order to fine-tune it, and to test the accuracy and generality of his hypotheses. In addition, he introduced a negative case and discussed the null hypothesis (1963:44). His phasic career model is thus based on a research design that embraces processual work. Typically, process means moving between “theory” and “material,” but also dealing with negative cases, and Becker (1998) describes how discovering these negative cases impacted his research design and ultimately its findings.

Obviously, all research is process-oriented to some degree. The point is that the ideal-typical quantitative process does not imply change of the data, nor iteration between data, evidence, hypotheses, empirical work, and theory. The data – quantified variables – are in most cases fixed. Merging of data, which of course can be done in a quantitative research process, does not mean new data. New hypotheses are frequently tested, but the “raw data” is often “the same.” Obviously, over time new datasets are made available and put into use.

Closeness

Another characteristic that is emphasized in our sample is that qualitative researchers – and in particular ethnographers – can, or as Goffman (1989) put it, ought to, get closer to the phenomenon being studied and to their data than quantitative researchers (for example, Silverman 2009:85). Put differently, essentially because of their methods qualitative researchers get into direct close contact with those being investigated and/or the material, such as texts, being analyzed. Becker started out his interview study, as we noted, by talking to those he knew in the field of music to get closer to the phenomenon he was studying. By conducting interviews he got even closer. Had he done more observations, he would undoubtedly have got even closer to the field.

Additionally, the ethnographic design enables researchers to follow the field over time, and the research they do is almost by definition longitudinal, though the time spent in the field obviously differs between studies. The general characteristic of closeness over time maximizes the chances of unexpected events, new data (related, for example, to archival research as additional sources, and, for ethnography, to situations not necessarily previously thought of as instrumental – what Mannay and Morgan (2015) term the “waiting field”), serendipity (Merton and Barber 2004; Åkerström 2013), and possibly reactivity, as well as the opportunity to observe disrupted patterns that translate into exemplars of negative cases. Two classic examples of this are Becker’s finding of what medical students call “crocks” (Becker et al. 1961:317), and Geertz’s (1973) study of “deep play” in Balinese society.

By getting and staying so close to their data – be it pictures, text or humans interacting (Becker was himself a musician) – for a long time, as the research progressively focuses, qualitative researchers are prompted to continually test their hunches, presuppositions and hypotheses. They test them against a reality that often (but certainly not always), practically as well as metaphorically, talks back, whether by validating them or by disqualifying their premises – correctly as well as incorrectly (Fine 2003; Becker 1970). This testing nonetheless often leads to new directions for the research. Becker, for example, says that he was initially reading psychological theories, but when facing the data he developed a theory that looks at, one might say, everything but psychological dispositions to explain the use of marihuana. Researchers involved with ethnographic methods in particular have a unique opportunity to dig up and then test (in a circular, continuous and temporal way) new research questions and findings as the research progresses, and thereby to derive previously unimagined and uncharted distinctions by getting closer to the phenomenon under study.

Let us stress that getting close is by no means restricted to ethnography. The notion of the hermeneutic circle, and hermeneutics as a general way of understanding, implies that we must get close to the details in order to get the big picture. This also means that qualitative researchers can make use of details of pictures as evidence (cf. Harper 2002). Thus, researchers may get closer both when generating the material and when analyzing it.

Quantitative research, we maintain, in the ideal-typical representation cannot get closer to the data. The data is essentially numbers in tables making up the variables (Franzosi 2016 :138). The data may originally have been “qualitative,” but once reduced to numbers there can only be a type of “hermeneutics” about what the number may stand for. The numbers themselves, however, are non-ambiguous. Thus, in quantitative research, interpretation, if done, is not about the data itself—the numbers—but what the numbers stand for. It follows that the interpretation is essentially done in a more “speculative” mode without direct empirical evidence (cf. Becker 2017 ).

Improved Understanding

While distinctions, process and getting closer refer to the qualitative work of the researcher, improved understanding refers to the conditions and outcome of this work. Understanding cuts deeper than explanation, which to some may mean a causally verified correlation between variables. The notion of explanation presupposes the notion of understanding, since explanation does not include an idea of how knowledge is gained (Manicas 2006:15). Understanding, we argue, is the core concept of what we call the outcome of the process, when research has made use of all the other elements that were integrated in the research. Understanding, then, has a special status in qualitative research since it refers both to the conditions of knowledge and to the outcome of the process. Understanding can to some extent be seen as the condition of explanation and occurs in a process of interpretation, which naturally refers to meaning (Gadamer 1990). It is fundamentally connected to knowing, and to knowing how to do things (Heidegger [1927] 2001). Conceptually, the term hermeneutics is used to account for this process. Heidegger (1988) ties hermeneutics to human being, as not separable from the understanding of being. Here we use it in a broader sense, more connected to method in general (cf. Seiffert 1992). The abovementioned aspects – for example, “objectivity” and “reflexivity” – of the approach are conditions of scientific understanding. Understanding is the result of a circular process and means that the parts are understood in light of the whole, and vice versa. Understanding presupposes pre-understanding, or in other words, some knowledge of the phenomenon studied. This pre-understanding, even in the form of prejudices, is questioned in the qualitative research process, which we see as iterative, and it changes, gradually or suddenly, due to the iteration of data, evidence and concepts. Qualitative research thus generates understanding in the iterative process, as the researcher gets closer to the data, e.g., by going back and forth between field and analysis in a process that generates new data that changes the evidence and, ultimately, the findings. Questioning – asking questions and putting what one assumes, prejudices and presumptions, into question – is central to understanding something (Heidegger [1927] 2001; Gadamer 1990:368–384). We propose that this iterative process, in which understanding occurs, is characteristic of qualitative research.

Improved understanding means that we obtain scientific knowledge of something that we as a scholarly community did not know before, or that we get to know something better. It means that we understand more about how parts are related to one another, and to other things we already understand (see also Fine and Hallett 2014 ). Understanding is an important condition for qualitative research. It is not enough to identify correlations, make distinctions, and work in a process in which one gets close to the field or phenomena. Understanding is accomplished when the elements are integrated in an iterative process.

It is, moreover, possible to understand many things, and researchers, just like children, may come to understand new things every day as they engage with the world. This subjective condition of understanding – namely, that a person gains a better understanding of something – is easily met. To qualify as “scientific,” the understanding must be general and useful to many; it must be public. But even this generally accessible understanding is not enough in order to speak of “scientific understanding.” Though we as a collective can increase our understanding of everything, in virtually all potential directions, as a result also of qualitative work, we refrain from this “objective” way of understanding, which has no means of discriminating between what we gain in understanding. Scientific understanding means that it is deemed relevant from the scientific horizon (compare Schütz 1962:35–38, 46, 63), and that it rests on the pre-understanding that the scientists have and must have in order to understand. In other words, the understanding gained must be deemed useful by other researchers, so that they can build on it. We thus see understanding from a pragmatic, rather than a subjective or objective, perspective. Improved understanding is related to the question(s) at hand. Understanding, in order to represent an improvement, must be an improvement in relation to the existing body of knowledge of the scientific community (James [1907] 1955). Scientific understanding is, by definition, collective, as expressed in Weber’s famous note on objectivity, namely that scientific work aims at truths “which … can claim, even for a Chinese, the validity appropriate to an empirical analysis” ([1904] 1949:59). By qualifying “improved understanding” we argue that it is a general defining characteristic of qualitative research. Becker’s (1963) study, and other research on deviant behavior, increased our understanding of the social learning processes through which individuals take up a behavior. It also added new knowledge about the labeling of deviant behavior as a social process. Few studies, of course, make as large a contribution as Becker’s, but they are nonetheless qualitative research.

Understanding in the phenomenological sense, which is a hallmark of qualitative research, we argue, requires meaning, and this meaning is derived from the context, and above all from the data being analyzed. Ideal-typical quantitative research operates with given variables with different numbers. This type of material is not enough to establish meaning at the level that truly justifies understanding. In other words, many social science explanations offer ideas about correlations or even causal relations, but this does not mean that the meaning at the level of the data analyzed is understood. This leads us to say that there are indeed many explanations that meet the criteria of understanding, for example the explanation of how one becomes a marihuana smoker presented by Becker. However, we may also understand a phenomenon without explaining it, and we may have potential explanations, or rather correlations, that are not really understood.

We may speak more generally of quantitative research and its data to clarify what we see as an important distinction. The “raw data” that quantitative research – as an ideal-typical activity – refers to is not available for further analysis; the numbers, once created, are not to be questioned (Franzosi 2016:138). If the researcher is to do “more” or “change” something, this will be done by conjectures based on theoretical knowledge or on the researcher’s lifeworld. Both qualitative and quantitative research are based on the lifeworld, and all researchers use prejudices and pre-understanding in the research process. This idea is present in the works of Heidegger (2001) and Heisenberg (cited in Franzosi 2010:619). Qualitative research, as we argued, involves the interaction and questioning of concepts (theory), data, and evidence.

Ragin (2004:22) points out that “a good definition of qualitative research should be inclusive and should emphasize its key strengths and features, not what it lacks (for example, the use of sophisticated quantitative techniques).” We define qualitative research as an iterative process in which improved understanding for the scientific community is achieved by making new significant distinctions resulting from getting closer to the phenomenon studied. Qualitative research, as defined here, is consequently a combination of two criteria: (i) how to do things – namely, generating and analyzing empirical material in an iterative process in which one gets closer by making distinctions – and (ii) the outcome – improved understanding novel to the scholarly community. Is our definition applicable to our own study? In this study we have closely read the empirical material that we generated, and the novel distinction of the notion “qualitative research” is the outcome of an iterative process in which both deduction and induction were involved and in which we identified the categories that we analyzed. We thus claim to meet the first criterion, “how to do things.” The second criterion can be judged by us only in a partial way, namely whether the “outcome” – in concrete form, the definition – improves the understanding of others in the scientific community.

We have defined qualitative research, or qualitative scientific work, in relation to quantitative scientific work. Given this definition, qualitative research is about questioning the pre-given (taken-for-granted) variables, but it is also about making new distinctions of any type of phenomenon, for example by coining new concepts, including the identification of new variables. This process, as we have discussed, is carried out in relation to empirical material, previous research, and thus in relation to theory. Theory and previous research cannot be escaped or bracketed. According to hermeneutic principles all scientific work is grounded in the lifeworld, and as social scientists we can thus never fully bracket our pre-understanding.

We have proposed that quantitative research, as an ideal type, is concerned with pre-determined variables (Small 2008). Variables are epistemically fixed, but can vary in terms of dimensions, such as frequency or number. Age is an example; as a variable it can take on different numbers. In relation to quantitative research, qualitative research does not reduce its material to numbers and variables. If this is done, the process comes to a halt, the researcher becomes more distanced from her data, and it is no longer possible to make new distinctions that increase our understanding. We have discussed above the components of our definition in relation to quantitative research. Our conclusion is that in the research that is called quantitative there are frequent and necessary qualitative elements.

Further, comparative empirical research on researchers primarily working with “quantitative” approaches and those working with “qualitative” approaches would, we propose, perhaps show that there are many similarities in the practices of these two approaches. This is not to deny dissimilarities, or the different epistemic and ontic presuppositions that may be more or less strongly associated with the two strands (see Goertz and Mahoney 2012). Our point is nonetheless that prejudices and preconceptions about researchers are unproductive, that, as other researchers have argued, differences may be exaggerated (e.g., Becker 1996:53, 2017; Marchel and Owens 2007:303; Ragin 1994), and that a qualitative dimension is present in both kinds of work.

Several things follow from our findings. The most important result concerns the relation to quantitative research. In our analysis we have separated qualitative research from quantitative research. The point is not to label individual researchers, methods, projects, or works as either “quantitative” or “qualitative.” By analyzing, i.e., taking apart, the notions of quantitative and qualitative, we hope to have shown the elements of qualitative research. Our definition captures these elements, and how they, when combined in practice, generate understanding. As many of the quotations we have used suggest, one conclusion of our study is that qualitative approaches are not inherently connected with a specific method. Put differently, none of the methods that are frequently labelled “qualitative,” such as interviews or participant observation, are inherently “qualitative.” What matters, given our definition, is whether one works qualitatively or quantitatively in the research process, until the results are produced. Consequently, our analysis also suggests that those researchers working with what in the literature and in jargon is often called “quantitative research” are almost bound to make use of what we have identified as qualitative elements in any research project. Our findings also suggest that many “quantitative” researchers, at least to some extent, are engaged in qualitative work, such as when research questions are developed, variables are constructed and combined, and hypotheses are formulated. Furthermore, a research project may hover between “qualitative” and “quantitative,” or start out as “qualitative” and later move into a “quantitative” phase (a distinct strategy that is not the same as “mixed methods” or simply combining induction and deduction). More generally speaking, the categories of “qualitative” and “quantitative” unfortunately often cover up practices, and this may lead to “camps” of researchers opposing one another. For example, regardless of whether the researcher is primarily oriented to “quantitative” or “qualitative” research, the role of theory is neglected (cf. Swedberg 2017). Our results open the way for an interaction characterized not by differences, but by different emphases and by similarities.

Let us take two examples to briefly indicate how qualitative elements can fruitfully be combined with quantitative ones. Franzosi (2010) has discussed the relations between quantitative and qualitative approaches, and more specifically the relation between words and numbers. He analyzes texts and argues that scientific meaning cannot be reduced to numbers. Put differently, the meaning of the numbers is to be understood by what is taken for granted, and by what is part of the lifeworld (Schütz 1962). Franzosi shows how one can go about using qualitative and quantitative methods and data to address scientific questions, analyzing violence in Italy at the time when fascism was rising (1919–1922). Aspers (2006) studied the meaning of fashion photographers. He uses an empirical phenomenological approach and establishes meaning at the level of actors. In a second step this meaning, and the different ideal-typical photographers constructed as a result of participant observation and interviews, are tested using quantitative data from a database: in the first phase to verify the different ideal types, and in the second phase to use these types to establish new knowledge about them. In both of these cases – and more examples can be found – the authors move from qualitative data and try to retain the established meaning when using the quantitative data.

A second main result of our study is that a definition – and we have provided one – offers a way for researchers to clarify, and even evaluate, what is done. Hence, our definition can guide researchers and students, informing them how to think about the concrete research problems they face and showing what it means to get closer in a process in which new distinctions are made. The definition can also be used to evaluate results, given that it provides a standard of evaluation (cf. Hammersley 2007): to see whether new distinctions are made and whether this improves our understanding of what is researched, in addition to evaluating how the research was conducted. By making explicit what qualitative research is, it becomes easier to communicate findings, and it becomes much harder to fly under the radar with substandard research, since there are standards of evaluation that make it easier to separate “good” from “not so good” qualitative research.

To conclude, our analysis, which ends with a definition of qualitative research, can thus address both the “internal” issue of what qualitative research is and the “external” critiques that make it harder to do qualitative research – critiques to which both pressure from quantitative methods and general changes in society contribute.

Åkerström, Malin. 2013. Curiosity and serendipity in qualitative research. Qualitative Sociology Review 9 (2): 10–18.


Alford, Robert R. 1998. The craft of inquiry. Theories, methods, evidence . Oxford: Oxford University Press.

Alvesson, Mats, and Dan Kärreman. 2011. Qualitative research and theory development. Mystery as method . London: SAGE Publications.


Aspers, Patrik. 2006. Markets in fashion. A phenomenological approach. London: Routledge.

Atkinson, Paul. 2005. Qualitative research. Unity and diversity. Forum: Qualitative Social Research 6 (3): 1–15.

Becker, Howard S. 1963. Outsiders. Studies in the sociology of deviance . New York: The Free Press.

Becker, Howard S. 1966. Whose side are we on? Social Problems 14 (3): 239–247.


Becker, Howard S. 1970. Sociological work. Method and substance . New Brunswick: Transaction Books.

Becker, Howard S. 1996. The epistemology of qualitative research. In Ethnography and human development. Context and meaning in social inquiry , ed. Jessor Richard, Colby Anne, and Richard A. Shweder, 53–71. Chicago: University of Chicago Press.

Becker, Howard S. 1998. Tricks of the trade. How to think about your research while you're doing it . Chicago: University of Chicago Press.

Becker, Howard S. 2017. Evidence. Chicago: University of Chicago Press.

Becker, Howard S., Blanche Geer, Everett Hughes, and Anselm Strauss. 1961. Boys in white. Student culture in medical school. New Brunswick: Transaction Publishers.

Berezin, Mabel. 2014. How do we know what we mean? Epistemological dilemmas in cultural sociology. Qualitative Sociology 37 (2): 141–151.

Best, Joel. 2004. Defining qualitative research. In Workshop on Scientific Foundations of Qualitative Research, ed. Charles Ragin, Joane Nagel, and Patricia White, 53–54. http://www.nsf.gov/pubs/2004/nsf04219/nsf04219.pdf .

Biernacki, Richard. 2014. Humanist interpretation versus coding text samples. Qualitative Sociology 37 (2): 173–188.

Blumer, Herbert. 1969. Symbolic interactionism: Perspective and method . Berkeley: University of California Press.

Brady, Henry, David Collier, and Jason Seawright. 2004. Refocusing the discussion of methodology. In Rethinking social inquiry. Diverse tools, shared standards , ed. Brady Henry and Collier David, 3–22. Lanham: Rowman and Littlefield.

Brown, Allison P. 2010. Qualitative method and compromise in applied social research. Qualitative Research 10 (2): 229–248.

Charmaz, Kathy. 2006. Constructing grounded theory . London: Sage.

Corte, Ugo, and Katherine Irwin. 2017. “The Form and Flow of Teaching Ethnographic Knowledge: Hands-on Approaches for Learning Epistemology” Teaching Sociology 45(3): 209-219.

Creswell, John W. 2009. Research design. Qualitative, quantitative, and mixed method approaches . 3rd ed. Thousand Oaks: SAGE Publications.

Davidson, Donald. [1988] 2001. The myth of the subjective. In Subjective, intersubjective, objective, ed. Donald Davidson, 39–52. Oxford: Oxford University Press.

Denzin, Norman K. 1970. The research act: A theoretical introduction to sociological methods. Chicago: Aldine Publishing Company.

Denzin, Norman K., and Yvonna S. Lincoln. 2003. Introduction. The discipline and practice of qualitative research. In Collecting and interpreting qualitative materials , ed. Norman K. Denzin and Yvonna S. Lincoln, 1–45. Thousand Oaks: SAGE Publications.

Denzin, Norman K., and Yvonna S. Lincoln. 2005. Introduction. The discipline and practice of qualitative research. In The Sage handbook of qualitative research , ed. Norman K. Denzin and Yvonna S. Lincoln, 1–32. Thousand Oaks: SAGE Publications.

Emerson, Robert M., ed. 1988. Contemporary field research. A collection of readings . Prospect Heights: Waveland Press.

Emerson, Robert M., Rachel I. Fretz, and Linda L. Shaw. 1995. Writing ethnographic fieldnotes . Chicago: University of Chicago Press.

Esterberg, Kristin G. 2002. Qualitative methods in social research . Boston: McGraw-Hill.

Fine, Gary Alan. 1995. Review of “handbook of qualitative research.” Contemporary Sociology 24 (3): 416–418.

Fine, Gary Alan. 2003. “ Toward a Peopled Ethnography: Developing Theory from Group Life.” Ethnography . 4(1):41-60.

Fine, Gary Alan, and Black Hawk Hancock. 2017. The new ethnographer at work. Qualitative Research 17 (2): 260–268.

Fine, Gary Alan, and Timothy Hallett. 2014. Stranger and stranger: Creating theory through ethnographic distance and authority. Journal of Organizational Ethnography 3 (2): 188–203.

Flick, Uwe. 2002. Qualitative research. State of the art. Social Science Information 41 (1): 5–24.

Flick, Uwe. 2007. Designing qualitative research . London: SAGE Publications.

Frankfort-Nachmias, Chava, and David Nachmias. 1996. Research methods in the social sciences . 5th ed. London: Edward Arnold.

Franzosi, Roberto. 2010. Sociology, narrative, and the quality versus quantity debate (Goethe versus Newton): Can computer-assisted story grammars help us understand the rise of Italian fascism (1919- 1922)? Theory and Society 39 (6): 593–629.

Franzosi, Roberto. 2016. From method and measurement to narrative and number. International journal of social research methodology 19 (1): 137–141.

Gadamer, Hans-Georg. 1990. Wahrheit und Methode, Grundzüge einer philosophischen Hermeneutik . Band 1, Hermeneutik. Tübingen: J.C.B. Mohr.

Gans, Herbert. 1999. Participant Observation in an Age of “Ethnography”. Journal of Contemporary Ethnography 28 (5): 540–548.

Geertz, Clifford. 1973. The interpretation of cultures . New York: Basic Books.

Gilbert, Nigel. 2009. Researching social life . 3rd ed. London: SAGE Publications.

Glaeser, Andreas. 2014. Hermeneutic institutionalism: Towards a new synthesis. Qualitative Sociology 37: 207–241.

Glaser, Barney G., and Anselm L. Strauss. [1967] 2010. The discovery of grounded theory. Strategies for qualitative research. Hawthorne: Aldine.

Goertz, Gary, and James Mahoney. 2012. A tale of two cultures: Qualitative and quantitative research in the social sciences . Princeton: Princeton University Press.

Goffman, Erving. 1989. On fieldwork. Journal of Contemporary Ethnography 18 (2): 123–132.

Goodwin, Jeff, and Ruth Horowitz. 2002. Introduction. The methodological strengths and dilemmas of qualitative sociology. Qualitative Sociology 25 (1): 33–47.

Habermas, Jürgen. [1981] 1987. The theory of communicative action . Oxford: Polity Press.

Hammersley, Martyn. 2007. The issue of quality in qualitative research. International Journal of Research & Method in Education 30 (3): 287–305.

Hammersley, Martyn. 2013. What is qualitative research? Bloomsbury Publishing.

Hammersley, Martyn. 2018. What is ethnography? Can it survive? Should it? Ethnography and Education 13 (1): 1–17.

Hammersley, Martyn, and Paul Atkinson. 2007. Ethnography. Principles in practice . London: Tavistock Publications.

Heidegger, Martin. [1927] 2001. Sein und Zeit . Tübingen: Max Niemeyer Verlag.

Heidegger, Martin. [1923] 1988. Ontologie. Hermeneutik der Faktizität. Gesamtausgabe, II. Abteilung: Vorlesungen 1919–1944, Band 63. Frankfurt am Main: Vittorio Klostermann.

Hempel, Carl G. 1966. Philosophy of the natural sciences . Upper Saddle River: Prentice Hall.

Hood, Jane C. 2006. Teaching against the text. The case of qualitative methods. Teaching Sociology 34 (3): 207–223.

James, William. [1907] 1955. Pragmatism. New York: Meridian Books.

Jovanović, Gordana. 2011. Toward a social history of qualitative research. History of the Human Sciences 24 (2): 1–27.

Kalof, Linda, Amy Dan, and Thomas Dietz. 2008. Essentials of social research . London: Open University Press.

Katz, Jack. 2015. Situational evidence: Strategies for causal reasoning from observational field notes. Sociological Methods & Research 44 (1): 108–144.

King, Gary, Robert O. Keohane, and Sidney Verba. 1994. Designing social inquiry. Scientific inference in qualitative research. Princeton: Princeton University Press.


Lamont, Michèle. 2004. Evaluating qualitative research: Some empirical findings and an agenda. In Report from workshop on interdisciplinary standards for systematic qualitative research, ed. M. Lamont and P. White, 91–95. Washington, DC: National Science Foundation.

Lamont, Michèle, and Ann Swidler. 2014. Methodological pluralism and the possibilities and limits of interviewing. Qualitative Sociology 37 (2): 153–171.

Lazarsfeld, Paul, and Alan Barton. 1982. Some functions of qualitative analysis in social research. In The varied sociology of Paul Lazarsfeld , ed. Patricia Kendall, 239–285. New York: Columbia University Press.

Lichterman, Paul, and Isaac Reed. 2014. Theory and contrastive explanation in ethnography. Sociological Methods & Research. Prepublished 27 October 2014. https://doi.org/10.1177/0049124114554458 .

Lofland, John, and Lyn Lofland. 1995. Analyzing social settings. A guide to qualitative observation and analysis . 3rd ed. Belmont: Wadsworth.

Lofland, John, David A. Snow, Leon Anderson, and Lyn H. Lofland. 2006. Analyzing social settings. A guide to qualitative observation and analysis . 4th ed. Belmont: Wadsworth/Thomson Learning.

Long, Andrew F., and Mary Godfrey. 2004. An evaluation tool to assess the quality of qualitative research studies. International Journal of Social Research Methodology 7 (2): 181–196.

Lundberg, George. 1951. Social research: A study in methods of gathering data . New York: Longmans, Green and Co..

Malinowski, Bronislaw. 1922. Argonauts of the Western Pacific: An account of native Enterprise and adventure in the archipelagoes of Melanesian New Guinea . London: Routledge.

Manicas, Peter. 2006. A realist philosophy of science: Explanation and understanding . Cambridge: Cambridge University Press.

Mannay, Dawn, and Melanie Morgan. 2015. Doing ethnography or applying a qualitative technique? Reflections from the ‘waiting field’. Qualitative Research 15 (2): 166–182.

Marchel, Carol, and Stephanie Owens. 2007. Qualitative research in psychology. Could William James get a job? History of Psychology 10 (4): 301–324.

McIntyre, Lisa J. 2005. Need to know. Social science research methods. Boston: McGraw-Hill.

Merton, Robert K., and Elinor Barber. 2004. The travels and adventures of serendipity. A study in sociological semantics and the sociology of science. Princeton: Princeton University Press.

Neuman, Lawrence W. 2007. Basics of social research. Qualitative and quantitative approaches . 2nd ed. Boston: Pearson Education.

Ragin, Charles C. 1994. Constructing social research. The unity and diversity of method . Thousand Oaks: Pine Forge Press.

Ragin, Charles C. 2004. Introduction to session 1: Defining qualitative research. In Workshop on Scientific Foundations of Qualitative Research , 22, ed. Charles C. Ragin, Joane Nagel, Patricia White. http://www.nsf.gov/pubs/2004/nsf04219/nsf04219.pdf

Rawls, Anne. 2018. The wartime narrative in US sociology, 1940–7: Stigmatizing qualitative sociology in the name of ‘science.’ European Journal of Social Theory (Online first).

Schütz, Alfred. 1962. Collected papers I: The problem of social reality . The Hague: Nijhoff.

Seiffert, Helmut. 1992. Einführung in die Hermeneutik . Tübingen: Franke.

Silverman, David. 2005. Doing qualitative research. A practical handbook . 2nd ed. London: SAGE Publications.

Silverman, David. 2009. A very short, fairly interesting and reasonably cheap book about qualitative research . London: SAGE Publications.

Silverman, David. 2013. What counts as qualitative research? Some cautionary comments. Qualitative Sociology Review 9 (2): 48–55.

Small, Mario L. 2009. “How many cases do I need?” on science and the logic of case selection in field-based research. Ethnography 10 (1): 5–38.

Small, Mario L. 2008. Lost in translation: How not to make qualitative research more scientific. In Workshop on interdisciplinary standards for systematic qualitative research, ed. Michèle Lamont and Patricia White, 165–171. Washington, DC: National Science Foundation.

Snow, David A., and Leon Anderson. 1993. Down on their luck: A study of homeless street people . Berkeley: University of California Press.

Snow, David A., and Calvin Morrill. 1995. New ethnographies: Review symposium: A revolutionary handbook or a handbook for revolution? Journal of Contemporary Ethnography 24 (3): 341–349.

Strauss, Anselm L. 2003. Qualitative analysis for social scientists. 14th ed. Cambridge: Cambridge University Press.

Strauss, Anselm L., and Juliette M. Corbin. 1998. Basics of qualitative research. Techniques and procedures for developing grounded theory . 2nd ed. Thousand Oaks: Sage Publications.

Swedberg, Richard. 2017. Theorizing in sociological research: A new perspective, a new departure? Annual Review of Sociology 43: 189–206.

Swedberg, Richard. 1990. The new 'Battle of Methods'. Challenge January–February 3 (1): 33–38.

Timmermans, Stefan, and Iddo Tavory. 2012. Theory construction in qualitative research: From grounded theory to abductive analysis. Sociological Theory 30 (3): 167–186.

Trier-Bieniek, Adrienne. 2012. Framing the telephone interview as a participant-centred tool for qualitative research. A methodological discussion. Qualitative Research 12 (6): 630–644.

Valsiner, Jaan. 2000. Data as representations. Contextualizing qualitative and quantitative research strategies. Social Science Information 39 (1): 99–113.

Weber, Max. [1904] 1949. ‘Objectivity’ in social science and social policy. In The methodology of the social sciences, ed. Edward A. Shils and Henry A. Finch, 49–112. New York: The Free Press.


Acknowledgements

Financial support for this research was provided by the European Research Council, CEV (263699). The authors are grateful to Susann Krieglsteiner for assistance in collecting the data. The paper has benefitted from the many useful comments by the three reviewers and the editor, comments by members of the Uppsala Laboratory of Economic Sociology, as well as by Jukka Gronow, Sebastian Kohl, Marcin Serafin, Richard Swedberg, Anders Vassenden and Turid Rødne.

Author information

Authors and affiliations

Department of Sociology, Uppsala University, Uppsala, Sweden

Patrik Aspers

Seminar for Sociology, Universität St. Gallen, St. Gallen, Switzerland

Department of Media and Social Sciences, University of Stavanger, Stavanger, Norway


Corresponding author

Correspondence to Patrik Aspers .


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article

Aspers, P., Corte, U. What is Qualitative in Qualitative Research. Qual Sociol 42, 139–160 (2019). https://doi.org/10.1007/s11133-019-9413-7


Published: 27 February 2019

Issue Date: 01 June 2019

DOI: https://doi.org/10.1007/s11133-019-9413-7

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

  • Qualitative research
  • Epistemology
  • Philosophy of science
  • Phenomenology
  • Find a journal
  • Publish with us
  • Track your research

U.S. flag

An official website of the United States government

The .gov means it’s official. Federal government websites often end in .gov or .mil. Before sharing sensitive information, make sure you’re on a federal government site.

The site is secure. The https:// ensures that you are connecting to the official website and that any information you provide is encrypted and transmitted securely.

  • Publications
  • Account settings

Preview improvements coming to the PMC website in October 2024. Learn More or Try it out now .

  • Advanced Search
  • Journal List
  • An Bras Dermatol
  • v.89(4); Jul-Aug 2014

Sample size: how many participants do I need in my research? *

Jeovany Martínez-Mesa

1 Latin American Cooperative Oncology Group - Porto Alegre (RS), Brazil.

David Alejandro González-Chica

2 Universidade Federal de Santa Catarina (UFSC) - Florianópolis (SC), Brazil.

João Luiz Bastos

Renan Rangel Bonamigo

3 Universidade Federal de Ciências da Saúde de Porto Alegre (UFCSPA) - Porto Alegre (RS), Brazil.

Rodrigo Pereira Duquia

The importance of estimating sample size when planning a study is rarely understood by researchers. This paper aims to highlight the centrality of sample size estimations in health research, and presents examples that help in understanding the basic concepts involved in their calculation. The scenarios covered rely more on epidemiological reasoning than on mathematical formulae. Proper calculation of the number of participants in a study diminishes the likelihood of errors, which are often associated with adverse consequences in economic, ethical and health terms.

INTRODUCTION

Investigations in the health field are oriented by research problems or questions, which should be clearly defined in the study project. Sample size calculation is an essential item to be included in the project to reduce the probability of error, respect ethical standards, define the logistics of the study and, last but not least, improve its success rates, when evaluated by funding agencies.

Let us imagine that a group of investigators decides to study the frequency of sunscreen use and how the use of this product is distributed in the "population". In order to carry out this task, the authors define two research questions, each of which involves a distinct sample size calculation: 1) What is the proportion of people that use sunscreen in the population? and 2) Are there differences in the use of sunscreen between men and women, between white individuals and those of another skin color group, between the wealthiest and the poorest, or between people with more and fewer years of schooling? Before doing the calculations, it is necessary to review a few fundamental concepts and identify the parameters required to perform them.

WHAT DO WE MEAN, WHEN WE TALK ABOUT POPULATIONS?

First of all, we must define what a population is. A population is the group of individuals restricted to a geographical region (neighborhood, city, state, country, continent etc.) or to certain institutions (hospitals, schools, health centers etc.); that is, a set of individuals that have at least one characteristic in common. The target population corresponds to the portion of this population about which one intends to draw conclusions, that is, the part of the population whose characteristics are of interest to the investigator. Finally, the study population is the group that will actually take part in the study, which will be evaluated and will allow conclusions to be drawn about the target population, as long as it is representative of the latter. Figure 1 shows how these concepts are interrelated.

Figure 1. Graphic representation of the concepts of population, target population and study population.

We will now separately consider the required parameters for sample size calculation in studies that aim at estimating the frequency of events (prevalence of health outcomes or behaviors, for example), to test associations between risk/protective factors and dichotomous health conditions (yes/no), as well as with health outcomes measured in numerical scales. 1 The formulas used for these calculations may be obtained from different sources - we recommend using the free online software OpenEpi ( www.openepi.com ). 2

WHICH PARAMETERS DOES SAMPLE SIZE CALCULATION DEPEND UPON FOR A STUDY THAT AIMS AT ESTIMATING THE FREQUENCY OF HEALTH OUTCOMES, BEHAVIORS OR CONDITIONS?

When approaching the first research question defined at the beginning of this article (What is the proportion of people that use sunscreen?), the investigators need to conduct a prevalence study. In order to do this, some parameters must be defined to calculate the sample size, as demonstrated in chart 1 .

Chart 1. Description of the different parameters to be considered when calculating the sample size for a study aiming at estimating the frequency of health outcomes, behaviors or conditions:

  • Population size – the total population from which the sample will be drawn and about which the researchers will draw conclusions (target population). Information on population size may be obtained from secondary data (hospitals, health centers, census surveys, schools etc.). The smaller the target population (for example, fewer than 100 individuals), the larger the sample will proportionally be.
  • Expected prevalence of the outcome or event of interest – the study outcome must be a percentage, that is, a number that varies from 0% to 100%. The expected prevalence should be obtained from the literature or from a pilot study; when neither is available, the value that maximizes the sample size (50%, for a fixed sample error) is used.
  • Sample error of the estimate – the value we are willing to accept as error in the estimate obtained by the study. The smaller the sample error, the larger the sample size and the greater the precision; in health studies, values between two and five percentage points are usually recommended.
  • Significance (confidence) level – the probability that the expected prevalence will fall within the established error margin. The higher the confidence level (greater expected precision), the larger the sample size; this parameter is usually fixed at 95%.
  • Design effect – needed when participants are chosen by cluster selection, that is, instead of individuals being selected directly (simple, systematic or stratified sampling), groups (census tracts, neighborhoods, households, days of the week etc.) are randomly selected first and individuals are then selected within them. Respondents within a group tend to be more alike than the general population, which reduces precision and must be compensated by increasing the sample size. The value of the design effect may be obtained from the literature; when unavailable, a value between 1.5 and 2.0 may be adopted, and the investigators should evaluate the actual design effect after the study is completed and report it in their publications. The greater the homogeneity within each cluster, the greater the design effect and the larger the required sample size. In studies that do not use cluster selection (simple, systematic or stratified sampling), the design effect is 1.0.
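To make these parameters concrete, here is a minimal sketch in Python (not part of the original article) of a standard finite-population formula for estimating a prevalence; it should give values close to, though not necessarily identical to, what a dedicated calculator such as OpenEpi reports.

```python
import math
from statistics import NormalDist

def prevalence_sample_size(population, prevalence, error, confidence=0.95, deff=1.0):
    """Minimum sample size to estimate a prevalence with a given absolute
    sample error (e.g. 0.03 for 3 percentage points), using a
    finite-population formula and an optional design effect."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # 1.96 for 95% confidence
    numerator = deff * population * prevalence * (1 - prevalence)
    denominator = (error ** 2 / z ** 2) * (population - 1) + prevalence * (1 - prevalence)
    return math.ceil(numerator / denominator)

# Expected prevalence of 50%, 3-percentage-point sample error, 95% confidence,
# no cluster sampling, city of 40,000 inhabitants:
print(prevalence_sample_size(40_000, 0.50, 0.03))  # -> 1040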

Chart 2 presents some sample size simulations, according to the outcome prevalence, sample error and the type of target population investigated. The same basic question was used in this table (prevalence of sunscreen use), but considering three different situations (at work, while doing sports or at the beach), as in the study by Duquia et al. conducted in the city of Pelotas, state of Rio Grande do Sul, in 2005. 3

Chart 2. Sample size calculation to estimate the frequency (prevalence) of sunscreen use in the population, considering different scenarios but keeping the significance level (95%) and the design effect (1.0) constant. The scenarios simulated ranged from the users of a single health center investigated in one day (population = 100), through all users in the area covered by a health center (population = 1,000) and all users from the areas covered by all health centers in a city (population = 10,000), to the entire city population (population = 40,000), for sample errors of different sizes (in percentage points, p.p.); the individual table values are not reproduced here.

The calculations show that, holding the sample error and the significance level constant, the required sample size grows as the expected prevalence approaches 50% and then progressively diminishes beyond it, because the variance term p × (1 − p) is largest at 50% and symmetric around it; the sample size for an expected prevalence of 10% is therefore the same as that for an expected prevalence of 90%.

The investigator should also define beforehand the precision level to be accepted for the investigated event (sample error) and the confidence level of this result (usually 95%). Chart 2 demonstrates that, holding the expected prevalence constant, the higher the precision (smaller sample error) and the higher the confidence level (in this case, 95% was considered for all calculations), the larger also will be the required sample size.

Chart 2 also demonstrates that there is a direct relationship between the target population size and the number of individuals to be included in the sample. Nevertheless, when the target population size is sufficiently large, that is, surpasses an arbitrary value (for example, one million individuals), the resulting sample size tends to stabilize. The smaller the target population, the larger the sample will be; in some cases, the sample may even correspond to the total number of individuals from the target population - in these cases, it may be more convenient to study the entire target population, carrying out a census survey, rather than a study based on a sample of the population.
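Both points can be checked quickly with the prevalence_sample_size sketch shown after Chart 1 (a hypothetical helper, not from the article); the figures below assume a 95% confidence level and a 3-percentage-point sample error and are approximate.

```python
# Complementary prevalences need the same sample size, since p * (1 - p)
# is identical for p = 0.10 and p = 0.90:
print(prevalence_sample_size(1_000_000, 0.10, 0.03))  # -> 385
print(prevalence_sample_size(1_000_000, 0.90, 0.03))  # -> 385

# The required sample size stabilizes once the target population is large:
for population in (1_000, 10_000, 1_000_000, 100_000_000):
    print(population, prevalence_sample_size(population, 0.50, 0.03))
# -> roughly 517, 965, 1066 and 1068 respectively
```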

SAMPLE CALCULATION TO TEST THE ASSOCIATION BETWEEN TWO VARIABLES: HYPOTHESES AND TYPES OF ERROR

When the study objective is to investigate whether there are differences in sunscreen use according to sociodemographic characteristics (such as, for example, between men and women), the existence of association between explanatory variables (exposure or independent variables, in this case sociodemographic variables) and a dependent or outcome variable (use of sunscreen) is what is under consideration.

In these cases, we need first to understand what the hypotheses are, as well as the types of error that may result from their acceptance or refutation. A hypothesis is a "supposition arrived at from observation or reflection, that leads to refutable predictions". 4 In other words, it is a statement that may be questioned or tested and that may be falsified in scientific studies.

In scientific studies, there are two types of hypothesis: the null hypothesis (H0), the original supposition that we assume to be true for a given situation, and the alternative hypothesis (HA), an additional explanation for the same situation, which we believe may replace the original supposition. In the health field, H0 is frequently defined as the equality or absence of difference in the outcome of interest between the studied groups (for example, sunscreen use is equal in men and women). On the other hand, HA assumes the existence of a difference between groups. HA is called two-tailed when the difference between the groups may occur in either direction (men using more sunscreen than women or vice-versa). However, if the investigator expects to find that a specific group uses more sunscreen than the other, he will be testing a one-tailed HA.

In the sample investigated by Duquia et al., the frequency of sunscreen use at the beach was greater in men (32.7%) than in women (26.2%). 3 Although this is what was observed in the sample, that is, men do wear more sunscreen than women, the investigators must decide whether they refute or accept H0 in the target population (H0 contends that there is no difference in sunscreen use according to sex). Given that the entire target population is hardly ever investigated to confirm or refute the difference observed in the sample, the authors have to be aware that, whatever their decision (accepting or refuting H0), their conclusion may be wrong, as can be seen in figure 2.

Figure 2. Types of possible results when performing a hypothesis test.

If the investigators conclude that sunscreen use differs between men and women in the target population as well as in the sample (rejecting H0), they may be making a type I or Alpha error: the probability of rejecting H0 based on sample results when, in the target population, H0 is true (the difference between men and women regarding sunscreen use found in the sample is not observed in the target population). If the authors conclude that there are no differences between the groups (accepting H0), they may be making a type II or Beta error: the probability of accepting H0 when, in the target population, H0 is false (that is, HA is true) or, in other words, the probability of stating that the frequency of sunscreen use is equal between the sexes when it differs between the same groups in the target population.

In order to accept or refute H0, the investigators need to define beforehand the maximum probability of type I and type II errors that they are willing to accept in their results. In general, the type I error is fixed at a maximum value of 5% (0.05, or a confidence level of 95%), since the consequences of this type of error are considered more harmful. For example, stating that an exposure/intervention affects a health condition when this is not the case in the target population may bring about behaviors or actions (therapeutic changes, implementation of intervention programs etc.) with adverse ethical, economic and health consequences. In the study conducted by Duquia et al., when the authors contend that the use of sunscreen was different according to sex, the p value presented (<0.001) indicates that the probability of not observing such a difference in the target population is less than 0.1% (confidence level >99.9%). 3

Although the type II or Beta error is less harmful, it should also be avoided, since if a study contends that a given exposure/intervention does not affect the outcome, when this effect actually exists in the target population, the consequence may be that a new medication with better therapeutic effects is not administered or that some aspects related to the etiology of the damage are not considered. This is the reason why the value of the type II error is usually fixed at a maximum value of 20% (or 0.20). In publications, this value tends to be mentioned as the power of the study, which is the ability of the test to detect a difference, when in fact it exists in the target population (usually fixed at 80%, as a result of the 1-Beta calculation).

SAMPLE CALCULATION FOR STUDIES THAT AIM AT TESTING THE ASSOCIATION BETWEEN A RISK/PROTECTIVE FACTOR AND AN OUTCOME, EVALUATED DICHOTOMOUSLY

In cases where the exposure variables are dichotomous (intervention/control, man/woman, rich/poor etc.) and so is the outcome (negative/positive outcome, using sunscreen or not), the parameters required to calculate the sample size are those described in chart 3. Following the example above, it would be interesting to know whether sex, skin color, schooling level and income are associated with the use of sunscreen at work, while doing sports and at the beach. Thus, when the four exposure variables are crossed with the three outcomes, there are 12 different questions to be answered and, consequently, an equal number of sample size calculations to be performed. Using the information in the article by Duquia et al. 3 on the prevalence of the exposures and outcomes, sample size calculations were simulated for each of these situations (Chart 4).

Chart 3. Parameters to be considered when calculating the sample size for studies that test the association between a risk/protective factor and a dichotomous outcome:

  • Type I or Alpha error – the probability of rejecting H0, based on the sample, when H0 is in fact true in the target population. It is expressed by the p value and is usually fixed at a maximum of 5% (p<0.05). For sample size calculation, the confidence level (usually 95%, calculated as 1-Alpha) may be used. The smaller the Alpha error (the greater the confidence level), the larger the sample size.
  • Statistical power (1-Beta) – the ability of the test to detect a difference in the sample when it exists in the target population, calculated as 1-Beta. The greater the power, the larger the required sample size; a value between 80% and 90% is usually used.
  • Relationship between non-exposed and exposed groups in the sample – the ratio of non-exposed to exposed individuals. For observational studies, this is usually obtained from the scientific literature; in intervention studies, a 1:1 ratio is frequently adopted, indicating that half of the individuals will receive the intervention and the other half will be the control or comparison group, although some intervention studies use more controls than intervention participants. The further this ratio is from one, the larger the required sample size.
  • Prevalence of the outcome in the non-exposed group – the proportion of individuals with the disease (outcome) among those not exposed to the risk factor (or in the control group), usually obtained from the literature. When this information is unavailable but the general prevalence/incidence in the population is known, that value may be used (as the control-group value in intervention studies), or it may be estimated with the formula PONE = pO / (pNE + (pE × PR)), where pO = prevalence of the outcome, pNE = percentage of non-exposed, pE = percentage of exposed and PR = prevalence ratio (usually a value between 1.5 and 2.0).
  • Expected prevalence ratio – the ratio between the prevalence of the outcome in the exposed (intervention) group and in the non-exposed group, indicating how many times higher (or lower) the prevalence is expected to be among the exposed. It is the value the investigators expect to find under HA, the corresponding H0 being a ratio of one (similar outcome prevalence in exposed and non-exposed groups). For the sample size estimates, the expected outcome prevalence in the non-exposed group may be used, or the expected difference in prevalence between the exposed and non-exposed groups. Usually a value between 1.50 and 2.00 is used for a risk factor, or between 0.50 and 0.75 for a protective factor; for intervention studies, the clinical relevance of this value should be considered. The closer the prevalence ratio is to one (the smaller the expected difference between the groups), the larger the required sample size.
  • Type of statistical test – one-tailed or two-tailed, depending on the type of HA; two-tailed tests require larger sample sizes.

H0 = null hypothesis; HA = alternative hypothesis

      
      
      
Chart 4. Simulation of the minimum sample sizes required to test the association of each exposure (sex, skin color, schooling and income) with the use of sunscreen at work, while doing sports and at the beach, based on the prevalences reported by Duquia et al. 3 (individual table values not reproduced here).

E = exposed group; NE = non-exposed group; r = NE/E ratio; PONE = prevalence of the outcome in the non-exposed group (percentage of positives among the non-exposed), estimated with the formula from chart 3 considering a PR of 1.50; PR = prevalence ratio/incidence or expected relative risk; n = minimum necessary sample size; ND = value could not be determined, as the prevalence of the outcome in the exposed group would exceed 100% under the specified parameters.

Estimates show that studies with more power or that intend to find a difference of a lower magnitude in the frequency of the outcome (in this case, the prevalence rates) between exposed and non-exposed groups require larger sample sizes. For these reasons, in sample size calculations, an effect measure between 1.5 and 2.0 (for risk factors) or between 0.50 and 0.75 (for protective factors), and an 80% power are frequently used.
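To see how the Chart 3 parameters combine, here is a minimal sketch in Python of a common normal-approximation formula for comparing two proportions (assumed for illustration; it is not necessarily the exact method behind Chart 4, and tools such as OpenEpi apply corrections that can give somewhat larger numbers).

```python
import math
from statistics import NormalDist

def two_proportion_sample_size(p_non_exposed, prevalence_ratio, ratio=1.0,
                               alpha=0.05, power=0.80):
    """Minimum group sizes to detect a given prevalence ratio between exposed
    and non-exposed groups (two-tailed test, no continuity correction).
    `ratio` is the number of non-exposed participants per exposed participant."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p_exposed = prevalence_ratio * p_non_exposed
    r = ratio
    p_pooled = (p_exposed + r * p_non_exposed) / (1 + r)
    numerator = (z_alpha * math.sqrt(p_pooled * (1 - p_pooled) * (r + 1) / r)
                 + z_beta * math.sqrt(p_exposed * (1 - p_exposed)
                                      + p_non_exposed * (1 - p_non_exposed) / r)) ** 2
    n_exposed = math.ceil(numerator / (p_exposed - p_non_exposed) ** 2)
    return n_exposed, math.ceil(r * n_exposed)

# Hypothetical values: outcome prevalence of 25% among the non-exposed,
# expected prevalence ratio of 1.5, one non-exposed participant per exposed:
print(two_proportion_sample_size(0.25, 1.5))  # -> (215, 215), i.e. 215 per group
```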

Considering the values in each column of chart 4, we may also conclude that, as the non-exposed/exposed ratio moves away from one (similar proportions of exposed and non-exposed individuals in the sample), the sample size increases. For this reason, intervention studies usually work with the same proportion of individuals in the intervention and control groups. Analyzing the values on each line, it can be concluded that there is an inverse relationship between the prevalence of the outcome and the required sample size.

Based on these estimates, assuming that the authors intended to test all of these associations, it would be necessary to choose the largest estimated sample size (2,630 subjects). In case the required sample size is larger than the target population, the investigators may decide to perform a multicenter study, lengthen the period for data collection, modify the research question or face the possibility of not having sufficient power to draw valid conclusions.

Additional aspects need to be considered in the previous estimates to arrive at the final sample size, which may include the possibility of refusals and/or losses in the study (an additional 10-15%), the need for adjustments for confounding factors (an additional 10-20%, applicable to observational studies), the possibility of effect modification (which implies an analysis of subgroups and the need to duplicate or triplicate the sample size), as well as the existence of design effects (multiplication of sample size by 1.5 to 2.0) in case of cluster sampling.
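As a rough illustration of how these adjustments stack up (the multipliers below are simply values within the ranges mentioned in the text, not fixed rules):

```python
import math

n = 215        # minimum size per group from a calculation such as the one above
n *= 1.10      # allow 10% for refusals and losses to follow-up
n *= 1.15      # allow 15% for adjustment for confounding factors
n *= 1.5       # design effect, only if cluster sampling is used
print(math.ceil(n))  # -> 408 participants per group to recruit
```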

SAMPLE CALCULATIONS FOR STUDIES THAT AIM AT TESTING THE ASSOCIATION BETWEEN A DICHOTOMOUS EXPOSURE AND A NUMERICAL OUTCOME

Suppose that the investigators intend to evaluate whether the daily quantity of sunscreen used (in grams), the time of daily exposure to sunlight (in minutes) or a laboratory parameter (such as vitamin D levels) differ according to the socio-demographic variables mentioned. In all of these cases, the outcomes are numerical variables (discrete or continuous) 1 , and the objective is to answer whether the mean outcome in the exposed/intervention group is different from the non-exposed/control group.

In this case, the first three parameters from chart 3 (alpha error, power of the study and the relationship between non-exposed and exposed groups) are required, and the conclusions about their influence on the final sample size also apply. In addition to defining the expected outcome means in each group, or the expected mean difference between the non-exposed and exposed groups (usually at least 15% of the mean value in the non-exposed group), the investigators also need to define the standard deviation in each group. There is a direct relationship between the standard deviation and the sample size, which is why the sample size would be overestimated for asymmetric (skewed) variables. In such cases, the option may be to estimate the sample size with calculations specific to asymmetric variables, or the investigators may choose to use a percentage of the median value (for example, 25%) as a substitute for the standard deviation.
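A corresponding sketch for a numerical outcome, under the usual simplifying assumptions of equal group sizes and a common standard deviation (again a generic normal-approximation formula with made-up example values, not the article's own calculation):

```python
import math
from statistics import NormalDist

def two_mean_sample_size(sd, difference, alpha=0.05, power=0.80):
    """Minimum size per group to detect a given difference between two means,
    assuming a common standard deviation and equal group sizes."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * sd ** 2 / difference ** 2)

# e.g. mean daily sunscreen use of 10 g (SD 4 g) among the non-exposed and a
# clinically relevant difference of 15% of that mean (1.5 g) between groups:
print(two_mean_sample_size(sd=4, difference=1.5))  # -> 112 per group
```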

SAMPLE SIZE CALCULATIONS FOR OTHER TYPES OF STUDY

There are also specific calculations for some other quantitative studies, such as those aiming to assess correlations (exposure and outcome are numerical variables), time until the event (death, cure, relapse etc.) or the validity of diagnostic tests, but they are not described in this article, given that they were discussed elsewhere. 5

Sample size calculation is always an essential step during the planning of scientific studies. An insufficient or small sample size may not be able to demonstrate the desired difference, or estimate the frequency of the event of interest with acceptable precision. A very large sample may add to the complexity of the study, and its associated costs, rendering it unfeasible. Both situations are ethically unacceptable and should be avoided by the investigator.

Conflict of Interest: None

Financial Support: None

* Work carried out at the Latin American Cooperative Oncology Group (LACOG), Universidade Federal de Santa Catarina (UFSC), and Universidade Federal de Ciências da Saúde de Porto Alegre (UFCSPA), Brazil.

How to cite this article: Martínez-Mesa J, González-Chica DA, Bastos JL, Bonamigo RR, Duquia RP. Sample size: how many participants do I need in my research? An Bras Dermatol. 2014;89(4):609-15.

Quantitative vs qualitative research—what’s the difference?

Your complete guide to quantitative vs qualitative research, and how to best use each for better business outcomes.

Contents:

  • What is qualitative research?
  • What is quantitative research?
  • Why do quantitative research?
  • Why do qualitative research?
  • The pros and cons of quantitative research
  • The pros and cons of qualitative research
  • How to do quantitative research
  • How to do qualitative research
  • How to analyse quantitative research data
  • How to analyse qualitative research data
  • In conclusion

What’s the difference between quantitative vs qualitative research? Are you thinking about launching a new product or service, or developing new features for an existing one? Market research is the essential first move for brands, providing valuable information to guide the process and provide the highest likelihood of success.

There are two main types of research that marketers should engage in to effectively profile their customer base—quantitative and qualitative market research . You’ve probably heard the terms before, but do you know what they mean, or most importantly, when and where it’s best to use each type?

New to market research? Watch our intro to qualitative vs. quantitative market research below.

And read on to understand the two research methods, the type of data each produces and how you can use that to build and test concepts effectively. Good market research is always worth investing in, whether you’re a startup just beginning your journey or a bigger company competing in a wide playing field.

What is qualitative research?

Qualitative research seeks more in-depth, free-form answers from respondents, either in person or via open-text responses.

This type of research is usually carried out with small groups and takes the form of in-person focus groups, telephone interviews or detailed surveys with free text responses. The method is used to gather anecdotal views and opinions, which inform generally rather than offer hard data.   

What is quantitative research?

Quantitative research, as the name suggests, is primarily about numbers. It generally involves surveying a large group of people (usually at least several hundred and often thousands), using a structured questionnaire that contains predominantly closed-ended, or forced-choice, questions.

This is so that findings may be expressed numerically, enabling companies to garner statistics upon which plans and predictions can be made.

Quantitative research enables brands to profile a target audience by measuring what proportion has certain behaviours, behavioural intentions, attitudes, and knowledge. Learn more about how to create an ideal customer profile using consumer insights.

Why do quantitative research?

In the planning stages for a new product or service, the quantitative method can help establish the importance of specific customer needs and validate the best product concept.

It can also be used as a deductive process to test pre-specified concepts and theories, such as, “working mothers are time-poor and find cooking a healthy meal for their family every evening a challenge.”

Quantitative research can help you answer questions such as “how many” and “how often” and is invaluable when putting together a business case before launching a new product or service, or proposing changes to existing ones.

The statistically robust results that can be derived from quantitative research are good for estimating the probability of success.

As well as helping you validate the marketplace and demand for your particular product or service, quant surveys can be used to shape your market proposition and gain understanding of how to market to your target audience. You can also run quantitative research into your competition to make sure you fully understand where you fit into your category.

You can garner data to determine things such as the best price point or places to advertise by looking at respondents’ price sensitivity or media usage.

But quantitative research is not just for the planning stage of your product or service; you can employ it further down the line to test customer satisfaction or assess the proportion of a target audience that recalls a message, for example.

Why do qualitative research?

Numerical (quantitative) research can measure behaviours, but it can't necessarily tell you why customers behave as they do (or how to change that behaviour). That's where qualitative research comes in, providing brands with a more in-depth look into their customers' psyches, with feedback right from the horse's mouth. It helps to answer 'why?'

It’s best used for more deeply exploring a topic or idea, when you want unprompted and unbound input rather than set answers to structured questions. Qualitative research is a primarily inductive process used to formulate theory rather than test existing ones. It helps brands to gain an insight into a target audience’s lifestyle, culture, preferences and motivations.

Like quantitative research, it can help identify customer needs . The results will be much more subjective but can be used to shape quantitative surveys that will validate the findings.

For example, you may ask an open-ended question such as 'What is most important to you when it comes to dining out?', then take the most common free-text answers and validate them with a larger number of consumers using a quantitative survey, with fixed-choice options based on the answers you got in your preliminary qual research.

You can also employ the two methods in the opposite direction – using quantitative research to gain statistics on behaviour or beliefs, and then qualitative to discover the reasons behind those behaviours or beliefs. It helps brands to better understand the context of the data.

Qualitative research can be very useful when it comes to developing brand image and marketing campaigns, since you can capture the language and imagery customers use to describe and relate to products and services in their own words.

Likewise, you can understand how people perceive a marketing message or communication piece and get their reactions to graphic identity or packaging designs.

Because qualitative research is conducted among smaller groups it’s ideal for exploring different market segments, as well as getting input from key informants who may be outside your target audience (such as industry experts).


Power your startup with quant and qual market research

Read our guide to market research for startups to unlock top strategies for long-term success with your target market.

The pros and cons of quantitative research

Pros:

  • Objectivity: quantitative research is numerical. Therefore, the results are clear and are harder to misinterpret. The survey can also be easily repeated and you can reliably track changes over time.
  • Easy to analyse: because responses are numeric you can use statistical analysis to gain additional insight from the data.
  • Quick: because you're asking closed questions, it usually means data can be collected more quickly (because it's easier for people to answer), while digital tools such as Attest can be used to easily analyse the results.
  • Ability to generalise: when the survey involves a statistically valid random sample, you can generalise your findings beyond your participant group and make decisions with confidence.

Cons:

  • Big sample needed: quantitative research requires a large sample of the population to deliver reliable results. The larger the sample of people, the more statistically accurate the outputs will be.
  • Limited answers: because results of quantitative research must be numeric, free text responses can not be permitted, meaning contextual detail may be missing.
  • Potential for bias: those willing to respond to surveys may share characteristics that don't apply to the audience as a whole, creating a potential bias in the study.
  • Wording is crucial: to be confident in the results of quant surveys, you have to be confident you're asking the right questions, in the right way, with the correct answer-options included.

The pros and cons of qualitative research

Pros:

  • More detailed: qualitative research offers a deeper understanding, with the ability to explore topics in more detail.
  • Unprompted feedback: open-ended questions facilitate unprompted responses, vital for testing things where you don't want to bias the outcome with prompts (such as for unprompted brand recall).
  • Taps consumer creativity: generate ideas for improvements and/or extensions of a product, line, or brand.
  • Smaller sample needed: you don't need to recruit as many participants.

Cons:

  • Less measurable: with free text answers, it's more difficult to quantify how many of your audience answer one way or another, and the data set is less accessible for statistical interrogation.
  • Can't generalise: qualitative research does not give statistically robust findings, and you therefore cannot generalise to your broader audience – although if followed up with quant research this is easy to remedy.
  • Not repeatable: freeform interviewing makes it difficult to track changes over time.

How to do quantitative research

When you design a quantitative research survey all questions must be closed-ended, with pre-defined answers. These can take a variety of forms:

  • Dichotomous – “yes/no”
  • Multiple-choice – select one or more options from a list
  • Rank order scaling – reorder a list by, for example, order of importance or preference
  • Rating scale – select a rating such as “satisfied” or “extremely satisfied”
  • Semantic differential scale – select a number on a scale (e.g. 1-10)

Because you want results to be easily measurable, you need to think carefully about the answer options to make them as inclusive as possible and thus minimise the number of respondents who select "other" (but do be sure to include "other" or "don't know" as an option).

Avoid loaded questions, which make assumptions that might not be relevant to all being surveyed, such as, “When you buy hair gel, is packaging important to you?” with “yes/no” as answer options – it may be that they don’t purchase hair gel at all and would be unable to answer truthfully. This could lead to abandoned surveys or skewed results.   

How to do qualitative research

Although qualitative research is less structured than quantitative, it's still necessary to plan the topics that will be discussed and what information you aim to glean.

You should develop a set of clear and specific questions, otherwise the input will be unmanageable. For example, asking a group of horse riders to tell you their biggest frustration with their hobby is too broad a question.

Participants will struggle to answer and the researcher will struggle to draw meaningful data. Work instead on narrowing it down to, for example, their biggest frustrations with grooming or with feeding.

Design your questions so they are open-ended and cannot be answered with a simple "yes" or "no" – the point of qualitative research is to obtain more in-depth understanding. Open-ended questions might start:

  • What do you think about…

Generally, you’re aiming for more than a one-word answer; you want to probe the thoughts, beliefs and emotions of the participants. This will help you understand their behaviours.

Qualitative research is also useful for obtaining unprompted recall, so you might ask participants to think of a brand they’ve seen advertised on the TV recently and name it.

Qualitative research is not restricted to in-person interviews; it can be carried out via digital survey by using free-text responses.

Get qualitative and quantitative insights from market research tools

For real customer insights, you need to use the right tools—that’s why we created a list of top market research software.

How to analyse quantitative research data

Surveying tools should come with a range of options to help you work with the data, such as cross-tabbing and filters which enable you to observe answers by demographic combinations (variables). You can also export data to Excel where you can use features such as pivot tables and descriptive statistics.

There are three core types of analysis:

  • Univariate – analyse by one variable, such as gender
  • Bivariate – analyse by two variables, such as gender and age
  • Multivariate – analyse by several variables, such as gender, age and education
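As a small illustration of these three levels of analysis on a made-up survey export (pandas is assumed; the column names and data are hypothetical):

```python
import pandas as pd

# One row per respondent, as exported from a survey tool.
df = pd.DataFrame({
    "gender":       ["female", "male", "female", "male", "female", "male"],
    "age_group":    ["18-34", "18-34", "35-54", "35-54", "18-34", "35-54"],
    "uses_product": ["yes", "no", "yes", "yes", "no", "yes"],
})

# Univariate: frequency of a single variable.
print(df["uses_product"].value_counts(normalize=True))

# Bivariate: cross-tab of answers by gender.
print(pd.crosstab(df["gender"], df["uses_product"], normalize="index"))

# Multivariate: cross-tab by gender and age group together.
print(pd.crosstab([df["gender"], df["age_group"]], df["uses_product"]))
```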

To help you visualise the results, you can use data visualisation tools which take your data and put it into graphs and charts…or you could simply use Attest! Meanwhile, you can utilise Excel’s Prediction Calculator tool to create a scorecard that can be used to evaluate options or risk (probability).

How to analyse qualitative research data

Qualitative research results cannot be analysed in the same way as quantitative data or expressed as percentages; rather, the output should be thought of as themes.

You can organise the results using coding. In coding, you assign a word, phrase, or number to each category, such as “pricing” or “barriers to entry”. You then go through all of your data in a systematic way and “code” ideas, concepts and themes as they fit categories.

Another way to get a feel for the overall themes is to use a basic text analysis tool , which allows you to find the most frequent phrases and frequencies of words. Or use more sophisticated software to mine text for themes, alongside analysing for sentiment and subjectivity.
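A very basic version of that kind of frequency count takes only a few lines; the responses and stop-word list below are made up purely to show the idea.

```python
import re
from collections import Counter

# Hypothetical free-text answers from an open-ended question.
responses = [
    "Pricing is the biggest barrier for me",
    "I love the pricing but delivery is slow",
    "Delivery times put me off more than the pricing",
]

stopwords = {"is", "the", "for", "me", "but", "i", "than", "more", "off", "put"}
words = [word
         for response in responses
         for word in re.findall(r"[a-z']+", response.lower())
         if word not in stopwords]

print(Counter(words).most_common(5))
# -> [('pricing', 3), ('delivery', 2), ...] plus single-occurrence words
```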

To see keywords visually depicted, use a wordcloud generator – simply paste text or upload a document to generate a graphic which illustrates the frequency of words by giving them more or less prominence in the design.

In conclusion

Market research has a dozen uses, from helping you calculate market size for new product development to helping your team understand customer needs. Quantitative and qualitative research both have their place as valuable tools for market research, and a mix of both should be carried out whenever you're extending product lines or launching something new.

Both methods can work hand-in-hand; brands can use qualitative research for developing concepts and theories, and quantitative for testing pre-existing ones.

You can also use free-form qualitative research to guide the creation of more structured quantitative surveys. And following quantitative surveys, turn to qualitative to better understand the context of the responses!

Get started with our market analysis survey template

Our flexible market analysis template makes gathering consumer data simple with easy-to-digest insights.



