There is no longer any question that qualitative inquiry is fundamental to the enterprise of social science research, with a broad reach and a history all its own. This book seeks to introduce, or reintroduce, readers to selections that provide a solid intellectual grounding in the area of qualitative research. Thoughtfully and painstakingly culled from over a thousand candidate articles by co-editors Michael Huberman and the late Matthew B. Miles (co-authors of the seminal Qualitative Data Analysis), The Qualitative Researcher's Companion examines the theoretical underpinnings, methodological perspectives, and empirical approaches that are crucial to the understanding and practice of qualitative inquiry. Incisive, provocative, and drawn from across the many disciplines that employ qualitative inquiry, The Qualitative Researcher's Companion is a key addition to the bookshelf of anyone involved in the research act.
Building Theories From Case Study Research
- In Part I: Theories and Analysis
- By: Kathleen M. Eisenhardt
- In: The Qualitative Researcher's Companion
- Chapter DOI: https://doi.org/10.4135/9781412986274.n1
- Subject: Anthropology, Business and Management, Criminology and Criminal Justice, Communication and Media Studies, Counseling and Psychotherapy, Economics, Education, Geography, Health, History, Marketing, Nursing, Political Science and International Relations, Psychology, Social Policy and Public Policy, Social Work, Sociology
- Keywords: conflict; emergent theory; organizations; population; strategic decision making; teams
Development of theory is a central activity in organizational research. Traditionally, authors have developed theory by combining observations from previous literature, common sense, and experience. However, the tie to actual data has often been tenuous (Perrow, 1986; Pfeffer, 1982). Yet, as Glaser and Strauss (1967) argue, it is the intimate connection with empirical reality that permits the development of a testable, relevant, and valid theory.
AUTHOR'S NOTE: I appreciate the helpful comments of Paul Adler, Kenneth Bettenhausen, Constance Gersick, James Frederickson, James Jucker, Deborah Myerson, Dorothy Leonard-Barton, Robert Sutton, and the participants in the Stanford NIMH Colloquium. I also benefited from informal conversations with many participants at the National Science Foundation Conference on Longitudinal Research Methods in Organizations, Austin, 1988.
Reprinted from Kathleen M. Eisenhardt, “Building Theories From Case Study Research,” Academy of Management Review, 14(4), 532–550. Copyright 1989 by the Academy of Management. Reproduced with permission of the Academy of Management in the format Textbook via Copyright Clearance Center.
This paper describes building theories from case studies. Several aspects of this process are discussed in the literature. For example, Glaser and Strauss (1967) detailed a comparative method for developing grounded theory, Yin (1981, 1984) described the design of case study research, and Miles and Huberman (1984) codified a series of procedures for analyzing qualitative data. However, confusion surrounds the distinctions among qualitative data, inductive logic, and case study research. Also, there is a lack of clarity about the process of actually building theory from cases, especially regarding the central inductive process and the role of literature. Glaser and Strauss (1967) and more recently Strauss (1987) have outlined pieces of the process, but theirs is a prescribed formula, and new ideas have emerged from methodologists (e.g., Yin, 1984; Miles & Huberman, 1984) and researchers conducting this type of research (e.g., Gersick, 1988; Harris & Sutton, 1986; Eisenhardt & Bourgeois, 1988). Also, it appears that no one has explicitly examined when this theory-building approach is likely to be fruitful and what its strengths and weaknesses may be.
This paper attempts to make two contributions to the literature. The first is a road map for building theories from case study research. This road map synthesizes previous work on qualitative methods (e.g., Miles & Huberman, 1984), the design of case study research (e.g., Yin, 1981, 1984), and grounded theory building (e.g., Glaser & Strauss, 1967) and extends that work in areas such as a priori specification of constructs, triangulation of multiple investigators, within-case and cross-case analyses, and the role of existing literature. The result is a more nearly complete road map for executing this type of research than has existed in the past. This framework is summarized in Table 1.1.
The second contribution is positioning theory building from case studies into the larger context of social science research. For example, the paper explores strengths and weaknesses of theory building from case studies, situations in which it is an attractive research approach, and some guidelines for evaluating this type of research.
Several pieces of the process of building theory from case study research have appeared in the literature. One is the work on grounded [Page 7] theory building by Glaser and Strauss (1967) and, more recently, Strauss (1987). These authors have detailed their comparative method for developing grounded theory. The method relies on continuous comparison of data and theory beginning with data collection. It emphasizes both the emergence of theoretical categories solely from evidence and an incremental approach to case selection and data gathering.
More recently, Yin (1981, 1984) has described the design of case study research. He has defined the case study as a research strategy, developed a typology of case study designs, and described the replication logic which is essential to multiple case analysis. His approach also stresses bringing the concerns of validity and reliability in experimental research design to the design of case study research.
Miles and Huberman (1984) have outlined specific techniques for analyzing qualitative data. Their ideas include a variety of devices such as tabular displays and graphs to manage and present qualitative data, without destroying the meaning of the data through intensive coding.
A number of active researchers also have undertaken their own variations and additions to the earlier methodological work (e.g., Gersick, 1988; Leonard-Barton, 1988; Harris & Sutton, 1986). Many of these authors acknowledge a debt to previous work, but they have also developed their own “homegrown” techniques for building theory from cases. For example, Sutton and Callahan (1987) pioneered a clever use of a resident devil's advocate, the Warwick group (Pettigrew, 1988) added triangulation of investigators, and my colleague and I (Bourgeois & Eisenhardt, 1988) developed cross-case analysis techniques.
Finally, the work of others such as Van Maanen (1988) on ethnography, Jick (1979) on triangulation of data types, and Mintzberg (1979) on direct research has provided additional pieces for a framework of building theory from case study research.
As a result, many pieces of the theory-building process are evident in the literature. At the same time, however, there is substantial confusion about how to combine them, when to conduct this type of study, and how to evaluate it.
The Case Study Approach
The case study is a research strategy which focuses on understanding the dynamics present within single settings. Examples of case study [Page 9] research include Selznick's (1949) description of TVA, Allison's (1971) study of the Cuban missile crisis, and Pettigrew's (1973) research on decision making at a British retailer. Case studies can involve either single or multiple cases, and numerous levels of analysis (Yin, 1984). For example, Harris and Sutton (1986) studied 8 dying organizations, Bettenhausen and Murnighan (1986) focused on the emergence of norms in 19 laboratory groups, and Leonard-Barton (1988) tracked the progress of 10 innovation projects. Moreover, case studies can employ an embedded design, that is, multiple levels of analysis within a single study (Yin, 1984). For example, the Warwick study of competitiveness and strategic change within major U.K. corporations is conducted at two levels of analysis: industry and firm (Pettigrew, 1988), and the Mintzberg and Waters (1982) study of Steinberg's grocery empire examines multiple strategic changes within a single firm.
Case studies typically combine data collection methods such as archives, interviews, questionnaires, and observations. The evidence may be qualitative (e.g., words), quantitative (e.g., numbers), or both. For example, Sutton and Callahan (1987) rely exclusively on qualitative data in their study of bankruptcy in Silicon Valley, Mintzberg and McHugh (1985) use qualitative data supplemented by frequency counts in their work on the National Film Board of Canada, and Eisenhardt and Bourgeois (1988) combine quantitative data from questionnaires with qualitative evidence from interviews and observations.
Finally, case studies can be used to accomplish various aims: to provide description (Kidder, 1982), test theory (Pinfield, 1986; Anderson, 1983), or generate theory (e.g., Gersick, 1988; Harris & Sutton, 1986). The interest here is in this last aim, theory generation from case study evidence. Table 1.2 summarizes some recent research using theory building from case studies.
Building Theory from Case Study Research
Getting Started
An initial definition of the research question, in at least broad terms, is important in building theory from case studies. Mintzberg (1979, p. 585) noted: “No matter how small our sample or what our interest, we have always tried to go into organizations with a well-defined [Page 10] focus—to collect specific kinds of data systematically.” The rationale for defining the research question is the same as it is in hypothesis-testing research. Without a research focus, it is easy to become overwhelmed by the volume of data. For example, Pettigrew and colleagues (1988) defined their research question in terms of strategic change and competitiveness within large British corporations, and Leonard-Barton (1988) focused on technical innovation of feasible technologies. Such definition of a research question within a broad topic permitted these investigators to specify the kind of organization to be approached, and, once there, the kind of data to be gathered.
A priori specification of constructs can also help to shape the initial design of theory-building research. Although this type of specification is not common in theory-building studies to date, it is valuable because it permits researchers to measure constructs more accurately. If these constructs prove important as the study progresses, then researchers have a firmer empirical grounding for the emergent theory. For example, in a study of strategic decision making in top management teams, Bourgeois and Eisenhardt (1988) identified several potentially important constructs (e.g., conflict, power) from the literature on decision making. These constructs were explicitly measured in the interview protocol and questionnaires. When several of these constructs did emerge as related to the decision process, there were strong, triangulated measures on which to ground the emergent theory.
Although early identification of the research question and possible constructs is helpful, it is equally important to recognize that both are tentative in this type of research. No construct is guaranteed a place in the resultant theory, no matter how well it is measured. Also, the research question may shift during the research. At the extreme, some researchers (e.g., Gersick, 1988; Bettenhausen & Murnighan, 1986) have converted theory-testing research into theory-building research by taking advantage of serendipitous findings. In these studies, the research focus emerged after the data collection had begun. As Bettenhausen and Murnighan (1986, p. 352) wrote: “ … we observed the outcomes of an experiment on group decision making and coalition formation. Our observations of the groups indicated that the unique character of each of the groups seemed to overwhelm our other manipulations.” These authors proceeded to switch their research focus to a theory-building study of group norms.
Finally and most importantly, theory-building research is begun as close as possible to the ideal of no theory under consideration and no hypotheses to test. Admittedly, it is impossible to achieve this ideal of a clean theoretical slate. Nonetheless, attempting to approach this ideal is important because preordained theoretical perspectives or propositions may bias and limit the findings. Thus, investigators should formulate a research problem and possibly specify some potentially important variables, with some reference to extant literature. However, they should avoid thinking about specific relationships between variables and theories as much as possible, especially at the outset of the process.
Selecting Cases
Selection of cases is an important aspect of building theory from case studies. As in hypothesis-testing research, the concept of a population is crucial, because the population defines the set of entities from which the research sample is to be drawn. Also, selection of an appropriate population controls extraneous variation and helps to define the limits for generalizing the findings.
The Warwick study of strategic change and competitiveness illustrates these ideas (Pettigrew, 1988). In this study, the researchers selected cases from a population of large British corporations in four market sectors. The selection of four specific markets allowed the researchers to control environmental variation, while the focus on large corporations constrained variation due to size differences among the firms. Thus, specification of this population reduced extraneous variation and clarified the domain of the findings as large corporations operating in specific types of environments.
However, the sampling of cases from the chosen population is unusual when building theory from case studies. Such research relies on theoretical sampling (i.e., cases are chosen for theoretical, not statistical, reasons; Glaser & Strauss, 1967). The cases may be chosen to replicate previous cases or extend emergent theory, or they may be chosen to fill theoretical categories and provide examples of polar types. While the cases may be chosen randomly, random selection is neither necessary nor even preferable. As Pettigrew (1988) noted, given the limited number of cases which can usually be studied, it makes sense to choose cases such as extreme situations and polar types in which the process [Page 13] of interest is “transparently observable.” Thus, the goal of theoretical sampling is to choose cases which are likely to replicate or extend the emergent theory. In contrast, traditional, within-experiment hypothesis-testing studies rely on statistical sampling, in which researchers randomly select the sample from the population. In this type of study, the goal of the sampling process is to obtain accurate statistical evidence on the distributions of variables within the population.
Several studies illustrate theoretical sampling. Harris and Sutton (1986), for example, were interested in the parting ceremonies of dying organizations. In order to build a model applicable across organization types, these researchers purposefully selected diverse organizations from a population of dying organizations. They chose eight organizations, filling each of four categories: private, dependent; private, independent; public, dependent; and public, independent. The sample was not random, but reflected the selection of specific cases to extend the theory to a broad range of organizations. Multiple cases within each category allowed findings to be replicated within categories. Gersick (1988) followed a similar strategy of diverse sampling in order to enhance the generalizability of her model of group development. In the Warwick study (Pettigrew, 1988), the investigators also followed a deliberate, theoretical sampling plan. Within each of four markets, they chose polar types: one case of clearly successful firm performance and one unsuccessful case. This sampling plan was designed to build theories of success and failure. Finally, the Eisenhardt and Bourgeois (1988) study of the politics of strategic decision making illustrates theoretical sampling during the course of research. A theory linking the centralization of power to the use of politics in top management teams was built and then extended to consider the effects of changing team composition by adding two cases, in which the executive teams changed, to the first six, in which there was no change. This tactic allowed the initial framework to be extended to include dynamic effects of changing team composition.
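To make the category-filling logic of theoretical sampling concrete, the following is a minimal sketch, in Python, of a sampling frame in the spirit of the Harris and Sutton (1986) example: four cells crossing ownership and dependence, with cases deliberately chosen to fill each cell. The organization names, candidate pool, and cell quota are invented for illustration and are not drawn from the studies cited above.

```python
from itertools import product

# Hypothetical sampling frame: two dimensions, four cells, two cases per cell
# so that findings can be replicated within categories.
OWNERSHIP = ["private", "public"]
DEPENDENCE = ["dependent", "independent"]
CASES_PER_CELL = 2

candidate_pool = {
    ("private", "dependent"):   ["Org A", "Org B", "Org C"],
    ("private", "independent"): ["Org D", "Org E"],
    ("public", "dependent"):    ["Org F", "Org G"],
    ("public", "independent"):  ["Org H", "Org I", "Org J"],
}

sample = {}
for cell in product(OWNERSHIP, DEPENDENCE):
    # Cases are chosen for theoretical reasons (e.g., the process of interest
    # is transparently observable), not at random; the sketch simply takes the
    # first candidates in each cell.
    sample[cell] = candidate_pool[cell][:CASES_PER_CELL]

for cell, orgs in sample.items():
    print(f"{cell[0]:>7}, {cell[1]:<11} -> {orgs}")
```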
Crafting Instruments and Protocols
Theory-building researchers typically combine multiple data collection methods. While interviews, observations, and archival sources are particularly common, inductive researchers are not confined to these choices. Some investigators employ only some of these data [Page 14] collection methods (e.g., Gersick, 1988, used only observations for the first half of her study), or they may add others (e.g., Bettenhausen & Murnighan, 1986, used quantitative laboratory data). The rationale is the same as in hypothesis-testing research. That is, the triangulation made possible by multiple data collection methods provides stronger substantiation of constructs and hypotheses.
Of special note is the combining of qualitative with quantitative evidence. Although the terms qualitative and case study are often used interchangeably (e.g., Yin, 1981), case study research can involve qualitative data only, quantitative only, or both (Yin, 1984). Moreover, the combination of data types can be highly synergistic. Quantitative evidence can indicate relationships which may not be salient to the researcher. It also can keep researchers from being carried away by vivid, but false, impressions in qualitative data, and it can bolster findings when it corroborates those findings from qualitative evidence. The qualitative data are useful for understanding the rationale or theory underlying relationships revealed in the quantitative data or may suggest directly theory which can then be strengthened by quantitative support (Jick, 1979). Mintzberg (1979) described this synergy as follows:
For while systematic data create the foundation for our theories, it is the anecdotal data that enable us to do the building. Theory building seems to require rich description, the richness that comes from anecdote. We uncover all kinds of relationships in our hard data, but it is only through the use of this soft data that we are able to explain them. (p. 587)
Also, of special note is the use of multiple investigators. Multiple investigators have two key advantages. First, they enhance the creative potential of the study. Team members often have complementary insights which add to the richness of the data, and their different perspectives increase the likelihood of capitalizing on any novel insights which may be in the data. Second, the convergence of observations from multiple investigators enhances confidence in the findings. Convergent perceptions add to the empirical grounding of the hypotheses, while conflicting perceptions keep the group from premature closure. Thus, the use of more investigators builds confidence in the findings and increases the likelihood of surprising findings.
One strategy for employing multiple investigators is to make the visits to case study sites in teams (e.g., Pettigrew, 1988). This allows the [Page 15] case to be viewed from the different perspectives of multiple observers. A variation on this tactic is to give individuals on the team unique roles, which increases the chances that investigators will view case evidence in divergent ways. For example, interviews can be conducted by two-person teams, with one researcher handling the interview questions, while the other records notes and observations (e.g., Eisenhardt & Bourgeois, 1988). The interviewer has the perspective of personal interaction with the informant, while the notetaker retains a different, more distant view. Another tactic is to create multiple research teams, with teams being assigned to cover some case sites, but not others (e.g., Pettigrew, 1988). The rationale behind this tactic is that investigators who have not met the informants and have not become immersed in case details may bring a very different and possibly more objective eye to the evidence. An extreme form of this tactic is to keep some member or members of the research team out of the field altogether by exclusively assigning to them the role of resident devil's advocate (e.g., Sutton & Callahan, 1987).
Entering the Field
A striking feature of research to build theory from case studies is the frequent overlap of data analysis with data collection. For example, Glaser and Strauss (1967) argue for joint collection, coding, and analysis of data. While many researchers do not achieve this degree of overlap, most maintain some overlap.
Field notes, a running commentary to oneself and/or research team, are an important means of accomplishing this overlap. As described by Van Maanen (1988), field notes are an ongoing stream-of-consciousness commentary about what is happening in the research, involving both observation and analysis—preferably separated from one another.
One key to useful field notes is to write down whatever impressions occur, that is, to react rather than to sift out what may seem important, because it is often difficult to know what will and will not be useful in the future. A second key to successful field notes is to push thinking in these notes by asking questions such as “What am I learning?” and “How does this case differ from the last?” For example, Burgelman (1983) kept extensive idea booklets to record his ongoing thoughts in a study of internal corporate venturing. These ideas can be [Page 16] cross-case comparisons, hunches about relationships, anecdotes, and informal observations. Team meetings, in which investigators share their thoughts and emergent ideas, are also useful devices for overlapping data collection and analysis.
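The following is an illustrative sketch, not a prescribed format, of how a single field-note entry might be structured so that observation is kept separate from running analysis and the reflective prompts above are built in. All field names and the example content are assumptions made for the sketch.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class FieldNote:
    case_id: str
    note_date: date
    observation: str                  # what happened, recorded without sifting
    running_analysis: str             # hunches, cross-case comparisons, anecdotes
    what_am_i_learning: str = ""
    how_does_this_case_differ: str = ""
    follow_ups: list[str] = field(default_factory=list)

note = FieldNote(
    case_id="Case 3",
    note_date=date(2024, 5, 14),
    observation="CEO deferred twice to the CFO on pricing decisions.",
    running_analysis="Power may be more decentralized here than in Case 2.",
    what_am_i_learning="Decision speed seems tied to who frames the options.",
    how_does_this_case_differ="Case 2's CEO framed every alternative personally.",
    follow_ups=["Ask the CFO how pricing alternatives are generated."],
)
print(note.case_id, "-", note.running_analysis)
```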
Overlapping data analysis with data collection not only gives the researcher a head start in analysis but, more importantly, allows researchers to take advantage of flexible data collection. Indeed, a key feature of theory-building case research is the freedom to make adjustments during the data collection process. These adjustments can be the addition of cases to probe particular themes which emerge. Gersick (1988), for example, added several cases to her original set of student teams in order to more closely observe transition point behaviors among project teams. These transition point behaviors had unexpectedly proved interesting, and Gersick added cases in order to focus more closely on the transition period.
Additional adjustments can be made to data collection instruments, such as the addition of questions to an interview protocol or questions to a questionnaire (e.g., Harris & Sutton, 1986). These adjustments allow the researcher to probe emergent themes or to take advantage of special opportunities which may be present in a given situation. In other situations adjustments can include the addition of data sources in selected cases. For example, Sutton and Callahan (1987) added observational evidence for one case when the opportunity to attend creditors' meetings arose, and Burgelman (1983) added interviews with individuals whose importance became clear during data collection. Leonard-Barton (1988) went even further by adding several experiments to probe her emergent theory in a study of the implementation of technical innovations.
These alterations create an important question: Is it legitimate to alter and even add data collection methods during a study? For theory-building research, the answer is “yes,” because investigators are trying to understand each case individually and in as much depth as is feasible. The goal is not to produce summary statistics about a set of observations. Thus, if a new data collection opportunity arises or if a new line of thinking emerges during the research, it makes sense to take advantage by altering data collection, if such an alteration is likely to better ground the theory or to provide new theoretical insight. This flexibility is not a license to be unsystematic. Rather, this flexibility is controlled opportunism in which researchers take advantage of [Page 17] the uniqueness of a specific case and the emergence of new themes to improve resultant theory.
Analyzing Within-Case Data
Analyzing data is the heart of building theory from case studies, but it is both the most difficult and the least codified part of the process. Since published studies generally describe research sites and data collection methods, but give little space to discussion of analysis, a huge chasm often separates data from conclusions. As Miles and Huberman (1984, p. 16) wrote: “One cannot ordinarily follow how a researcher got from 3600 pages of field notes to the final conclusions, sprinkled with vivid quotes though they may be.” However, several key features of analysis can be identified.
One key step is within-case analysis. The importance of within-case analysis is driven by one of the realities of case study research: a staggering volume of data. As Pettigrew (1988) described, there is an ever-present danger of “death by data asphyxiation.” For example, Mintzberg and McHugh (1985) examined over 2500 movies in their study of strategy making at the National Film Board of Canada—and that was only part of their evidence. The volume of data is all the more daunting because the research problem is often open-ended. Within-case analysis can help investigators cope with this deluge of data.
Within-case analysis typically involves detailed case study write-ups for each site. These write-ups are often simply pure descriptions, but they are central to the generation of insight (Gersick, 1988; Pettigrew, 1988) because they help researchers to cope early in the analysis process with the often enormous volume of data. However, there is no standard format for such analysis. Quinn (1980) developed teaching cases for each of the firms in his study of strategic decision making in six major corporations as a prelude to his theoretical work. Mintzberg and McHugh (1985) compiled a 383-page case history of the National Film Board of Canada. These authors coupled narrative description with extensive use of longitudinal graphs tracking revenue, film sponsorship, staffing, film subjects, and so on. Gersick (1988) prepared transcripts of team meetings. Leonard-Barton (1988) used tabular displays and graphs of information about each case. Abbott (1988) suggested using sequence analysis to organize longitudinal data. In fact, there are probably as many approaches as researchers. However, the overall idea [Page 18] is to become intimately familiar with each case as a stand-alone entity. This process allows the unique patterns of each case to emerge before investigators push to generalize patterns across cases. In addition, it gives investigators a rich familiarity with each case which, in turn, accelerates cross-case comparison.
Searching for Cross-Case Patterns
Coupled with within-case analysis is cross-case search for patterns. The tactics here are driven by the reality that people are notoriously poor processors of information. They leap to conclusions based on limited data (Kahneman & Tversky, 1973), they are overly influenced by the vividness of the data (Nisbett & Ross, 1980) or by more elite respondents (Miles & Huberman, 1984), they ignore basic statistical properties (Kahneman & Tversky, 1973), or they sometimes inadvertently drop disconfirming evidence (Nisbett & Ross, 1980). The danger is that investigators reach premature and even false conclusions as a result of these information-processing biases. Thus, the key to good cross-case comparison is counteracting these tendencies by looking at the data in many divergent ways.
One tactic is to select categories or dimensions, and then to look for within-group similarities coupled with intergroup differences. Dimensions can be suggested by the research problem or by existing literature, or the researcher can simply choose some dimensions. For example, in a study of strategic decision making, Bourgeois and Eisenhardt (1988) sifted cases into various categories including founder-run vs. professional management, high vs. low performance, first vs. second generation product, and large vs. small size. Some categories such as size and product generation revealed no clear patterns, but others such as performance led to important patterns of within-group similarity and across-group differences. An extension of this tactic is to use a 2 × 2 or other cell design to compare several categories at once, or to move to a continuous measurement scale which permits graphing.
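As a rough sketch of this first tactic, the snippet below sifts a handful of cases into categories (here, high vs. low performance) and lists the coded values of another attribute within each group, so that within-group similarities and intergroup differences become visible. The cases, dimensions, and codes are invented placeholders, not data from the studies cited in the text.

```python
from collections import defaultdict

cases = [
    {"case": "Firm 1", "performance": "high", "politics": "low"},
    {"case": "Firm 2", "performance": "high", "politics": "low"},
    {"case": "Firm 3", "performance": "high", "politics": "moderate"},
    {"case": "Firm 4", "performance": "low",  "politics": "high"},
    {"case": "Firm 5", "performance": "low",  "politics": "high"},
]

groups = defaultdict(list)
for c in cases:
    groups[c["performance"]].append(c["politics"])

for category, codes in groups.items():
    print(f"{category}-performance firms -> politics codes: {sorted(codes)}")
# If codes cluster within each category but differ across categories, the
# dimension is a candidate for the emergent theory; if not, try another one.
```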
A second tactic is to select pairs of cases and then to list the similarities and differences between each pair. This tactic forces researchers to look for the subtle similarities and differences between cases. The juxtaposition of seemingly similar cases by a researcher looking for differences can break simplistic frames. In the same way, the search [Page 19] for similarity in a seemingly different pair also can lead to more sophisticated understanding. The result of these forced comparisons can be new categories and concepts which the investigators did not anticipate. For example, Eisenhardt and Bourgeois (1988) found that CEO power differences dominated initial impressions across firms. However, this paired comparison process led the researchers to see that the speed of the decision process was equally important (Eisenhardt, in press). Finally, an extension of this tactic is to group cases into threes or fours for comparison.
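A small sketch of the paired-comparison tactic follows: for every pair of cases, list the attributes on which they agree and those on which they differ, which forces attention to subtle similarities and differences. The case names and coded attributes are hypothetical, chosen only to show the mechanics.

```python
from itertools import combinations

cases = {
    "Firm 1": {"founder_run": True,  "power": "centralized",   "decision_speed": "fast"},
    "Firm 2": {"founder_run": True,  "power": "decentralized", "decision_speed": "fast"},
    "Firm 3": {"founder_run": False, "power": "centralized",   "decision_speed": "slow"},
}

for (name_a, a), (name_b, b) in combinations(cases.items(), 2):
    shared  = {k: a[k] for k in a if a[k] == b[k]}
    differs = {k: (a[k], b[k]) for k in a if a[k] != b[k]}
    print(f"{name_a} vs {name_b}")
    print(f"  similar  : {shared}")
    print(f"  different: {differs}")
```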
A third strategy is to divide the data by data source. For example, one researcher combs observational data, while another reviews interviews, and still another works with questionnaire evidence. This tactic was used in the separation of the analyses of qualitative and quantitative data in a study of strategic decision making (Bourgeois & Eisenhardt, 1988; Eisenhardt & Bourgeois, 1988). This tactic exploits the unique insights possible from different types of data collection. When a pattern from one data source is corroborated by the evidence from another, the finding is stronger and better grounded. When evidence conflicts, the researcher can sometimes reconcile the evidence through deeper probing of the meaning of the differences. At other times, this conflict exposes a spurious or random pattern, or biased thinking in the analysis. A variation of this tactic is to split the data into groups of cases, focusing on one group of cases initially, while later focusing on the remaining cases. Gersick (1988) used this tactic in separating the analyses of the student group cases from her other cases.
Overall, the idea behind these cross-case searching tactics is to force investigators to go beyond initial impressions, especially through the use of structured and diverse lenses on the data. These tactics improve the likelihood of accurate and reliable theory, that is, a theory with a close fit with the data. Also, cross-case searching tactics enhance the probability that the investigators will capture the novel findings which may exist in the data.
Shaping Hypotheses
From the within-site analysis plus various cross-site tactics and overall impressions, tentative themes, concepts, and possibly even relationships between variables begin to emerge. The next step of this highly iterative process is to compare systematically the emergent [Page 20] frame with the evidence from each case in order to assess how well or poorly it fits with case data. The central idea is that researchers constantly compare theory and data—iterating toward a theory which closely fits the data. A close fit is important to building good theory because it takes advantage of the new insights possible from the data and yields an empirically valid theory.
One step in shaping hypotheses is the sharpening of constructs. This is a two-part process involving (1) refining the definition of the construct and (2) building evidence which measures the construct in each case. This occurs through constant comparison between data and constructs so that accumulating evidence from diverse sources converges on a single, well-defined construct. For example, in their study of stigma management in bankruptcy, Sutton and Callahan (1987) developed constructs which described the reaction of customers and other parties to the declaration of bankruptcy by the focal firms. The iterative process involved data from multiple sources: initial semi-structured telephone conversations; interviews with key informants including the firm's president, other executives, a major creditor, and a lawyer; U.S. Bankruptcy Court records; observation of a creditors' meeting; and secondary source material including newspaper and magazine articles and firm correspondence. The authors iterated between constructs and these data. They eventually developed definitions and measures for several constructs: disengagement, bargaining for a more favorable exchange relationship, denigration via rumor, and reduction in the quality of participation.
This process is similar to developing a single construct measure from multiple indicators in hypothesis-testing research. That is, researchers use multiple sources of evidence to build construct measures, which define the construct and distinguish it from other constructs. In effect, the researcher is attempting to establish construct validity. The difference is that the construct, its definition, and measurement often emerge from the analysis process itself, rather than being specified a priori. A second difference is that no technique like factor analysis is available to collapse multiple indicators into a single construct measure. The reasons are that the indicators may vary across cases (i.e., not all cases may have all measures), and qualitative evidence (which is common in theory-building research) is difficult to collapse. Thus, many researchers rely on tables which summarize and tabulate the evidence underlying the construct (Miles & Huberman, 1984; [Page 21] Sutton & Callahan, 1987). For example, Table 1.3 is a tabular display of the evidence grounding the CEO power construct used by Eisenhardt and Bourgeois (1988), which included qualitative personality descriptions, quantitative scores from questionnaires, and quotation examples. The reasons for defining and building evidence for a construct apply in theory-building research just as they do in traditional, hypothesis-testing work. That is, careful construction of construct definitions and evidence produces the sharply defined, measurable constructs which are necessary for strong theory.
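To illustrate what such a tabular display might look like, the sketch below prints a construct-evidence table in the spirit of Table 1.3: for each case, it sets a quantitative indicator alongside qualitative evidence grounding a single construct (here, "CEO power"). All entries are invented placeholders; the point is the side-by-side display, not the content.

```python
# Hypothetical evidence rows for one construct across three cases.
evidence = [
    {"case": "Firm 1", "questionnaire_power": 8.2,
     "description": "dominates agenda", "quote": "He decides, we execute."},
    {"case": "Firm 2", "questionnaire_power": 4.1,
     "description": "consensus seeker", "quote": "We argue it out as a team."},
    {"case": "Firm 3", "questionnaire_power": 6.7,
     "description": "delegates selectively", "quote": "Big calls are his alone."},
]

header = f"{'Case':<8}{'Score':>7}  {'Qualitative description':<24}Illustrative quote"
print(header)
print("-" * len(header))
for row in evidence:
    print(f"{row['case']:<8}{row['questionnaire_power']:>7.1f}  "
          f"{row['description']:<24}{row['quote']}")
```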
A second step in shaping hypotheses is verifying that the emergent relationships between constructs fit with the evidence in each case. Sometimes a relationship is confirmed by the case evidence, while other times it is revised, disconfirmed, or thrown out for insufficient evidence. This verification process is similar to that in traditional hypothesis testing research. The key difference is that each hypothesis is examined for each case, not for the aggregate cases. Thus, the underlying logic is replication, that is, the logic of treating a series of cases as a series of experiments with each case serving to confirm or disconfirm the hypotheses (Yin, 1984). Each case is analogous to an experiment, and multiple cases are analogous to multiple experiments. This contrasts with the sampling logic of traditional, within-experiment, hypothesis-testing research in which the aggregate relationships across the data points are tested using summary statistics such as F values (Yin, 1984).
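As a minimal sketch of replication logic, the snippet below checks an emergent proposition against each case individually rather than against pooled statistics, and tallies which cases confirm it, disconfirm it, or lack sufficient evidence. The proposition and the per-case judgments are hypothetical stand-ins for the researcher's own case-by-case assessments.

```python
proposition = "Centralized CEO power is associated with greater political activity."

# Hypothetical case-by-case verdicts, each reached by reviewing that case's evidence.
case_judgments = {
    "Firm 1": "confirm",
    "Firm 2": "confirm",
    "Firm 3": "disconfirm",    # a candidate for refining or extending the theory
    "Firm 4": "insufficient",
}

tally = {"confirm": [], "disconfirm": [], "insufficient": []}
for case, verdict in case_judgments.items():
    tally[verdict].append(case)

print(proposition)
for verdict, case_list in tally.items():
    print(f"  {verdict:>12}: {case_list}")
# Disconfirming cases are not discarded; they prompt a closer look at the case
# and often a refinement of the emergent relationship.
```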
In replication logic, cases which confirm emergent relationships enhance confidence in the validity of the relationships. Cases which disconfirm the relationships often can provide an opportunity to refine and extend the theory. For example, in the study of the politics of strategic decision making, Eisenhardt and Bourgeois (1988) found a case which did not fit with the proposition that political coalitions have stable memberships. Further examination of this disconfirming case indicated that the executive team in this case had been newly formed at the time of the study. This observation plus replication in another case led to a refinement in the emergent theory to indicate that increasing stabilization of coalitions occurs over time.
At this point, the qualitative data are particularly useful for understanding why or why not emergent relationships hold. When a relationship is supported, the qualitative data often provide a good understanding of the dynamics underlying the relationship, that is, the [Page 22] “why” of what is happening. This is crucial to the establishment of internal validity. Just as in hypothesis-testing research, an apparent relationship may simply be a spurious correlation or may reflect the impact of some third variable on each of the other two. Therefore, it is important to discover the underlying theoretical reasons for why the relationship exists. This helps to establish the internal validity of the findings. For example, in her study of project groups, Gersick (1988) identified a midpoint transition in the lives of most project groups. She then used extensive qualitative data to understand the cognitive and motivational reasons why such abrupt and precisely timed transitions occur.
[Page 23] [Page 24] Overall, shaping hypotheses in theory-building research involves measuring constructs and verifying relationships. These processes are similar to traditional hypothesis-testing research. However, these processes are more judgmental in theory-building research because researchers cannot apply statistical tests such as an F statistic. The research team must judge the strength and consistency of relationships within and across cases and also fully display the evidence and procedures when the findings are published, so that readers may apply their own standards.
Enfolding Literature
An essential feature of theory building is comparison of the emergent concepts, theory, or hypotheses with the extant literature. This involves asking what is this similar to, what does it contradict, and why. A key to this process is to consider a broad range of literature.
Examining literature which conflicts with the emergent theory is important for two reasons. First, if researchers ignore conflicting findings, then confidence in the findings is reduced. For example, readers may assume that the results are incorrect (a challenge to internal validity), or if correct, are idiosyncratic to the specific cases of the study (a challenge to generalizability). Second and perhaps more importantly, conflicting literature represents an opportunity. The juxtaposition of conflicting results forces researchers into a more creative, framebreaking mode of thinking than they might otherwise be able to achieve. The result can be deeper insight into both the emergent theory and the conflicting literature, as well as sharpening of the limits to generalizability of the focal research. For example, in their study of strategy making at [Page 25] the National Film Board of Canada, Mintzberg and McHugh (1985) noted conflicts between their findings for this highly creative organization and prior results at Volkswagenwerk and other sites. In the earlier studies, they observed differences in the patterns of strategic change whereby periods of convergence were long and periods of divergence were short and very abrupt. In contrast, the National Film Board exhibited a pattern of regular cycles of convergence and divergence, coupled with a long-term trend toward greater diversity. This and other conflicts allowed these researchers to establish the unique features of strategy making in an “adhocracy” in relief against “machine bureaucracies” and “entrepreneurial firms.” The result was a sharper theory of strategy formation in all three types of organizations. Similarly, in a study of politics, Eisenhardt and Bourgeois (1988) contrasted the finding that centralized power leads to politics with the previous finding that decentralized power creates politics. These conflicting findings forced the probing of both the evidence and conflicting research to discover the underlying reasons for the conflict. An underlying similarity in the apparently dissimilar situations was found. That is, both power extremes create a climate of frustration, which leads to an emphasis on self-interest and ultimately politics. In these extreme situations, the “structure of the game” becomes an interpersonal competition among the executives. In contrast, the research showed that an intermediate power distribution fosters a sense of personal efficacy among executives and ultimately collaboration, not politics, for the good of the entire group. This reconciliation integrated the conflicting findings into a single theoretical perspective, and raised the theoretical level and generalizability of the results.
Literature discussing similar findings is important as well because it ties together underlying similarities in phenomena normally not associated with each other. The result is often a theory with stronger internal validity, wider generalizability, and higher conceptual level. For example, in her study of technological innovation in a major computer corporation, Leonard-Barton (1988) related her findings on the mutual adaptation of technology and the host organization to similar findings in the education literature. In so doing, Leonard-Barton strengthened the confidence that her findings were valid and generalizable because others had similar findings in a very different context. Also, the tie to mutual adaptation processes in the education setting sharpened and enriched the conceptual level of the study.
Similarly, Gersick (1988) linked the sharp midpoint transition in project group development to the more general punctuated equilibrium phenomenon, to the literature on the adult midlife transition, and to strategic transitions within organizations. This linkage with a variety of literature in other contexts raises the readers' confidence that Gersick had observed a valid phenomenon within her small number of project teams. It also allowed her to elevate the conceptual level of her findings to the more fundamental level of punctuated equilibrium, and strengthen their likely generalizability to other project teams. Finally, Burgelman (1983) strengthened the theoretical scope and validity of his work by tying his results on the process of new venture development in a large corporation to the selection arguments of population ecology. The result again was a higher conceptual level for his findings and enhanced confidence in their validity.
Overall, tying the emergent theory to existing literature enhances the internal validity, generalizability, and theoretical level of theory building from case study research. While linking results to the literature is important in most research, it is particularly crucial in theory-building research because the findings often rest on a very limited number of cases. In this situation, any further corroboration of internal validity or generalizability is an important improvement.
Reaching Closure
Two issues are important in reaching closure: when to stop adding cases, and when to stop iterating between theory and data. In the first, ideally, researchers should stop adding cases when theoretical saturation is reached. (Theoretical saturation is simply the point at which incremental learning is minimal because the researchers are observing phenomena seen before; Glaser & Strauss, 1967.) This idea is quite similar to ending the revision of a manuscript when the incremental improvement in its quality is minimal. In practice, theoretical saturation often combines with pragmatic considerations such as time and money to dictate when case collection ends. In fact, it is not uncommon for researchers to plan the number of cases in advance. For example, the Warwick group planned their study of strategic change and competitiveness in British firms to include eight firms (Pettigrew, 1988). This kind of planning may be necessary because of the availability of resources and because time constraints force researchers to develop [Page 27] cases in parallel. Finally, while there is no ideal number of cases, a number between 4 and 10 cases usually works well. With fewer than 4 cases, it is often difficult to generate theory with much complexity, and its empirical grounding is likely to be unconvincing, unless the case has several mini-cases within it, as did the Mintzberg and McHugh study of the National Film Board of Canada. With more than 10 cases, it quickly becomes difficult to cope with the complexity and volume of the data.
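A toy sketch of theoretical saturation as a stopping rule appears below: cases are added while each new case still contributes a non-trivial amount of new learning, and collection stops once the increment falls below a judgment-based threshold. The per-case counts and the threshold are invented solely to show the shape of the rule, not to suggest that saturation can be reduced to a single number.

```python
# Hypothetical count of new concepts contributed by each successive case.
new_concepts_per_case = [9, 7, 4, 3, 1, 0, 1]
SATURATION_THRESHOLD = 2   # "minimal" incremental learning, set by judgment

cases_used = 0
for count in new_concepts_per_case:
    cases_used += 1
    if count < SATURATION_THRESHOLD:
        break

print(f"Stop adding cases after case {cases_used}: incremental learning is minimal.")
```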
In the second closure issue, when to stop iterating between theory and data, again, saturation is the key idea. That is, the iteration process stops when the incremental improvement to theory is minimal. The final product of building theory from case studies may be concepts (e.g., the Mintzberg & Waters, 1982, deliberate and emergent strategies), a conceptual framework (e.g., Harris & Sutton's, 1986, framework of bankruptcy), or propositions or possibly mid-range theory (e.g., Eisenhardt & Bourgeois's, 1988, mid-range theory of politics in high velocity environments). On the downside, the final product may be disappointing. The research may simply replicate prior theory, or there may be no clear patterns within the data. The steps for building theory from case studies are summarized in Table 1.1.
Comparison with Other Literature
The process described here has similarities with the work of others. For example, I have drawn upon the ideas of theoretical sampling, theoretical saturation, and overlapped coding, data collection, and analysis from Glaser and Strauss (1967). Also, the notions of case study design, replication logic, and concern for internal validity have been incorporated from Yin (1984). The tools of tabular display of evidence from Miles and Huberman (1984) were particularly helpful in the discussion of building evidence for constructs.
However, the process described here has important differences from previous work. First, it is focused on theory building from cases. In contrast, with the exception of Glaser and Strauss (1967), previous work was centered on other topics such as qualitative data analysis (e.g., Miles, 1979; Miles & Huberman, 1984; Kirk & Miller, 1986), case study design (Yin, 1981, 1984; McClintock et al., 1979), and ethnography (Van Maanen, 1983). To a large extent, Glaser and Strauss (1967) [Page 28] focused on defending building theory from cases, rather than on how actually to do it. Thus, while these previous writings provide pieces of the process, they do not provide (nor do they intend to provide) a framework for theory building from cases as developed here.
Second, the process described here contributes new ideas. For example, the process includes a priori specification of constructs, population specification, flexible instrumentation, multiple investigators, cross-case analysis tactics, and several uses of literature. Their inclusion plus their illustration using examples from research studies and comparison with traditional, hypothesis-testing research synthesizes, extends, and adds depth to existing views of theory-building research.
Third, particularly in comparison with Strauss (1987) and Van Maanen (1988), the process described here adopts a positivist view of research. That is, the process is directed toward the development of testable hypotheses and theory which are generalizable across settings. In contrast, authors like Strauss and Van Maanen are more concerned that a rich, complex description of the specific cases under study evolve and they appear less concerned with development of generalizable theory.
The process of building theory from case study research is a strikingly iterative one. While an investigator may focus on one part of the process at a time, the process itself involves constant iteration backward and forward between steps. For example, an investigator may move from cross-case comparison, back to redefinition of the research question, and out to the field to gather evidence on an additional case. Also, the process is alive with tension between divergence into new ways of understanding the data and convergence onto a single theoretical framework. For example, the process involves the use of multiple investigators and multiple data collection methods as well as a variety of cross-case searching tactics. Each of these tactics involves viewing evidence from diverse perspectives. However, the process also involves converging on construct definitions, measures, and a framework for structuring the findings. Finally, the process described here is intimately tied with empirical evidence.
Strengths of Theory Building From Cases
One strength of theory building from cases is its likelihood of generating novel theory. Creative insight often arises from the juxtaposition of contradictory or paradoxical evidence (Cameron & Quinn, 1988). As Bartunek (1988) argued, the process of reconciling these contradictions forces individuals to reframe perceptions into a new gestalt. Building theory from case studies centers directly on this kind of juxtaposition. That is, attempts to reconcile evidence across cases, types of data, and different investigators, and between cases and literature increase the likelihood of creative reframing into a new theoretical vision. Although a myth surrounding theory building from case studies is that the process is limited by investigators' preconceptions, in fact, just the opposite is true. This constant juxtaposition of conflicting realities tends to “unfreeze” thinking, and so the process has the potential to generate theory with less researcher bias than theory built from incremental studies or armchair, axiomatic deduction.
A second strength is that the emergent theory is likely to be testable with constructs that can be readily measured and hypotheses that can be proven false. Measurable constructs are likely because they have already been measured during the theory-building process. The resulting hypotheses are likely to be verifiable for the same reason. That is, they have already undergone repeated verification during the theory-building process. In contrast, theory which is generated apart from direct evidence may have testability problems. For example, population ecology researchers borrowed the niche concept from biology. This construct has proven difficult to operationalize for many organizational researchers, other than its originators. One reason may be its obscure definition, which hampers measurability: “… that area in constraint space (the space whose dimensions are levels of resources, etc.) in which the population outcompetes all other local populations” (Hannan & Freeman, 1977, p. 947). One might ask: How do you measure an area in constraint space?
A third strength is that the resultant theory is likely to be empirically valid. The likelihood of valid theory is high because the theory-building process is so intimately tied with evidence that it is very likely that the resultant theory will be consistent with empirical observation. In well-executed theory-building research, investigators answer to the data from the beginning of the research. This closeness can lead to an [Page 30] intimate sense of things—“how they feel, smell, seem” (Mintzberg, 1979). This intimate interaction with actual evidence often produces theory which closely mirrors reality.
Weaknesses of Theory Building From Cases
However, some characteristics that lead to strengths in theory building from case studies also lead to weaknesses. For example, the intensive use of empirical evidence can yield theory which is overly complex. A hallmark of good theory is parsimony, but given the typically staggering volume of rich data, there is a temptation to build theory which tries to capture everything. The result can be theory which is very rich in detail, but lacks the simplicity of overall perspective. Theorists working from case data can lose their sense of proportion as they confront vivid, voluminous data. Since they lack quantitative gauges such as regression results or observations across multiple studies, they may be unable to assess which are the most important relationships and which are simply idiosyncratic to a particular case.
Another weakness is that building theory from cases may result in narrow and idiosyncratic theory. Case study theory building is a bottom-up approach such that the specifics of data produce the generalizations of theory. The risks are that the theory describes a very idiosyncratic phenomenon or that the theorist is unable to raise the level of generality of the theory. Indeed, many of the grounded case studies mentioned earlier resulted in modest theories. For example, Gersick (1988) developed a model of group development for teams with project deadlines, Eisenhardt and Bourgeois (1988) developed a mid-range theory of politics in high velocity environments, and Burgelman (1983) proposed a model of new product ventures in large corporations. Such theories are likely to be testable, novel, and empirically valid, but they do lack the sweep of theories like resource dependence, population ecology, and transaction cost. They are essentially theories about specific phenomena. To their credit, many of these theorists tie into broader theoretical issues such as adaptation, punctuated equilibrium, and bounded rationality, but ultimately they are not theories about organization in any grand sense. Perhaps “grand” theory requires multiple studies—an accumulation of both theory-building and theory-testing empirical studies.
Applicability
When is it appropriate to conduct theory-building case study research? In normal science, theory is developed through incremental empirical testing and extension (Kuhn, 1970). Thus, the theory-building process relies on past literature and empirical observation or experience as well as on the insight of the theorist to build incrementally more powerful theories. However, there are times when little is known about a phenomenon, current perspectives seem inadequate because they have little empirical substantiation, or they conflict with each other or common sense. Or, sometimes, serendipitous findings in a theory-testing study suggest the need for a new perspective. In these situations, theory building from case study research is particularly appropriate because theory building from case studies does not rely on previous literature or prior empirical evidence. Also, the conflict inherent in the process is likely to generate the kind of novel theory which is desirable when extant theory seems inadequate. For example, Van de Ven and Poole (in press) have argued that such an approach is especially useful for studying the new area of longitudinal change processes. In sum, building theory from case study research is most appropriate in the early stages of research on a topic or to provide freshness in perspective to an already researched topic.
How should theory-building research using case studies be evaluated? To begin, there is no generally accepted set of guidelines for the assessment of this type of research. However, several criteria seem appropriate. Assessment turns on whether the concepts, framework, or propositions that emerge from the process are “good theory.” After all, the point of the process is to develop or at least begin to develop theory. Pfeffer (1982) suggested that good theory is parsimonious, testable, and logically coherent, and these criteria seem appropriate here. Thus, a strong theory-building study yields good theory (that is, parsimonious, testable, and logically coherent theory) which emerges at the end, not beginning, of the study.
Second, the assessment of theory-building research also depends upon empirical issues: strength of method and the evidence grounding the theory. Have the investigators followed a careful analytical [Page 32] procedure? Does the evidence support the theory? Have the investigators ruled out rival explanations? Just as in other empirical research, investigators should provide information on the sample, data collection procedures, and analysis. Also, they should display enough evidence for each construct to allow readers to make their own assessment of the fit with theory. While there are no concise measures such as correlation coefficients or F values, nonetheless thorough reporting of information should give confidence that the theory is valid. Overall, as in hypothesis testing, a strong theory-building study has a good, although not necessarily perfect, fit with the data.
Finally, strong theory-building research should result in new insights. Theory building which simply replicates past theory is, at best, a modest contribution. Replication is appropriate in theory-testing research, but in theory-building research, the goal is new theory. Thus, a strong theory-building study presents new, perhaps frame-breaking, insights.
Conclusions
The purpose of this article is to describe the process of theory building from case studies. The process, outlined in Table 1.1, has features which range from selection of the research question to issues in reaching closure. Several conclusions emerge.
Theory developed from case study research is likely to have important strengths like novelty, testability, and empirical validity, which arise from the intimate linkage with empirical evidence. Second, given the strengths of this theory-building approach and its independence from prior literature or past empirical observation, it is particularly well-suited to new research areas or research areas for which existing theory seems inadequate. This type of work is highly complementary to incremental theory building from normal science research. The former is useful in early stages of research on a topic or when a fresh perspective is needed, while the latter is useful in later stages of knowledge. Finally, several guidelines for assessing the quality of theory building from case studies have been suggested. Strong studies are those which present interesting or framebreaking theories which meet [Page 33] the tests of good theory or concept development (e.g., parsimony, testability, logical coherence) and are grounded in convincing evidence.
Most empirical studies lead from theory to data. Yet, the accumulation of knowledge involves a continual cycling between theory and data. Perhaps this article will stimulate some researchers to complete the cycle by conducting research that goes in the less common direction from data to theory, and equally important, perhaps it will help others become informed consumers of the results.