Comparing Case Studies with Other Research Methods in the Social Sciences

When and why would you want to do case studies on some topic? Should you consider doing an experiment instead? A survey? A history? An analysis of archival records, such as modeling economic trends or student performance in schools?

These and other choices represent different research methods. Each is a different way of collecting and analyzing empirical evidence, following its own logic. And each method has its own advantages and disadvantages. To get the most out of using the case study method, you need to appreciate these differences.

A common misconception is that the various research methods should be arrayed hierarchically. Many social scientists still deeply believe that case studies are only appropriate for the exploratory phase of an investigation, that surveys and histories are appropriate for the descriptive phase, and that experiments are the only way of doing explanatory or causal inquiries. This hierarchical view reinforces the idea that case studies are only a preliminary research method and cannot be used to describe or test propositions.

This hierarchical view, however, may be questioned. Experiments with an exploratory motive have certainly always existed. In addition, the development of causal explanations has long been a serious concern of historians, reflected by the subfield known as historiography. Likewise, case studies are far from being only an exploratory strategy. Some of the best and most famous case studies have been explanatory case studies (e.g., see BOX 1 for a vignette on Allison and Zelikow’s Essence of Decision: Explaining the Cuban Missile Crisis, 1999). Similarly, famous descriptive case studies are found in major disciplines such as sociology and political science (e.g., see BOX 2 for two vignettes). Additional examples of explanatory case studies are presented in their entirety in a companion book cited throughout this text (Yin, 2003, chaps. 4-7). Examples of descriptive case studies are similarly found there (Yin, 2003, chaps. 2 and 3).

Distinguishing among the various research methods and their advantages and disadvantages may require going beyond the hierarchical stereotype. The more appropriate view may be an inclusive and pluralistic one: Every research method can be used for all three purposes—exploratory, descriptive, and explanatory. There may be exploratory case studies, descriptive case studies, or explanatory case studies. Similarly, there may be exploratory experiments, descriptive experiments, and explanatory experiments. What distinguishes the different methods is not a hierarchy but three important conditions discussed below. As an important caution, however, the clarification does not imply that the boundaries between the methods—or the occasions when each is to be used—are always sharp. Even though each method has its distinctive characteristics, there are large overlaps among them. The goal is to avoid gross misfits—that is, when you are planning to use one type of method but another is really more advantageous.

1. When to Use Each Method

The three conditions consist of (a) the type of research question posed, (b) the extent of control an investigator has over actual behavioral events, and (c) the degree of focus on contemporary as opposed to historical events. Figure 1.1 displays these three conditions and shows how each is related to the five major research methods being discussed: experiments, surveys, archival analyses, histories, and case studies. The importance of each condition, in distinguishing among the five methods, is as follows.

Types of research questions (Figure 1.1, column 1). The first condition covers your research question(s) (Hedrick, Bickman, & Rog, 1993). A basic categorization scheme for the types of questions is the familiar series: “who,” “what,” “where,” “how,” and “why” questions.

If research questions focus mainly on “what” questions, either of two possibilities arises. First, some types of “what” questions are exploratory, such as “What can be learned from a study of a startup business?” This type of question is a justifiable rationale for conducting an exploratory study, the goal being to develop pertinent hypotheses and propositions for further inquiry. However, as an exploratory study, any of the five research methods can be used—for example, an exploratory survey (testing, for instance, the ability to survey startups in the first place), an exploratory experiment (testing, for instance, the potential benefits of different kinds of incentives), or an exploratory case study (testing, for instance, the importance of differentiating “first-time” startups from startups by entrepreneurs who had previously started other firms).

The second type of “what” question is actually a form of a “how many” or “how much” line of inquiry—for example, “What have been the ways that communities have assimilated new immigrants?” Identifying such ways is more likely to favor survey or archival methods than others. For example, a survey can be readily designed to enumerate the “what,” whereas a case study would not be an advantageous method in this situation.

Similarly, like this second type of “what” question, “who” and “where” questions (or their derivatives—“how many” and “how much”) are likely to favor survey methods or the analysis of archival data, as in economic studies. These methods are advantageous when the research goal is to describe the incidence or prevalence of a phenomenon or when it is to be predictive about certain outcomes. The investigation of prevalent political attitudes (in which a survey or a poll might be the favored method) or of the spread of a disease like AIDS (in which an epidemiologic analysis of health statistics might be the favored method) would be typical examples.

In contrast, “how” and “why” questions are more explanatory and likely to lead to the use of case studies, histories, and experiments as the preferred research methods. This is because such questions deal with operational links needing to be traced over time, rather than mere frequencies or incidence. Thus, if you wanted to know how a community successfully overcame the negative impact of the closing of its largest employer—a military base (see Bradshaw, 1999, also presented in BOX 26, Chapter 5, p. 138)—you would be less likely to rely on a survey or an examination of archival records and might be better off doing a history or a case study. Similarly, if you wanted to know how research investigators may possibly (but unknowingly) bias their research, you could design and conduct a series of experiments (see Rosenthal, 1966).

Let us take two more examples. If you were studying “who” had suffered as a result of terrorist acts and “how much” damage had been done, you might survey residents, examine government records (an archival analysis), or conduct a “windshield survey” of the affected area. In contrast, if you wanted to know “why” the act had occurred, you would have to draw upon a wider array of documentary information, in addition to conducting interviews; if you focused on the “why” question in more than one terrorist act, you would probably be doing a multiple-case study.

Similarly, if you wanted to know “what” the outcomes of a new governmental program had been, you could answer this question by doing a survey or by examining economic data, depending upon the type of program involved. Questions—such as “How many clients did the program serve?” “What kinds of benefits were received?” “How often were different benefits produced?”—all could be answered without doing a case study. But if you needed to know “how” or “why” the program had worked (or not), you would lean toward either a case study or a field experiment.

To summarize, the first and most important condition for differentiating among the various research methods is to classify the type of research question being asked. In general, “what” questions may either be exploratory (in which case, any of the methods could be used) or about prevalence (in which surveys or the analysis of archival records would be favored). “How” and “why” questions are likely to favor the use of case studies, experiments, or histories.
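The heuristic in this summary can be sketched as a simple lookup. This is an illustration of my own, not from the book: the function name and category keys are invented, but the mapping itself paraphrases the summary above.

```python
# Illustrative sketch (not from the book): the chapter's question-type
# heuristic encoded as a lookup table. Category names are my own labels.

FAVORED_METHODS = {
    # Exploratory "what" questions: any of the five methods can serve.
    "exploratory-what": ["experiment", "survey", "archival analysis",
                         "history", "case study"],
    # "Who," "where," "how many," and "how much" (prevalence) questions
    # favor enumeration-oriented methods.
    "prevalence": ["survey", "archival analysis"],
    # "How" and "why" questions trace operational links over time.
    "how-why": ["case study", "history", "experiment"],
}

def favored_methods(question_type: str) -> list[str]:
    """Return the methods the heuristic favors for a given question type."""
    return FAVORED_METHODS.get(question_type, [])

print(favored_methods("how-why"))
# ['case study', 'history', 'experiment']
```

The lookup deliberately returns multiple methods per question type, mirroring the chapter's caution that the boundaries overlap and the form of the question only provides a clue, not a verdict.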

EXERCISE 1.1 Defining a Case Study Question

Develop a “how” or “why” question that would be the rationale for a case study that you might conduct. Instead of doing a case study, now imagine that you only could do a history, a survey, or an experiment (but not a case study) in order to answer this question. What would be the distinctive advantage of doing a case study, compared to these other methods, in order to answer this question?

Defining the research questions is probably the most important step to be taken in a research study, so you should be patient and allow sufficient time for this task. The key is to understand that your research questions have both substance—for example, What is my study about?—and form—for example, am I asking a “who,” “what,” “where,” “why,” or “how” question? Others have focused on some of the substantively important issues (see J. P. Campbell, Daft, & Hulin, 1982); the point of the preceding discussion is that the form of the question can provide an important clue regarding the appropriate research method to be used. Remember, too, the large areas of overlap among the methods, so that, for some questions, a choice among methods might actually exist. Be aware, finally, that you (or your academic department) may be predisposed to favor a particular method regardless of the study question. If so, be sure to create the form of the study question best matching the method you were predisposed to favor in the first place.

EXERCISE 1.2 Identifying the Research Questions Covered When Other Research Methods Are Used

Locate a research study based solely on the use of survey, historical, or experimental (but not case study) methods. Identify the research question(s) addressed by the study. Does the type of question differ from those that might have appeared as part of a case study on the same topic, and if so, how?

Extent of control over behavioral events (Figure 1.1, column 2) and degree of focus on contemporary as opposed to historical events (Figure 1.1, column 3). Assuming that “how” and “why” questions are to be the focus of study, a further distinction among history, case study, and experiment is the extent of the investigator’s control over and access to actual behavioral events. Histories are the preferred method when there is virtually no access or control. The distinctive contribution of the historical method is in dealing with the “dead” past—that is, when no relevant persons are alive to report, even retrospectively, what occurred and when an investigator must rely on primary documents, secondary documents, and cultural and physical artifacts as the main sources of evidence. Histories can, of course, be done about contemporary events; in this situation, the method begins to overlap with that of the case study.

The case study is preferred in examining contemporary events, but when the relevant behaviors cannot be manipulated. The case study relies on many of the same techniques as a history, but it adds two sources of evidence not usually included in the historian’s repertoire: direct observation of the events being studied and interviews of the persons involved in the events. Again, although case studies and histories can overlap, the case study’s unique strength is its ability to deal with a full variety of evidence—documents, artifacts, interviews, and observations—beyond what might be available in a conventional historical study. Moreover, in some situations, such as participant-observation (see Chapter 4), informal manipulation can occur.

Finally, experiments are done when an investigator can manipulate behavior directly, precisely, and systematically. This can occur in a laboratory setting, in which an experiment may focus on one or two isolated variables (and presumes that the laboratory environment can “control” for all the remaining variables beyond the scope of interest), or it can be done in a field setting, where the term field or social experiment has emerged to cover research where investigators “treat” whole groups of people in different ways, such as providing them with different kinds of vouchers to purchase services (Boruch & Foley, 2000). Again, the methods overlap. The full range of experimental science also includes those situations in which the experimenter cannot manipulate behavior but in which the logic of experimental design still may be applied. These situations have been commonly regarded as “quasi-experimental” situations (e.g., D. T. Campbell & Stanley, 1966; Cook & Campbell, 1979) or “observational” studies (e.g., P. R. Rosenbaum, 2002). The quasi-experimental approach even can be used in a historical setting, where, for instance, an investigator may be interested in studying race riots or lynchings (see Spilerman, 1971) and use a quasi-experimental design because no control over the behavioral event was possible. In this case, the experimental method begins to overlap with histories.

In the field of evaluation research, Boruch and Foley (2000) have made a compelling argument for the practicality of one type of field experiment—randomized field trials. The authors maintain that the field trials design, emulating the design of laboratory experiments, can be and has been used even when evaluating complex community initiatives. However, you should be cautioned about the possible limitations of this design.

In particular, the design may work well when, within a community, individual consumers or users of services are the unit of analysis. Such a situation would exist if a community intervention consisted, say, of a health promotion campaign and the outcome of interest was the incidence of certain illnesses among the community’s residents. The random assignment might designate a few communities to have the campaign, compared to a few that did not, and the outcomes would compare the condition of the residents in both sets of communities.

In many community studies, however, the outcomes of interest and therefore the appropriate unit of analysis are at the community or collective level and not at the individual level. For instance, efforts to upgrade neighborhoods may be concerned with improving a neighborhood’s economic base (e.g., the number of jobs per residential population). Now, although the candidate communities still can be randomly assigned, the degrees of freedom in any later statistical analysis are limited by the number of communities rather than the number of residents. Most field experiments will not be able to support the participation of a sufficiently large number of communities to overcome the severity of the subsequent statistical constraints.
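A small arithmetic sketch can make this constraint concrete. This is my own illustration, not from the book, and the sample sizes are hypothetical: in a simple pooled two-sample t-test, the degrees of freedom equal the number of independent units minus two, so the choice of unit of analysis directly determines the statistical power available.

```python
# Illustrative sketch (hypothetical numbers): when communities, rather than
# residents, are the unit of analysis, the degrees of freedom of a pooled
# two-sample t-test shrink to n_units - 2, regardless of how many residents
# live within the assigned communities.

def two_group_df(n_units: int) -> int:
    """Degrees of freedom for a pooled two-sample t-test on n_units units."""
    return n_units - 2

residents = 5000    # individuals as the unit of analysis
communities = 10    # communities as the unit of analysis

print(two_group_df(residents))    # 4998 -> ample degrees of freedom
print(two_group_df(communities))  # 8    -> severe statistical constraint
```

With only eight degrees of freedom, detecting anything but a very large community-level effect is unlikely, which is the "severity of the subsequent statistical constraints" the paragraph above describes.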

The limitations when communities or collective entities are the unit of analysis are extremely important because many public policy objectives focus on the collective rather than individual level. For instance, the thrust of federal education policy in the early 2000s focused on school performance. Schools were held accountable for year-to-year performance even though the composition of the students enrolled at the schools changed each year. Creating and implementing a field trial based on a large number of schools, as opposed to a large number of students, would present an imposing challenge and the need for extensive research resources. In fact, Boruch (2007) found that a good number of the randomized field trials inadvertently used the incorrect unit of analysis (individuals rather than collectives), thereby making the findings from the trials less usable.

Field experiments with a large number of collective entities (e.g., neighborhoods, schools, or organizations) also raise a number of practical challenges:

  • any randomly selected control sites may adopt important components of the intervention of interest before the end of the field experiment and no longer qualify as “no-treatment” sites;
  • the funded intervention may call for the experimental communities to reorganize their entire manner of providing certain services—that is, a “systems” change—thereby creating site-to-site variability in the unit of assignment (the experimental design assumes that the unit of assignment is the same at every site, both intervention and control);
  • the same systems change aspect of the intervention also may mean that the organizations or entities administering the intervention may not necessarily remain stable over the course of time (the design requires such stability until the random field trials have been completed); and
  • the experimental or control sites may be unable to continue using the same instruments and measures (the design, which will ultimately “group” the data to compare intervention sites as a group with comparison sites as a second group, requires common instruments and measures across sites).

The existence of any of these conditions will likely lead to the need to find alternatives to randomized field trials.

Summary. You should be able to identify some situations in which all research methods might be relevant (such as exploratory research) and other situations in which two methods might be considered equally attractive. You also can use multiple methods in any given study (for example, a survey within a case study or a case study within a survey). To this extent, the various methods are not mutually exclusive. But you should also be able to identify some situations in which a specific method has a distinct advantage. For the case study, this is when

  • A “how” or “why” question is being asked about
    • a contemporary set of events,
    • over which the investigator has little or no control.

Determining the questions that are most significant for a topic, as well as gaining some precision in formulating these questions, requires much preparation. One way is to review the literature on the topic (Cooper, 1984). Note that such a literature review is therefore a means to an end, and not—as many people have been taught to think—an end in itself. Novices may think that the purpose of a literature review is to determine the answers about what is known on a topic; in contrast, experienced investigators review previous research to develop sharper and more insightful questions about the topic.

2. Traditional Prejudices against the Case Study Method

Although the case study is a distinctive form of empirical inquiry, many research investigators nevertheless disdain the strategy. In other words, as a research endeavor, case studies have been viewed as a less desirable form of inquiry than either experiments or surveys. Why is this?

Perhaps the greatest concern has been over the lack of rigor of case study research. Too many times, the case study investigator has been sloppy, has not followed systematic procedures, or has allowed equivocal evidence or biased views to influence the direction of the findings and conclusions. Such lack of rigor is less likely to be present when using the other methods—possibly because of the existence of numerous methodological texts providing investigators with specific procedures to be followed. In contrast, only a small (though increasing) number of texts besides the present one cover the case study method in similar fashion.

The possibility also exists that people have confused case study teaching with case study research. In teaching, case study materials may be deliberately altered to demonstrate a particular point more effectively (e.g., Garvin, 2003). In research, any such step would be strictly forbidden. Every case study investigator must work hard to report all evidence fairly, and this book will help her or him to do so. What is often forgotten is that bias also can enter into the conduct of experiments (see Rosenthal, 1966) and the use of other research methods, such as designing questionnaires for surveys (Sudman & Bradburn, 1982) or conducting historical research (Gottschalk, 1968). The problems are not different, but in case study research, they may have been more frequently encountered and less frequently overcome.

EXERCISE 1.3 Examining Case Studies Used for Teaching Purposes

Obtain a copy of a case study designed for teaching purposes (e.g., a case in a textbook used in a business school course). Identify the specific ways in which this type of “teaching” case is different from research case studies.

Does the teaching case cite primary documents, contain evidence, or display data? Does the teaching case have a conclusion? What appears to be the main objective of the teaching case?

A second common concern about case studies is that they provide little basis for scientific generalization. “How can you generalize from a single case?” is a frequently heard question. The answer is not simple (Kennedy, 1976). However, consider for the moment that the same question had been asked about an experiment: “How can you generalize from a single experiment?” In fact, scientific facts are rarely based on single experiments; they are usually based on a multiple set of experiments that have replicated the same phenomenon under different conditions. The same approach can be used with multiple-case studies but requires a different concept of the appropriate research designs, discussed in detail in Chapter 2. The short answer is that case studies, like experiments, are generalizable to theoretical propositions and not to populations or universes. In this sense, the case study, like the experiment, does not represent a “sample,” and in doing a case study, your goal will be to expand and generalize theories (analytic generalization) and not to enumerate frequencies (statistical generalization). Or, as three notable social scientists describe in their single case study done years ago, the goal is to do a “generalizing” and not a “particularizing” analysis (Lipset, Trow, & Coleman, 1956, pp. 419-420).

A third frequent complaint about case studies is that they take too long, and they result in massive, unreadable documents. This complaint may be appropriate, given the way case studies have been done in the past (e.g., Feagin, Orum, & Sjoberg, 1991), but this is not necessarily the way case studies—yours included—must be done in the future. Chapter 6 discusses alternative ways of writing the case study—including ones in which the traditional, lengthy narrative can be avoided altogether. Nor need case studies take a long time. This incorrectly confuses the case study method with a specific method of data collection, such as ethnography (e.g., Fetterman, 1989) or participant-observation (e.g., Jorgensen, 1989). Ethnographies usually require long periods of time in the “field” and emphasize detailed, observational evidence. Participant-observation may not require the same length of time but still assumes a hefty investment of field efforts. In contrast, case studies are a form of inquiry that does not depend solely on ethnographic or participant-observer data. You could even do a valid and high-quality case study without leaving the telephone or Internet, depending upon the topic being studied.

A fourth possible objection to case studies has seemingly emerged with the renewed emphasis, especially in education and related research, on randomized field trials or “true experiments.” Such studies aim to establish causal relationships—that is, whether a particular “treatment” has been efficacious in producing a particular “effect” (e.g., Jadad, 1998). In the eyes of many, the emphasis has led to a downgrading of case study research because case studies (and other types of nonexperimental methods) cannot directly address this issue.

Overlooked has been the possibility that case studies can offer important evidence to complement experiments. Some noted methodologists suggest, for instance, that experiments, though establishing the efficacy of a treatment (or intervention), are limited in their ability to explain “how” or “why” the treatment necessarily worked, whereas case studies could investigate such issues (e.g., Shavelson & Towne, 2002, pp. 99-106). Case studies may therefore be valued “as adjuncts to experiments rather than as alternatives to them” (Cook & Payne, 2002). In clinical psychology, a “large series of single case studies,” confirming predicted behavioral changes after the initiation of treatment, even may provide additional evidence of efficaciousness (e.g., Veerman & van Yperen, 2007).

Despite the fact that these four common concerns can be allayed, as above, one major lesson is that good case studies are still difficult to do. The problem is that we have little way of screening for an investigator’s ability to do good case studies. People know when they cannot play music; they also know when they cannot do mathematics beyond a certain level, and they can be tested for other skills, such as the bar examination in law. Somehow, the skills for doing good case studies have not yet been formally defined. As a result, “most people feel that they can prepare a case study, and nearly all of us believe we can understand one. Since neither view is well founded, the case study receives a good deal of approbation it does not deserve” (Hoaglin, Light, McPeek, Mosteller, & Stoto, 1982, p. 134). This quotation is from a book by five prominent statisticians. Surprisingly, from another field, even they recognize the challenge of doing good case studies.

Source: Yin, Robert K. (2008), Case Study Research: Design and Methods, SAGE Publications, Inc; 4th edition.
