Differences among Fixed-, Random-, and Mixed-Effects Models

It is easiest to begin with the simple case in which you are interested only in the mean effect size among a set of studies, both in identifying the mean effect size and in computing its standard error for inferential testing or for computing confidence intervals. Even in this simple case, there are a number of conceptual, analytic, and interpretive differences between fixed- and random-effects meta-analytic models (see also Hedges & Vevea, 1998; Kisamore & Brannick, 2008).

1. Conceptual Differences

The conceptual differences between fixed- and random-effects models can be illustrated through Figure 8.1, which I have reproduced in the top of Figure 10.1. As you recall, the top of Figure 10.1 displays effect sizes from five studies, all (or at least most) of which have confidence intervals that overlap with a single population effect size, now denoted with θ using traditional symbol conventions (e.g., Hedges & Vevea, 1998). This overlap with a single population effect size, with deviations of study effect sizes due only to sampling fluctuations (i.e., study-specific confidence intervals), represents the fixed-effects model of meta-analysis.

The bottom portion of Figure 10.1 displays the random-effects model. Here, the confidence intervals of the individual study effect sizes do not necessarily overlap with a single population effect size. Instead, they overlap with a distribution of population effect sizes. In other words, random-effects models conceptualize a population distribution of effect sizes, rather than a single effect size as in the fixed-effects model. In a random-effects model, you estimate not a single population effect size (θ), but rather a distribution of population effect sizes represented by a central tendency (μ) and standard deviation (τ).

2. Analytic Differences

These conceptual differences in fixed- versus random-effects models can also be expressed in equation form. These equations help us understand the computational differences between these two models, described in Section 10.2.

Equation 10.1 expresses this fixed-effects model of study effect sizes as a function of a population effect size and sampling error:

ES_i = θ + ε_i    (Equation 10.1)

In this fixed-effects model, the effect size for each study (ES_i) is assumed to be a function of two components: a single population effect size (θ) and the deviation of this study from this population effect size (ε_i). The population effect size is unknown but is estimated as the weighted average of effect sizes across studies (this is often one of the key values you want to obtain in your meta-analysis). The deviation of any one study's effect size from this population effect size (ε_i) is unknown and unknowable, but the distribution of these deviations across studies can be inferred from the standard errors of the studies. The test of heterogeneity (Chapter 8) is a test of the null hypothesis that this variability in deviations is no more than what you would expect given sampling fluctuations alone (i.e., homogeneity), whereas the alternative hypothesis is that these deviations are more than would be expected by sampling fluctuations alone (i.e., heterogeneity).
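To make this concrete, here is a minimal sketch (not from the book) of the fixed-effects computations described above: an inverse-variance weighted mean effect size, its standard error, and the Q test of homogeneity. The effect sizes and standard errors are hypothetical numbers invented for illustration.

```python
# Hypothetical effect sizes from five studies and their standard errors
es = [0.30, 0.45, 0.20, 0.35, 0.50]
se = [0.10, 0.15, 0.12, 0.08, 0.20]

# Fixed-effects weight for each study: inverse of its sampling variance
w = [1 / s**2 for s in se]

# Weighted mean effect size (the estimate of the single population theta)
mean_es = sum(wi * e for wi, e in zip(w, es)) / sum(w)

# Standard error of the weighted mean
se_mean = (1 / sum(w)) ** 0.5

# Q statistic: weighted sum of squared deviations from the mean.
# Under the homogeneity null hypothesis, Q follows a chi-square
# distribution with k - 1 degrees of freedom.
Q = sum(wi * (e - mean_es) ** 2 for wi, e in zip(w, es))
df = len(es) - 1
```

Comparing Q against a chi-square distribution with df degrees of freedom gives the heterogeneity test described in Chapter 8.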

I indicated in Chapter 9 that the presence of significant heterogeneity might prompt us to evaluate moderators to systematically explain this heterogeneity. An alternative approach would be to model this heterogeneity within a random-effects model. Conceptually, this approach involves estimating not only a mean population effect size, but also the variability in study effect sizes due to the population variability in effect sizes. These two estimates are shown in the bottom of Figure 10.1 as μ (mean population effect size) and τ (population variability in effect sizes). In equation form, this means that you would conceptualize each study effect size as arising from three sources:

ES_i = μ + ξ_i + ε_i    (Equation 10.2)

As shown by comparing the equations for fixed- versus random-effects models (Equation 10.1 vs. Equation 10.2, respectively), the critical difference is that the single parameter of the fixed-effects model, the single population effect size (θ), is decomposed into two parameters (the central tendency and study deviation, μ and ξ_i) in the random-effects model. As I describe in more detail in Section 10.2, the central tendency of this distribution of population effect sizes is best estimated by the weighted mean of effect sizes from the studies (though with a different weight than used in a fixed-effects model). The challenge of the random-effects model is to determine how much of the variability in each study's deviation from this mean is due to the distribution of population effect sizes (the ξ_i's, sometimes called the random-effects variance; e.g., Raudenbush, 1994) versus sampling fluctuations (the ε_i's, sometimes called the estimation variance). Although this cannot be determined for any single study, random-effects models allow you to partition this variability across the collection of studies in your meta-analysis. I describe these computations in Section 10.2.

3. Interpretive Differences

Before turning to these analyses, however, it is useful to think of the different interpretations that are justified when using fixed- versus random-effects models. Meta-analysts using fixed-effects models are only justified in drawing conclusions about the specific set of studies included in their meta-analysis (what are sometimes termed conditional inferences; e.g., Hedges & Vevea, 1998). In other words, if you use a fixed-effects model, you should limit your conclusions to statements of the “these studies find . . . ” type.

The use of random-effects models justifies inferences that generalize beyond the particular set of studies included in the meta-analysis to a population of potential studies of which those included are representative (what are sometimes termed unconditional inferences; Hedges & Vevea, 1998). In other words, random-effects models allow for more generalized statements of the “the literature finds . . . ” or even “there is this magnitude of association between X and Y” type (note the absence of any “these studies” qualifier). Although meta-analysts generally strive to be comprehensive in their inclusion of relevant studies in their meta-analyses (see Chapter 3), the truth is that there will almost always be excluded studies about which you still might wish to draw conclusions. These excluded studies include not only those that exist that you were not able to locate, but also similar studies that might be conducted in the future or even studies that contain unique permutations of methodology, sample, and measures that are similar to your sampled studies but simply have not been conducted.

I believe that most meta-analysts wish to make the latter, generalized statements (unconditional inferences) most of the time, so random-effects models are more appropriate. In fact, I often read meta-analyses in which the authors try to make these conclusions even when they used fixed-effects models; such conclusions are inappropriate. I recommend that you frame your conclusions carefully in ways that are appropriate given your statistical model (i.e., fixed- vs. random-effects), and consider the conclusions you wish to make when deciding between these models. I return to this and other considerations in selecting between fixed- and random-effects models in Section 10.5.

Source: Card Noel A. (2015), Applied Meta-Analysis for Social Science Research, The Guilford Press; Annotated edition.
