Practical Matters: The Interplay between Meta-Analytic Models and Theory

As with any data-analytic approach, meta-analytic techniques are most valuable when applied in the service of theories relevant to the content of your review. I place this discussion of the interplay between meta-analysis and theory in this chapter on multivariate meta-analysis because many of our theories are multivariate and therefore benefit from multivariate analyses. However, consideration of theory is important for any meta-analysis—univariate or multivariate—just as it is for any form of data analysis in primary research.

A full philosophical consideration of what constitutes a “theory” lies far beyond the scope of this book. Instead, I next frame my discussion of the interplay between theories and meta-analytic results in terms of the metaphor of a “nomological net” (called a “nomological network” by Cronbach & Meehl, 1955). In this metaphor, the knots of the net represent constructs, and the webbing among the knots represents associations among the constructs. The coverage of the net represents the scope of the theory in terms of the phenomena the theory attempts to explain. Theory specifies expectations for this net in terms of what the knots are (i.e., what constructs are relevant); the webbing among the knots (i.e., what directions and magnitudes of associations among the constructs are expected); and the coverage of the net (i.e., what, when, and for whom the theory is applicable). Different theories may specify nets that differ in terms of their knots, webbing, and coverage; in fact, potentially infinite nets (theories) could be specified.14 Thus, theory informs your meta-analysis in the very fundamental ways of specifying the constructs you consider (i.e., your definition of constructs of interest), the associations you investigate (i.e., the effect sizes you meta-analyze), and the scope of your meta-analysis (i.e., the breadth of samples and designs included; the inclusion criteria; see Chapter 3).15

Having described how theory guides your meta-analysis, I next turn to how your meta-analysis can evaluate theories. I organize this consideration around the three pieces of the nomological net metaphor: constructs (knots), associations (webbing), and scope (coverage). Following this consideration of how meta-analysis can evaluate theories, I then turn to the topic of model evaluation and building with multivariate meta-analysis.

1. Evaluating Variables and Constructs to Inform Concepts

It is useful to consider the indirect way by which theories inform measurement in science (for more in-depth treatments, see, e.g., Britt, 1997; Jaccard & Jacoby, 2010). When theories describe things, the things that they describe are concepts. Concepts are the most abstract representation of something—the ideas we hold in our minds that a thing exists. For example, any layperson will have a concept of what aggression is. Well-articulated theories go further than abstract concepts to articulate constructs, which are more specifically defined instances of the concept. For example, an aggression scholar might define the construct of aggression “as behavior that is aimed at harming or injuring another person or persons” (Parke & Slaby, 1983, p. 550). Such a definition of a construct is explicit in terms of what lies within and outside of the boundaries (e.g., an accident that injures someone is not aggression because that was not the “aim”). Constructs might be hierarchically organized; for instance, the construct of “aggression” might encompass more specific constructs such as “relational aggression” and “overt aggression,” as I consider in the illustrative example of this chapter. Theories may differ in terms of whether they focus on separable lower-order constructs (within the nomological net metaphor: multiple knots) or singular higher-order constructs (a single, larger knot in the net).

Despite their specificity, constructs cannot be directly studied. Instead, a primary research study must use variables, which are rules for assigning numbers that we think reasonably capture the level of the construct. These variables might be single items (e.g., frequency of punching) or the aggregation of multiple items (frequency of punching, calling names, and spreading rumors). They may have either meaningful (e.g., number of times observed in a week) or arbitrary (e.g., a 5-point Likert-type scale) metrics. They may have different levels of measurement, ranging from continuous (e.g., number of times a child is observed enacting aggression), to ordinal (e.g., a child’s average score among multiple Likert-type items), to dichotomous (e.g., the presence versus absence of a field note recording a child’s aggression). Regardless, variables are the researcher’s rule-bound system of assigning values to represent constructs. However, there are an infinite number of variables (i.e., ways of assigning values) that could represent a construct, and every primary study will need to select a limited subset of these variables.

Meta-analysis is a powerful tool to evaluate variables and constructs to inform theoretical concepts. As mentioned, any single primary study must select a limited subset of variables; however, the collection of studies likely contains a wider range of variables. Meta-analytic combination of these multiple studies—each containing a subset of variables representing the construct—will provide a more comprehensive statement of the construct itself. This is especially true if (1) the individual studies use a small subset of variables, but the collection of studies contains many subsets with low overlap so as to provide coverage of many ways to measure the construct; and (2) you correct for artifacts so as to eliminate less interesting heterogeneity across methods of measurement (e.g., correcting for unreliability). Tests of moderation across approaches to measuring variables can also inform whether some approaches are better representations of the construct than others.

Furthermore, meta-analysis can clarify the hierarchical relations among constructs by informing the magnitude of association among constructs that might be theoretically separable (or not). For example, I provided the example of a hierarchical organization of the construct of aggression, which might be separated into relational and overt forms (i.e., two lower-order constructs) on theoretical grounds. Meta-analysis can inform whether the constructs are indeed separate by combining correlations from studies containing variables representing these constructs. If the correlation is not different from 1.0 (or −1.0 for constructs that might be conceptualized as opposite ends of a single continuum), then differentiation of the constructs is not supported; however, if the confidence interval of the correlation does not include ±1.0, then this is evidence supporting their differentiation.16 For instance, in the full, artifact-corrected meta-analysis of 98 studies reporting associations between relational and overt aggression (this differs from the limited illustrative example above; see Card et al., 2008), we found an average correlation of .76 with a 95% confidence interval ranging from .72 to .79, supporting the separate nature of these two constructs.
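To make this test concrete, the following is a minimal sketch in Python (with hypothetical correlations and sample sizes, not the actual Card et al., 2008 data) of pooling study correlations on the Fisher’s z scale and then checking whether the back-transformed 95% confidence interval excludes 1.0.

```python
# Minimal sketch: pool correlations via Fisher's z (fixed-effect weights) and
# check whether the 95% CI excludes 1.0, which would support treating two
# putatively distinct constructs as empirically separable.
import numpy as np
from scipy import stats

r = np.array([0.70, 0.78, 0.81, 0.65])   # study correlations (hypothetical)
n = np.array([120, 200, 95, 150])        # study sample sizes (hypothetical)

z = np.arctanh(r)                        # Fisher's z transformation
w = n - 3                                # inverse-variance weights (var of z = 1/(n - 3))
z_bar = np.sum(w * z) / np.sum(w)        # weighted mean of z
se = np.sqrt(1.0 / np.sum(w))            # standard error of the mean z

crit = stats.norm.ppf(0.975)
r_bar, r_lo, r_hi = np.tanh([z_bar, z_bar - crit * se, z_bar + crit * se])
print(f"mean r = {r_bar:.2f}, 95% CI [{r_lo:.2f}, {r_hi:.2f}]")
# If the upper CI limit is below 1.0, the evidence is consistent with
# differentiation of the two constructs.
```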

2. Evaluating Associations

As I mentioned in Chapter 5, the most common effect sizes used in meta-analyses are two-variable associations, which can be considered between two continuous variables (e.g., r), between a dichotomous grouping variable and a continuous variable (e.g., g), or between two dichotomous variables (e.g., o). These associations represent the webbing of the nomological net.

If well articulated, theories should offer hypotheses about the presence, direction, and strength of various associations among constructs. These hypotheses can be tested directly in a meta-analysis by combining all available empirical evidence. Meta-analytic synthesis provides an authoritative (in that it includes all available empirical evidence) and usually precise (if a large number of studies or studies with large samples are included) estimate of the presence, direction, and magnitude of these associations, and thus plays a key role in evaluating hypothesized associations derived from a theory. If you correct for artifacts (see Chapter 6), then it is possible to summarize and evaluate associations among constructs, which are more closely linked to theoretically derived hypotheses than are potentially imperfectly measured variables, as I described earlier.
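As a small illustration of such an artifact correction, here is a minimal sketch (in Python, with invented values) of the classic correction for attenuation due to unreliability, which rescales an observed variable-level correlation toward the construct-level association.

```python
# Minimal sketch: correct an observed correlation for unreliability in both
# measures (correction for attenuation). All values are hypothetical.
import math

r_obs = 0.40   # observed correlation between the two variables
rxx = 0.80     # reliability of measure X (e.g., coefficient alpha)
ryy = 0.75     # reliability of measure Y

attenuation = math.sqrt(rxx * ryy)        # joint attenuation factor
r_corrected = r_obs / attenuation         # estimated construct-level correlation
print(f"corrected r = {r_corrected:.2f}") # approximately .52 here

# Note: the sampling variance of the corrected correlation is larger by the
# factor 1 / (rxx * ryy), so corrected estimates receive less weight when pooled.
```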

A focus on associations can also help inform the structure of constructs specified by theories. I described in the previous section how meta-analysis can be used to evaluate whether lower-order constructs can be separated (i.e., the correlation between them is smaller than ±1.0). Meta-analysis can also tell us if it is useful to separate constructs by evaluating whether they differentially relate to other constructs. If there is no evidence supporting differential relations to relevant constructs,17 then the separation is not useful even if it is possible (i.e., even if the correlation between the constructs is not ±1.0), whereas differential associations would indicate that the separation of the constructs is both possible and useful. In the meta-analysis of relational and overt aggression, my colleagues and I evaluated associations with six constructs, finding differential relations for each and thus supporting the usefulness of separating these constructs.
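One simple way to probe such differential relations is to compare two pooled correlations on the Fisher’s z scale. The sketch below (Python, hypothetical estimates) treats the two pooled estimates as independent, a simplifying assumption; a fuller analysis would account for the dependency that arises when the same studies contribute to both estimates.

```python
# Minimal sketch: test whether construct A and construct B relate differently
# to the same criterion by comparing two pooled correlations. Hypothetical
# values; independence of the two estimates is assumed for simplicity.
import numpy as np
from scipy import stats

r_a, se_z_a = 0.45, 0.03   # pooled r of construct A with criterion; SE on z scale
r_b, se_z_b = 0.20, 0.04   # pooled r of construct B with criterion; SE on z scale

diff = np.arctanh(r_a) - np.arctanh(r_b)   # difference of Fisher's z values
se_diff = np.sqrt(se_z_a**2 + se_z_b**2)
z_stat = diff / se_diff
p = 2 * stats.norm.sf(abs(z_stat))
print(f"z = {z_stat:.2f}, p = {p:.4f}")    # small p suggests differential relations
```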

Most meta-analyses will only evaluate one or a small number of these hypotheses. Because most useful theories will specify numerous associations (typically more than could be evaluated in a single meta-analysis), a single meta-analysis is unlikely to definitively confirm or refute a theory. Through many separate meta-analyses evaluating different sections of the webbing of the net, however, meta-analysis provides a cumulative approach to gathering evidence for or against a theory.

3. Evaluating Scope

In the metaphor of the nomological net, the coverage (size and location) of the net represents the scope of phenomena the theory attempts to explain. As I mentioned in Section 12.3.2, a series of meta-analyses can inform empirical support for a theory across this scope, thus showing which sections of the net are sound versus in need of repair.

Meta-analysis can also inform the scope of a theory through moderator analyses. As you recall from Chapter 9, moderator analyses tell us whether the strength, presence, or even direction of associations differs across different types of samples and methodologies used by studies. Theories predicting universal associations would lead to expectations that associations (i.e., the webbing in the net) are consistent across a wide sampling or methodological scope, and therefore moderation is not expected.18 If moderation is found through meta-analysis, then the theory might need to be limited or modified to account for this nuance in scope. In contrast, some theories explicitly predict changes in associations.19 Evaluating moderation within a meta-analysis, in which studies may vary more in their sample or methodological features than is often possible in a single study, provides a powerful evaluation of the scope of theories. However, you should still be aware of the samples and methodologies represented among the studies of your meta-analysis in order to accurately describe the scope that you can evaluate versus that which is still uncertain.
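The following is a minimal sketch (Python, hypothetical data) of one common form of moderator analysis: a categorical subgroup comparison in which total heterogeneity is partitioned into between-group and within-group components and the between-group portion is tested against a chi-square distribution.

```python
# Minimal sketch: categorical moderator analysis partitioning heterogeneity
# into Q_between and Q_within across two subgroups of studies (e.g., child
# vs. adult samples). All effect sizes and weights are hypothetical.
import numpy as np
from scipy import stats

def pooled(z, w):
    """Weighted mean effect and Q statistic for one group of studies."""
    zbar = np.sum(w * z) / np.sum(w)
    q = np.sum(w * (z - zbar) ** 2)
    return zbar, q

# Fisher's z effect sizes and inverse-variance weights for each subgroup
z_a, w_a = np.array([0.55, 0.62, 0.48]), np.array([97, 147, 72])
z_b, w_b = np.array([0.30, 0.22, 0.35]), np.array([117, 57, 197])

_, q_a = pooled(z_a, w_a)
_, q_b = pooled(z_b, w_b)

# Overall mean across all studies, then Q_between = Q_total - Q_within
z_all, w_all = np.concatenate([z_a, z_b]), np.concatenate([w_a, w_b])
_, q_total = pooled(z_all, w_all)
q_between = q_total - (q_a + q_b)
p = stats.chi2.sf(q_between, df=1)   # df = number of groups - 1
print(f"Q_between = {q_between:.2f}, p = {p:.4f}")
```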

4. Model Building and Evaluation

Perhaps the most powerful approach to comparing competing theories is to evaluate multivariate models predicted by these theories. Models are portrayals of how multiple constructs relate to one another in often complex ways. Within the metaphor of the nomological net, an association is a small piece of the net consisting of a single strand of webbing between two knots, whereas a model is a larger piece of the net (though usually still just a piece) consisting of several knots and the webbing among them. Because virtually all contemporary theorists have knowledge of a similar body of existing empirical research, different theories will often agree on the presence, direction, and approximate magnitude of a single association.20 However, theories often disagree as to the relative importance or proximity of causation among the constructs.

These disagreements can often be explicated as competing models, which can then be empirically tested. After specifying these competing models, you then use the methods of multivariate meta-analysis to synthesize the available evidence into the data needed to fit these competing models (as described earlier in this chapter). Within these models, it is possible to compare relative strengths of association to evaluate which constructs are stronger predictors of others, and to pit competing mediational models against one another to evaluate which constructs are more proximal predictors than others. Such model comparisons can empirically evaluate the predictions of competing theories, thus providing relative support for one or another. However, you should also keep in mind that your goal might be less about supporting one theory over another than about reconciling discrepancies. Toward this goal, meta-analytic moderator analyses can be used to evaluate under what conditions (of samples, methodology, or time) the models derived from each theory are supported. Such conclusions would serve the function of integrating the competing theories into a broader, more encompassing theory.
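As a concrete, if simplified, illustration of the second stage of such a multivariate meta-analysis, the sketch below (Python, hypothetical pooled correlations) derives standardized path coefficients for predicting one construct from two others directly from a synthesized correlation matrix. Dedicated MASEM software would additionally carry forward the uncertainty in the pooled correlations; this sketch ignores that step.

```python
# Minimal sketch: use a meta-analytically pooled correlation matrix as input
# to a simple path model. Standardized coefficients for predicting Y from
# X1 and X2 are obtained directly from the matrix. Values are hypothetical.
import numpy as np

# Pooled correlation matrix among X1, X2, Y
R = np.array([
    [1.00, 0.76, 0.45],
    [0.76, 1.00, 0.20],
    [0.45, 0.20, 1.00],
])

R_xx = R[:2, :2]                      # correlations among predictors
r_xy = R[:2, 2]                       # correlations of predictors with outcome
beta = np.linalg.solve(R_xx, r_xy)    # standardized regression (path) coefficients
r2 = beta @ r_xy                      # variance in Y explained by the model

print(f"beta_X1 = {beta[0]:.2f}, beta_X2 = {beta[1]:.2f}, R^2 = {r2:.2f}")
# Competing theoretical models (e.g., different mediational orderings) can be
# fit to the same pooled matrix and their fit compared.
```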

In the structural equation modeling literature, it is well known that a large number of equivalent models can fit the data equally well (e.g., MacCallum, Wegener, Uchino, & Fabrigar, 1993). In other words, you can evaluate the extent to which a particular model explains the meta-analytically derived associations, and even compare multiple models in this regard, but you cannot conclude that this is the only model that explains the associations. Because multivariate meta-analytic synthesis provides a rich set of associations among multiple constructs—perhaps a set not available in any one of the primary studies—these data can be a valuable tool in evaluating alternate models. Although I discourage entirely exploratory data mining, it is useful to explore alternate models that are plausible even if not theoretically derived (as long as you are transparent about the exploratory nature of this endeavor). Such efforts have the potential to yield unexpected models that might suggest new theories. In this regard, meta-analysis is not limited to only evaluating existing theories, but can serve as the beginning of an inductive theory to be evaluated in future research.

Source: Card, Noel A. (2015), Applied Meta-Analysis for Social Science Research, The Guilford Press; Annotated edition.
