In this section, I discuss the basic structural sections of a manuscript and special considerations in reporting meta-analytic results within these sections. Two caveats are in order here. First, I expect that you are aware of the ways that manuscripts (whether primary studies or meta-analyses) are structured within your field, in terms of what the goals of each section are, expectations about typical length, and writing conventions (e.g., as described in the American Psychological Association, 2009, Publication Manual). Second, I want to point out that in many ways, reports of meta-analyses are not different from reports of primary research. Your goal is still to provide an empirically grounded exposition that adds meaningful knowledge to your field, and the manuscript reporting your meta-analysis should make this exposition in a similar way as you would when reporting results of a primary research study.
I next outline sections of a manuscript following a structure commonly found in social science research reports: the title, introduction, method, results, discussion, references, and appendices sections. Even if your field typically uses a different structure for reporting empirical findings, I believe that these suggestions will still be useful to consider and adapt to the reporting practices in your field.
As with any manuscript, the title of your meta-analysis should be an accurate and concise statement of your research goals, questions, or findings. Your title should therefore convey the substantive focus of your review, as defined by the constructs comprising the effect sizes included in your meta-analysis. I think it is also preferable to indicate that your manuscript is a meta-analysis (or similar terms such as “meta-analytic review,” “quantitative review,” or “quantitative research synthesis”; see Chapter 1). Clearly denoting this is likely to draw the reader’s attention.
The introduction section of a report of a meta-analysis tries to accomplish the same goals as the introduction of any empirical paper: to provide a background in theory, methods, prior findings, or unresolved questions that orients readers to the goals, research questions, or hypotheses of your meta-analytic review. In presenting this case for a meta-analytic review, it is important to provide support for all aspects of your study selection and analyses. In terms of study selection, your introduction should make a clear case for why the population of studies—in terms of sample, measurement, design, and source characteristics—that you defined in your meta-analysis is important to study. Similarly, your introduction should provide a rationale for all of the analyses you report in the results section. For instance, providing evidence for a range of research findings could be useful in building the case for the uncertainty of typical findings and the need to combine these results in a meta-analysis to obtain a clearer understanding of these typical findings. If there is considerable variability in findings, as noted by previous scholars in your field and later in the findings of significant heterogeneity in your meta-analysis, then this is often motivation to perform moderator analyses (though see Chapters 8 and 9 for cautions). Of course, when you planned your meta-analysis, you made decisions about what study characteristics to code and eventually consider as moderators; you should describe the conceptual rationale for these potential moderators in your manuscript to ground and support the decisions to evaluate these moderators. In short, every decision you made in defining a population of studies and planning analyses should be supported in advance by a rationale in the introduction section of your manuscript.
The method section of your manuscript is where reporting practices become somewhat unique for meta-analyses versus primary research. Nevertheless, the same goals apply: to explain your research process in explicit enough detail that a reader fully understands what you have done to the point where he or she could, in principle, perfectly replicate your study (meta-analysis) based solely on what you have written. Next, I describe four general aspects of your methodology that you should report.
3.1. Literature Search Procedures
As I described in Chapter 3, the quality of a meta-analysis is substantially impacted by the extent to which the included studies adequately represent the population about which you wish to draw conclusions. The adequacy of this representation is in turn determined by the quality of your literature search. For this reason, it is important to explicitly describe your literature search procedures. For example, if you used electronic databases as one search strategy (and virtually every modern meta-analysis will), then it is important to detail the databases searched, the key words used (including wildcard characters), any logical operations (e.g., “and,” “or”), and the date of your last searches of these databases. You should provide similarly detailed descriptions of other search strategies (e.g., journals or conference programs searched and time span considered). Of course, it is preferable to provide brief rationales for these searches (e.g., “In order to identify unpublished studies . . . ”) rather than merely list your search strategies.
3.2. Study Inclusion and Exclusion Criteria
I mentioned in the previous subsection that the quality of a meta-analysis is impacted by whether the studies represent a population. This statement implies that the reader needs to have a clear idea of what the population is, which is defined by the inclusion and exclusion criteria you have specified. Therefore, it is critical that you clearly state your inclusion criteria that define the population of interest, as well as exclusion criteria that delineate the outer boundaries of what your population does not include. In Chapter 3, I suggested that, before searching the literature, you specify a set of inclusion and exclusion criteria. I also indicated that these criteria may need to be modified as you search the literature and begin coding studies as unexpected situations arise. In the method section of your report, you should fully detail these inclusion and exclusion criteria, specifying which criteria you specified a priori (before searching and coding) and which you specified post hoc (while searching and coding). I note here that these inclusion and exclusion criteria explicate the intended sampling frame of your meta-analysis (see Chapter 3); it will also be important to address how well the studies actually covered this sampling frame in the results section (see Section 13.2.4.a).
3.3. Coding of Study Characteristics and Effect Sizes
As you know by this point in your efforts, many decisions must be made while coding the studies that comprise your meta-analysis. It is important that you fully describe this coding process for readers. Three general aspects of the coding process that you should describe are the coding of study characteristics, the coding of effect sizes, and evidence of the reliability of your coding decisions.
As I described in Chapter 4, you could potentially code for a wide range of study characteristics in your meta-analysis. Whereas you have (or should have) provided a rationale for these study characteristics in the introduction section, here in the method section your task is to explicitly operationalize the characteristics you have coded. At a minimum, you should list the characteristics you coded, defining each term as necessary given the background of your audience and defining each of the possible values for each characteristic. For some characteristics (usually the “low-inference codes”; Chapter 4, Cooper, 2009a), this description can be very brief. For example, in describing “age” in the example meta-analysis I have described throughout this book, I might write “Age was coded as the mean age in years of the sample.” For other characteristics (especially “high-inference codes”; Chapter 4, Cooper, 2009a), the description may need to be considerably more extensive. For example, in describing the study characteristic “source of information” in this meta-analysis, it might (depending on the audience’s familiarity with these measurement practices) be necessary for me to write a sentence or two for each of the possible codes (e.g., “Self-reports were defined as any scale in which the child provided information about his or her own frequency of relational aggression, including paper-and-pencil questionnaires, responses to online surveys, and individual interviews”). Coding of even higher inference characteristics, such as “study quality” (see Chapter 4), might require multiple paragraphs. With many coded study characteristics, especially those requiring extensive descriptions, full description of all of these characteristics could take considerable space. Depending on the audience’s knowledge of your field and the space available in your publication outlet, it may be useful to present some of these details in a table or an appendix, or to make them available upon request. In either case, the suggestion I offered earlier applies: When in doubt, err on the side of reporting too much rather than too little.
You should also describe your coding of effect sizes (Chapter 5) and any artifact corrections you perform (Chapter 6). In terms of describing your coding of effect sizes, you should be sure to answer three key questions. First, how do the signs of the effect sizes represent the direction of results? For instance, in a meta-analysis of gender differences, it is important to specify whether positive effect sizes denote females or males scoring higher. Second, what effect size did you use and why? If you used a standard effect size (i.e., r, g or d, or o), then it is usually sufficient to just state this (though you should keep the audience in mind). However, if you use an advanced or unique effect size (Chapter 7), you will usually need to further justify and describe this effect size. The third question you should be sure to answer is: How did you manage the various methods of reporting effects in the literature to obtain a common effect size? If you are writing to an audience that is somewhat familiar with meta-analysis, you can likely refer them to an external source (such as this book) for details of most computations. However, you should be especially clear about how you handled situations in which studies provided inadequate information. For example, did you assume the lower-bound effect size for studies reporting only that an effect was significant, and did you assume effect sizes of zero (or 1 for odds ratios) for studies reporting that an effect was nonsignificant? In these latter cases, it may be useful to report the percentage of effect sizes for which you made lower-bound estimates to give the reader a sense of the potential biasing effects.
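To make concrete how reported statistics can be converted into a common effect size metric, here is a minimal Python sketch of two standard conversions (an independent-samples t statistic to d, and d to r under equal group sizes). The function names and the example numbers are illustrative, not drawn from the text:

```python
import math

def d_from_t(t, n1, n2):
    """Standardized mean difference d from an independent-samples t statistic."""
    return t * math.sqrt((n1 + n2) / (n1 * n2))

def r_from_d(d):
    """Correlation r from d, assuming equal group sizes."""
    return d / math.sqrt(d ** 2 + 4)

# A hypothetical study reporting t(58) = 2.50 with n1 = n2 = 30:
d = d_from_t(2.50, 30, 30)   # ≈ 0.65
r = r_from_d(d)              # ≈ 0.31
```

Documenting exactly which such conversions you applied (and to how many studies) makes your common-metric computations replicable.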
Finally, you should provide evidence of the reliability of your coding, following the guidelines I offered in Chapter 4. Specifically, report how you determined reliability (intercoder and/or intracoder; number of studies doubly coded), and the results of these reliability evaluations. If reliabilities of coding decisions were very consistent across codes (i.e., various study characteristics and effect sizes), then it is acceptable to report a range; however, if there was variability, you should report reliabilities for each of your codes separately. If initial reliability estimates were poor and led to modification of your coding protocol, you should transparently report this fact. Finally, you should offer some evaluation of whether you believe the reliability of coding was adequate (if it was not, then it will be useful to address this limitation in the discussion section of your report).
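As one illustration of how an intercoder reliability index might be computed for categorical coding decisions, the following sketch implements Cohen's kappa; the coder data are hypothetical:

```python
def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for two coders' categorical decisions on the same studies."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    categories = set(codes_a) | set(codes_b)
    p_obs = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    p_exp = sum((codes_a.count(c) / n) * (codes_b.count(c) / n)
                for c in categories)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical "source of information" codes from two independent coders:
coder1 = ["self", "peer", "peer", "teacher", "self", "peer"]
coder2 = ["self", "peer", "self", "teacher", "self", "peer"]
print(round(cohens_kappa(coder1, coder2), 2))  # → 0.74
```

For continuous codes (e.g., mean age), an intraclass correlation would be the analogous index.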
3.4. Data-Analytic Strategy
Because meta-analytic techniques are unfamiliar to many readers in many fields, and because there are differences in analytic practices among different meta-analysts, it is important that you clearly state your data-analytic strategies. If extensive description is needed, I prefer to describe these strategies as a distinct subsection of the manuscript, usually at the end of the method section, but sometimes at the beginning of the results section (you should read some articles in your field that use meta-analytic techniques, or other advanced techniques that require description, to see where this material is typically placed). Alternatively, if you can adequately describe your techniques concisely, and many readers in your field are at least somewhat familiar with meta-analysis, then you might decide to omit this section and instead provide these details throughout the results section before you present the results of each analysis.
There are at least five key elements of your data-analytic strategy that you should specify. First, you should describe how you managed multiple effect sizes from studies (see Chapter 8). Second, you should specify which weights you used for studies in your meta-analysis (e.g., inverse squared standard errors; Chapter 8). If your audience is entirely unfamiliar with meta-analysis, you might also provide justification for these weights (see Chapter 8). Third, you should describe the process of analyzing the central tendencies of effect sizes. For instance, did you base your decision to use a fixed- versus random-effects model on the results of an initial heterogeneity test, or did you make an a priori decision to use one or the other (see Chapter 10)? Fourth, you should describe your process and method of moderator analyses. Specifically, you should describe (1) whether your decision to pursue moderator analyses was guided by initial findings of heterogeneity; (2) the order in which you evaluated multiple moderators (e.g., one at a time, all at once, or some conceptually based sequence); (3) if you followed a sequence of moderator analyses, whether you used residual heterogeneity tests along the way to decide to continue or to stop; and (4) what approach to moderator analysis you used (e.g., ANOVA- or regression-based). Finally, you should make clear how you evaluated potential publication bias (see Chapter 11).
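The inverse-variance weighting mentioned above can be sketched in a few lines; the effect sizes and standard errors here are hypothetical, and the sketch shows only a fixed-effect weighted mean with its standard error and 95% confidence interval:

```python
import math

# Hypothetical per-study effect sizes (e.g., Fisher's z) and standard errors:
effect_sizes = [0.30, 0.45, 0.20, 0.55]
std_errors   = [0.10, 0.15, 0.08, 0.20]

weights = [1 / se ** 2 for se in std_errors]      # inverse squared standard errors
mean_es = sum(w * es for w, es in zip(weights, effect_sizes)) / sum(weights)
se_mean = math.sqrt(1 / sum(weights))             # SE of the weighted mean
ci_low  = mean_es - 1.96 * se_mean
ci_high = mean_es + 1.96 * se_mean
```

Reporting which weights you used (and, for unfamiliar audiences, why precision-based weights are preferable to, say, sample-size weights) makes this step of the analysis transparent.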
As you might expect, the results section of the report contains some information distinct from that in the results section of a primary study. At the same time, the underlying goal is the same in both: to accurately and clearly report the findings of your analyses so as to illuminate the research questions/hypotheses that motivated the study/meta-analysis. In this section, I describe four pieces of information that will generally be present in your results. I do not necessarily intend to suggest how you should organize your results section; for a single, relatively simple meta-analysis, the organization presented here might be useful, but for a more complex meta-analysis or a review with several meta-analyses, you will likely follow a more conceptual or methodological organization as I described earlier.
4.1. Descriptive Information
An important set of results, yet one that is often overlooked, is simply the description of the sample of studies that comprised your meta-analytic review. This information can often be summarized in a table, but the importance of this information merits at least a paragraph, if not an entire subsection, near the beginning of your results section. If your report includes multiple meta-analyses, it might be useful to report this descriptive information for both the overall collection of studies (i.e., all studies included in any of your meta-analyses) and the subsets of studies that comprised each meta-analysis.
Necessary descriptive information to report includes the number of studies (usually denoted by k), as well as the total number of participants in these studies (N, which is the sum of the Ns across the studies). I also strongly advise that you report the number of studies at different levels of coded study characteristics used in moderator analyses. For categorical characteristics, this is simply the number of studies with each value, whereas for continuous characteristics, you might report the means, standard deviations, and ranges. If your initial coding protocol included study characteristics that you ultimately did not use as moderators because of a lack of variability in values across studies, I suggest also reporting this information.
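A minimal sketch of how such descriptive counts might be tallied from coded study records (the records and field names here are hypothetical):

```python
from collections import Counter

# Hypothetical coded studies: sample size and one categorical moderator
studies = [
    {"n": 120, "informant": "self"},
    {"n": 85,  "informant": "peer"},
    {"n": 200, "informant": "peer"},
    {"n": 60,  "informant": "teacher"},
]

k = len(studies)                                   # number of studies (k)
total_n = sum(s["n"] for s in studies)             # total participants (N)
levels = Counter(s["informant"] for s in studies)  # studies per moderator level
```

For continuous study characteristics, the analogous summary would be the mean, standard deviation, and range across studies.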
In addition to reporting this descriptive information, it is worth writing some comments about these data, as they describe both the sample for your meta-analysis and the state of the empirical literature in your field. For instance, it is useful to note if some values of your moderators are underrepresented in the existing literature (e.g., few studies have sampled certain types of individuals, few studies have used a particular methodology), or if certain combinations of moderators (e.g., particular methodologies with certain types of individuals) are underrepresented. It is also useful to comment on study characteristics that did not vary, and potentially to discuss the implications of this homogeneity in the discussion. In short, it is useful to describe the nature of the sample of studies (and by implication, the field of your meta-analysis), and to point out the sampling, measurement, and methodological strengths and shortcomings of this body of research.
4.2. Central Tendencies and Heterogeneity
Turning to the analytic results, most reports describe the results of central tendency and heterogeneity tests before the results of moderator analyses. Regarding central tendency, or (usually) mean effect sizes, you should clearly state whether the mean was obtained through fixed- or random-effects models, the standard error of this mean effect size, and the (typically 95%) confidence interval of this mean. Although the confidence interval generally suffices for significance testing, you might also choose to report the statistical significance of this effect size. In reporting these results, be sure to provide “words” that help readers make sense of the “numbers.” Put differently, avoid simply listing means, confidence intervals, and the like, but rather provide narrative descriptions of them. For instance, it might be useful to some readers to have the direction of association described (e.g., to interpret a positive mean correlation: “Higher levels of relational aggression are associated with higher peer rejection”), and it is usually useful to characterize the magnitude of effect sizes according to standards in your field or else commonly applied guidelines (e.g., Cohen, 1969, characterization of rs ~ ±.10, .30, and .50 as small, medium, and large, respectively).
In addition to the mean effect size, it is important to describe the heterogeneity of effect sizes to give readers a sense of the consistency versus variability, as well as the range, of findings. Although you will almost certainly report the results of the heterogeneity test, the Q statistic described in Chapter 8 (Section 8.4), you should bear in mind the limits of this statistic given that it is a statistical significance test (i.e., it can have very high or low statistical power). For this reason, it may be useful to supplement reporting of the Q statistic with a description of the magnitude of heterogeneity. One possibility is to describe quantitatively the magnitude of this heterogeneity by reporting the I2 index. Another is to display the heterogeneity visually using one of the figures I describe in Section 13.3. With either approach, it is important to describe (again, using words) this homogeneity or heterogeneity, and how this information was used in decisions regarding other analyses (e.g., to use random-effects models, to perform moderator analyses).
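The Q statistic and the I2 index can be computed as in this sketch, which reuses hypothetical effect sizes and standard errors and follows the standard formulas (Q as the weighted sum of squared deviations from the fixed-effect mean; I2 = (Q − df)/Q, floored at zero):

```python
effect_sizes = [0.30, 0.45, 0.20, 0.55]   # hypothetical values
std_errors   = [0.10, 0.15, 0.08, 0.20]

weights = [1 / se ** 2 for se in std_errors]
mean_es = sum(w * es for w, es in zip(weights, effect_sizes)) / sum(weights)

# Q: weighted sum of squared deviations from the fixed-effect mean
Q = sum(w * (es - mean_es) ** 2 for w, es in zip(weights, effect_sizes))
df = len(effect_sizes) - 1                # k - 1 degrees of freedom
# I2: percentage of total variability attributable to true heterogeneity
I2 = max(0.0, (Q - df) / Q) * 100
```

Because I2 describes magnitude rather than significance, it is a useful complement to the Q test when discussing how consistent the effect sizes are.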
4.3. Moderator Analyses
If moderator analyses are conducted in your meta-analysis (and most meta-analyses will involve some moderator analyses), then it is important to fully report these results. Specifically, you should report the Q statistic, degrees of freedom, and significance level for each moderator analysis you perform (whether performed within an ANOVA or a regression framework; see Chapter 9). It is also common to report the within-group or residual heterogeneity (Q) remaining after accounting for this moderator or set of moderators. For categorical moderators with more than two levels, it is also necessary to report results of follow-up comparisons (see Chapter 9).
You should not stop at reporting only the significance tests of your moderator analyses; it is also important to report the numbers of studies and the typical effect sizes at various levels of the moderators. For a single categorical moderator this is straightforward: You simply report the numbers of studies and mean effect sizes within each of the levels of the moderator. For multiple categorical moderators, you should report the numbers of studies and mean effect sizes within the various combinations across the multiple moderator variables. For continuous moderators, it is not advisable to artificially categorize the continuous moderator variable and then report information (numbers of studies and mean effect sizes) within these artificial groups, though this practice is sometimes followed. Instead, I suggest using the intercept and regression coefficient(s) of your regression-based moderator analysis to compute predicted effect sizes at different levels of the moderator, and then report these predicted effect sizes across a range of the moderator variable values well-covered by the studies in your meta-analysis. In Chapter 9 (Section 9.2), I presented an example in which effect sizes of the association between relational aggression and peer rejection were predicted by (i.e., moderated by) the mean ages of the samples, and I computed the expected effect sizes for the ages 5, 10, and 15 years (intuitive values that represented the span of most studies in the meta-analysis).
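Computing predicted effect sizes from the intercept and slope of a regression-based moderator analysis can be sketched as follows; the coefficients here are hypothetical values in the Fisher's z metric, converted back to r for interpretation:

```python
import math

# Hypothetical meta-regression coefficients: predicted z = b0 + b1 * mean_age
b0, b1 = 0.10, 0.02

predicted = {}
for age in (5, 10, 15):                 # ages well covered by the studies
    z = b0 + b1 * age
    predicted[age] = math.tanh(z)       # back-transform Fisher's z to r
    print(f"age {age}: z = {z:.2f}, r = {predicted[age]:.2f}")
```

Choosing intuitive moderator values within the range actually covered by the included studies (as with 5, 10, and 15 years here) avoids extrapolating beyond the data.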
Before concluding my suggestions for reporting moderator analysis results, I want to remind you of a key threat to moderator analysis in meta-analytic reviews: that the variable you have identified as the moderator is not the “true” moderator but is only associated with, or serves as a proxy for, the true moderator. If alternate potential moderators are study characteristics that you have coded, then it is important to report results either (1) ruling out these alternative explanations, or (2) showing that the variable you believe is the true moderator is predictive of effect sizes after controlling for the alternative moderator variables (see Section 9.4). You should report these findings in the results section. However, it is also worth considering that you can never definitively determine whether the moderator variable you have identified is the true moderator, or whether it simply serves as a proxy for another, uncoded study characteristic that is the true moderator. This is a limitation that should be considered in the discussion section of your report.
4.4. Diagnostic Analyses
Earlier (Chapters 2, 11) I described the widely known threat to meta-analyses (and all other literature reviews) posed by publication bias. Given that this threat is both widely known and potentially severely biasing to the results of a meta-analysis, it is important to report evidence evaluating this threat. Specifically, you should report your efforts (1) to evaluate the presence of this threat, such as moderator analyses, funnel plots, or regression analyses; (2) to show how plausible it is that enough unretrieved literature with null results could exist to invalidate your conclusions (i.e., various fail-safe numbers); and (3) to detail the approaches you used to correct for this potential bias (e.g., trim and fill, weighted selection) (see Chapter 11). After providing all available evidence regarding potential publication bias, you should offer the reader a clear statement of how likely it is that publication bias impacted your findings.
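One commonly reported fail-safe number, Rosenthal's fail-safe N, can be computed as in this sketch; the per-study z statistics are hypothetical, and z_crit = 1.645 corresponds to a one-tailed alpha of .05:

```python
def failsafe_n(z_values, z_crit=1.645):
    """Rosenthal's fail-safe N: the number of unretrieved null-result studies
    needed to raise the combined one-tailed p above .05."""
    k = len(z_values)
    return (sum(z_values) ** 2) / (z_crit ** 2) - k

# Hypothetical z statistics from k = 5 included studies:
zs = [2.1, 1.8, 2.5, 1.2, 2.9]
print(round(failsafe_n(zs), 1))  # → 35.7
```

A large fail-safe N relative to k suggests that an implausibly large file drawer would be needed to nullify the combined result, though this index should be interpreted alongside funnel-plot and regression-based evidence rather than in isolation.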
The discussion section of your report should place the findings of your meta-analytic review in the context of your field. It is tempting to let the numbers speak for themselves, but do not assume that they speak to the reader. Although the discussion section likely allows the most liberty in terms of writing (you can think of it as your opportunity to add the “qualitative finesse” that some critics have charged is absent from meta-analyses; see Chapter 2), you should consider including at least four components in this section. I discuss each of these components next in an order in which they commonly (though not necessarily) appear in discussion sections of meta-analytic reports.
5.1. Review of Findings
Although you should be careful to avoid extensive repetition of results in the discussion section, it is sometimes useful to provide a brief overview of key findings, especially if the results section was long, technical, or complex. It is useful to highlight the findings that you will most extensively discuss in this section, though you should certainly not omit findings that were unexpected or contradictory to your hypotheses (these are typically important to consider further).
5.2. Explanations and Implications of Findings
You should remember that the main purpose of your meta-analytic review was to answer some research questions, which presumably are important to your field in some way. The majority of your efforts in the discussion section should be directed to describing how your results provide these answers (when they do) and how these answers increase understanding within your field. For instance, do the findings of your review provide answers that support existing theory, support one theory over another, or suggest the need for refinement of existing theories in your field? Do the answers inform policy or practice in your field?
While providing answers to these questions is useful, you should also recognize the limits to the information provided by the existing research that comprised your review. This recognition can guide where more primary empirical research is needed, and it is important for your review to identify this need. For example, if you could not reach reasonably definitive conclusions to some of your research questions due to low statistical power (resulting from few studies or studies with small sample sizes), then you should state the need for further research to inform this question. Your descriptive summary of study characteristics also speaks to the types of studies that have not been performed (e.g., specific sample characteristics, measurement characteristics, etc., and combinations of these characteristics). Conversely, if you find that a large number of studies (or a number of studies with large samples) using very similar samples, measures, and the like, have been performed, and that the results are homogeneous and provide a very precise estimate of this effect size, then it is also valuable to state that more studies of this type are not needed (better that future research invest efforts toward providing new information). In short, I encourage you to remember that you have just spent months carefully studying and meta-analyzing nearly all of the work in the area of your meta-analysis, so you are in a very informed position to say where the field needs to go; it is a valuable contribution for you to make clear statements that guide these future efforts.
As when you are reporting the results of any empirical study, it is important for you to acknowledge the limitations of your meta-analytic review.
Some of these limitations may be shortcomings of the available empirical basis, and I have already encouraged you to make clear statements of what these limitations are. Other limitations are particular to literature reviews (including meta-analyses), such as the limitations of drawing conclusions about moderator variables and potential publication bias. You should also make clear the limitations to what can be inferred from the types of studies and effect sizes you have included in your meta-analysis. For instance, you should describe the limitations to inferring causality from effect sizes from concurrent naturalistic studies (see Chapter 2). For every limitation you identify, I encourage you to provide a rationale for why this limitation is more or less threatening to your conclusions, and to indicate how future research might resolve these issues (this piece of advice is relevant for any research report, not just those using meta-analyses).
Given the often high impact and broad readership of reports of meta-analyses, it is critical that your text conclude with a clear statement of how your meta-analytic review advances understanding, and why this advancement is important.
As with any other scholarly report, your meta-analytic review will include a list of references. Although typical practices vary across disciplines, I note two practices that are common in the field of psychology (as described in the American Psychological Association, 2009, Publication Manual) and in many other areas of social science. First, all of the studies included in your meta-analysis should be included in your reference list. Second, the first line of your reference section (after the “References” heading but before the first reference) should contain a statement such as “Studies preceded by an asterisk were included in the meta-analysis”; you should then place an asterisk before the reference for each study that was included in your meta-analytic review.
Different journals have different standards and preferences for material being included in the main body of the text, in appendices printed at the end of the article, or (more recently) in appendices available through the journal’s website. Depending on the practices of your targeted journal, however, it might be useful to consider using appendices for some of the lengthier information that is important to report yet not of interest to many readers. For instance, tables summarizing the coding of all studies included in your meta-analysis (see Section 13.3.2) are important because they allow readers to judge the completeness of your review and your coding practices; however, such tables are lengthy and often of peripheral interest to many readers. These tables might ideally be placed in an appendix rather than in the text proper.
Source: Card, Noel A. (2015). Applied Meta-Analysis for Social Science Research. The Guilford Press, annotated edition.