Practical Matters: Avoiding Common Problems in Reporting Results of Meta-Analyses

In this section, I identify 10 problems that I perceive to be common in reporting results of meta-analytic reviews. More importantly, I offer concrete suggestions for how you can avoid each. Although following these suggestions will not guarantee that your meta-analytic report will be successful (whether defined by publication in a top outlet, high impact, or any other criterion), doing so will help you avoid some of the most common obstacles.

  1. Disconnecting conceptual rationale and data analyses. One of the more common problems with written reports of meta-analyses (and probably most empirical papers) is a disconnect between the conceptual rationale for the review in the introduction and the analyses and results actually presented. Every analysis should be performed for a reason, and this reason should be described in the introduction of your paper. Even if some analyses were entirely exploratory, it is better to state as much rather than have readers guess why you performed a particular analysis. A good way to avoid this problem is simply to compile a list of the analyses presented in your results section, and then identify the section in your introduction in which you justify each analysis.
  2. Providing insufficient details of methodology. I have tried to emphasize the importance of describing your meta-analytic method in sufficient detail so that a reader could—at least in principle—replicate your review. This level of detail requires extensive description of your search strategies, inclusion and exclusion criteria, practices of coding both study characteristics and effect sizes, and the data-analytic strategy you used. Because it is easier to know what you did than to describe it, I strongly recommend that you ask a colleague familiar with meta-analytic techniques to review a draft of your description to determine if he or she could replicate your methodology based only on what you wrote.
  3. Writing a phone book. Phone books contain a lot of information, but you probably do not consider them terribly exciting to read. When presenting the results of your meta-analysis, you have a tremendous amount of information to potentially present: results of many individual studies, a potentially vast array of summary statistics about the central tendency and heterogeneity of effect sizes, likely a wide range of nuanced results of moderator analyses, analyses addressing publication bias, and so on. Although it is valuable to report most or all of these results (that is one of the main purposes of sharing your work with others), this reporting should not be an uninformative listing of numbers that fails to tell a coherent story. Instead, it is critical that the numbers are embedded within an understandable story. To test whether your report achieves this, try the following exercise: (1) take what you believe is a near-complete draft of your results section, and delete every clause that contains a statistic from your meta-analysis or any variant of “statistical significance”; (2) read this text and see if what remains provides an understandable narrative that accurately (if not precisely) describes your results. If it does not, then this exercise should highlight the places where you need to better guide readers through your findings.
  4. Allowing technical complexity to detract from message. Robert Rosenthal once wrote, “I have never seen a meta-analysis that was ‘too simple’” (Rosenthal, 1995, p. 183). Given that Rosenthal was one of the originators of meta-analytic techniques (see Chapter 1) and has probably read far more meta-analytic reviews than you or I ever will, his insight is important. Although complex meta-analytic techniques can be useful for answering some complex research questions, you should keep in mind that many important questions can be answered using relatively simple techniques. I encourage you to use techniques that are as complex as needed to adequately answer your research questions, but no more complex than that. With greater complexity of your techniques comes a greater chance of (1) making mistakes that you may fail to detect, and (2) confusing your readers. Even if you feel confident in your ability to avoid mistakes, the cost of confusing readers is high in that they are less likely to understand and—in some cases—to trust your conclusions. The acronym KISS (Keep It Simple, Stupid) is worth bearing in mind. To test whether you have achieved adequate simplicity, I suggest that you (1) have a colleague (or multiple colleagues)—one who is unfamiliar with meta-analysis but is otherwise a regular reader of your targeted publication outlet—read your report; then (2) ask this colleague or colleagues to describe your findings to you. If there are any aspects that your colleague is unable to understand or that lead to inaccurate conclusions, then you should edit those sections to be understandable to readers not familiar with meta-analysis.
  5. Forgetting why you performed the meta-analysis. Although I doubt that many meta-analysts really forget why they performed a meta-analysis, the written reports often seem to indicate that they have. This is most evident in the discussion section, where too many writers neglect to make clear statements about how the results of their meta-analysis answer the research questions posed and advance understanding in their field. Extending my earlier recommendation (problem 1 above) for ensuring connections between the rationale and the analyses performed, you should be sure that the items on your list of analyses and conceptual rationales are addressed in the discussion section of your report. Specifically, be sure that you have clearly stated (1) the answers to your research questions, or why your findings did not provide answers, and (2) why these answers are important to understanding the phenomenon or guiding application (e.g., intervention, policy).
  6. Failing to consider the limits of your sample of studies. Every meta-analysis, no matter how ambitious the literature search or how liberal the inclusion criteria, necessarily involves a finite—and therefore potentially limited—sample of studies. It is important for you to state—or at least speculate—where these limits lie and how they qualify your conclusions. You should typically report at least some results evaluating publication bias (see Chapter 11) and comment on these in the discussion section; one common check, Egger's regression test for funnel plot asymmetry, is sketched after this list. Evidence of publication bias does not constitute a fatal flaw of your meta-analysis if your literature search and retrieval strategies were as extensive as can reasonably be expected, but you should certainly be clear about the threat of publication bias. Similarly, you should clearly articulate the boundaries of your sample as determined either by your inclusion/exclusion criteria (Chapter 3) or by the characteristics of the available empirical literature (elucidated by your reporting of descriptive information about your sample of studies). Description of the boundaries of your sample should be followed by speculation regarding the limits of generalizability of your findings.
  7. Failing to provide (and consider) descriptive features of studies. Problem 4 (allowing technical complexity to detract from your message) and problem 6 (failing to consider the limits of your sample) too often converge in the form of this problem: failing to provide basic descriptive information about the studies that make up your meta-analysis. As mentioned, reporting this information is important for describing the sample from which you draw conclusions, as well as for describing the state of the field and making recommendations for further avenues of research. The best way to ensure that you provide this information is to include a section (or at least a paragraph or two) at the beginning of your results section that presents it.
  8. Using fixed-effects models in the presence of heterogeneity. This is a rather specific problem, but one that merits special attention. As you recall from Chapter 10, fixed-effects models assume a single population effect size (any variability among effect sizes across studies is due to sampling error), whereas random-effects models allow for a distribution of population effect sizes. If you use a fixed-effects model to calculate a mean effect size across studies in the presence of substantial heterogeneity, then the failure to model this heterogeneity yields standard errors (and resulting confidence intervals) that are smaller than is appropriate. To avoid this problem, you should always evaluate heterogeneity via the heterogeneity significance test (Q; see Chapter 8) as well as some index that is not affected by the size of your sample (such as I²; see Chapter 8); a computational sketch of these quantities follows this list. If there is evidence of statistically significant or substantial heterogeneity, then you are much more justified in using a random- rather than a fixed-effects model (see Chapter 10 for considerations). A related problem to avoid is making inappropriately generalized conclusions from fixed-effects models; you should be careful to frame your conclusions according to the model you used to estimate mean effect sizes in your meta-analysis (see Chapter 10).
  9. Failing to consider the limits of meta-analytic moderator analyses. I have mentioned that the results of moderator analyses are often the most important findings of a meta-analytic review. However, you should keep in mind that findings of moderation in meta-analyses are necessarily correlational—that is, certain study characteristics covary with larger or smaller effect sizes. This awareness should remind us that findings of moderation in meta-analyses (or any nonexperimental study) cannot definitively establish that the presumed moderator is not just a proxy for another moderator (i.e., another study characteristic). You should certainly acknowledge this limitation when describing moderator results from your meta-analysis, and you should consider alternative explanations. Of course, the extent to which you can empirically rule out other moderators (through multiple regression moderator analyses controlling for them; see Chapter 10, and the meta-regression sketch after this list) diminishes the range of competing explanations, and you should note this as well. To ensure that you avoid the problem of overinterpreting moderator results, I encourage you to jot down (separate from your manuscript) at least three alternative explanations for each moderator result, and write about those that seem most plausible.
  10. Believing there is a “right way” to perform and report a meta-analysis. Although this chapter (and other works; e.g., Clarke, 2009; Rosenthal, 1995) provides concrete recommendations for reporting your meta-analysis, you should remember that these are recommendations rather than absolute prescriptions. There are contexts in which it is necessary to follow predetermined formats for reporting the results of a meta-analysis (e.g., when writing a commissioned review as part of the Campbell [www.campbellcollaboration.org] or Cochrane [www.cochrane.org] Collaborations), but these are exceptions to the latitude typically available in presenting the results of your review. This does not mean that you should present your work deceptively, but rather that you should consider the myriad possibilities for presenting your results, keeping in mind the goals of your review, how you think the findings are best organized, the audience for your review, and the space limitations of your report. I believe that the suggestions I have made in this chapter—and throughout the book—are useful if you are just beginning to use meta-analytic techniques. But as you gain experience and consider how best to present your findings, you are likely to find instances where I have written “should” that are better replaced with “should usually, but . . . ”. I encourage you to use my (and others’) recommendations as jumping-off points for your own efforts in presenting your findings.
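To make problem 6 more concrete, the following Python sketch shows one common way to probe for small-study effects: Egger's regression test for funnel plot asymmetry, which regresses each study's standardized effect on its precision. This is an illustration only, not a procedure prescribed by the chapter, and the effect sizes and standard errors in the usage example are hypothetical.

```python
import numpy as np
from scipy import stats

def egger_test(effects, ses):
    """Egger's regression test for funnel plot asymmetry.

    Regresses each study's standardized effect (effect / SE) on its
    precision (1 / SE). An intercept that differs reliably from zero
    suggests small-study effects consistent with publication bias.
    """
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(ses, dtype=float)
    z = effects / ses                      # standardized effects (outcome)
    precision = 1.0 / ses                  # predictor
    X = np.column_stack([np.ones_like(precision), precision])

    # Ordinary least squares by hand: coefficients and their covariance
    coefs, *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ coefs
    df = len(z) - 2
    mse = float(resid @ resid) / df
    cov = mse * np.linalg.inv(X.T @ X)

    intercept, intercept_se = coefs[0], np.sqrt(cov[0, 0])
    t_stat = intercept / intercept_se
    p_value = 2 * stats.t.sf(abs(t_stat), df)
    return {"intercept": intercept, "se": intercept_se,
            "t": t_stat, "df": df, "p": p_value}

# Hypothetical example: standardized mean differences and their SEs
d = [0.42, 0.31, 0.55, 0.12, 0.48, 0.60, 0.25, 0.38]
se = [0.10, 0.15, 0.20, 0.08, 0.25, 0.30, 0.12, 0.18]
print(egger_test(d, se))
```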
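For problem 8, the quantities involved are straightforward to compute once you have per-study effect sizes and standard errors. The sketch below (a minimal Python illustration with hypothetical inputs) computes the fixed-effect mean, the Q statistic, I², a DerSimonian-Laird estimate of tau², and the corresponding random-effects mean, so the two models' standard errors can be compared directly. It is a sketch of the standard formulas, not a replacement for the procedures described in Chapters 8 and 10.

```python
import numpy as np

def heterogeneity_summary(effects, ses):
    """Fixed-effect mean, Q, I^2, DerSimonian-Laird tau^2, and the
    resulting random-effects mean, from per-study effects and SEs."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(ses, dtype=float) ** 2   # within-study variances
    w = 1.0 / v                             # fixed-effect weights
    k = len(y)

    mean_fe = np.sum(w * y) / np.sum(w)     # fixed-effect mean
    se_fe = np.sqrt(1.0 / np.sum(w))

    q = np.sum(w * (y - mean_fe) ** 2)      # heterogeneity statistic, df = k - 1
    df = k - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

    # DerSimonian-Laird between-study variance estimate
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)

    w_re = 1.0 / (v + tau2)                 # random-effects weights
    mean_re = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))     # wider than se_fe when tau2 > 0

    return {"mean_fe": mean_fe, "se_fe": se_fe, "Q": q, "df": df,
            "I2": i2, "tau2": tau2, "mean_re": mean_re, "se_re": se_re}

# Hypothetical example: Fisher's z-transformed correlations and their SEs
z_effects = [0.30, 0.10, 0.45, 0.22, 0.05, 0.38]
z_ses = [0.08, 0.12, 0.10, 0.09, 0.15, 0.11]
print(heterogeneity_summary(z_effects, z_ses))
```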
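For problem 9, moderator analyses that control for competing study characteristics are typically carried out as weighted (meta-) regressions of effect sizes on those characteristics. The sketch below shows a minimal fixed-effect meta-regression with a single hypothetical moderator; a mixed-effects version would add an estimate of residual tau² to each study's variance before weighting (see Chapter 10). The variable names and data are illustrative assumptions, not an example from the book.

```python
import numpy as np

def meta_regression(effects, ses, moderator):
    """Weighted least-squares (fixed-effect) meta-regression of effect
    sizes on a study-level moderator. Weights are inverse within-study
    variances; the coefficient covariance is (X' W X)^-1 under this model."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(ses, dtype=float) ** 2
    X = np.column_stack([np.ones(len(y)), np.asarray(moderator, dtype=float)])
    W = np.diag(1.0 / v)

    xtwx_inv = np.linalg.inv(X.T @ W @ X)
    beta = xtwx_inv @ X.T @ W @ y           # intercept and moderator slope
    se_beta = np.sqrt(np.diag(xtwx_inv))    # standard errors of coefficients
    z = beta / se_beta                      # Wald z statistics
    return beta, se_beta, z

# Hypothetical example: does publication year covary with effect size?
d = [0.50, 0.35, 0.60, 0.20, 0.45]
se = [0.10, 0.12, 0.15, 0.09, 0.11]
year_centered = [-4, -2, 0, 2, 4]           # years relative to the median year
beta, se_b, z = meta_regression(d, se, year_centered)
print("slope for year:", beta[1], "z:", z[1])
```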

Source: Card, Noel A. (2015), Applied Meta-Analysis for Social Science Research, The Guilford Press; Annotated edition.
