The use of systematic reviews to analyze nutrition research, set nutrition policy guidelines, and identify gaps and needs for future research

Do you ever dream of immersing yourself in the entire body of literature on your favorite nutrient? If so, you may be cut out to be a systematic reviewer. If you read the American Journal of Clinical Nutrition (AJCN), Nutrition Reviews, and even some of the high-impact-factor medical journals, you may have noticed more and more systematic reviews of the evidence (also called “evidence reviews” and sometimes “meta-analyses”) on various nutrition questions. How does a systematic review differ from a non-systematic review? What is involved in conducting one, what are the challenges, and what role do systematic reviews and meta-analyses play in guiding nutrition policy?

What makes a review systematic?

Systematic review and meta-analysis—a statistical method for combining (pooling) data from multiple studies—have been household names in social science and health care research for decades, and the process itself may date back a century. Yet their application to nutrition has emerged, and grown, only over the past 15 years, and along with that growth has come some controversy about their use and interpretation. The hallmarks of systematic reviews are their rigor, adherence to a fairly standardized set of methods, and transparent reporting of every detail of the process:

  1. Developing a conceptual framework to put the topic or question into perspective.
  2. Defining the relevant populations, interventions or exposures, comparison groups (controls), outcomes, time frames, and study designs (PICOTS, for short) for studies to be included in the review.
  3. Designing and conducting literature searches to try to identify all studies that potentially meet the inclusion criteria.
  4. Reviewing the titles and abstracts identified by the literature search, usually performed by two independent reviewers, for studies that seem relevant.
  5. Screening the full texts of all publications conditionally accepted based on titles and abstracts to ascertain which ones truly meet all inclusion criteria, and occasionally conducting additional searches with new terms if glaring omissions are noted.
  6. Abstracting data from accepted studies—both study-level data (such as descriptions of the population, study setting, interventions, and outcome measures) and outcome data (findings). Study-level factors are critical to help understand differences in the findings from one study to another (called study heterogeneity).
  7. Assessing individual study quality, including methods for recruiting and randomizing participants, blinding to group assignment, and adherence and dropout rates.
  8. Synthesizing the data—either quantitatively via meta-analysis or qualitatively via narrative descriptions of studies—and assessing the amount and sources of heterogeneity (a minimal sketch of the quantitative pooling step appears after this list).
  9. Drawing conclusions and identifying research gaps—areas with no or few studies.
  10. Assessing the strength of the evidence underlying each conclusion using a method like Grading of Recommendations Assessment, Development and Evaluation (GRADE), which considers factors like the overall number, size, and quality of the studies; whether they directly or only indirectly answer the question; and heterogeneity.
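
To make step 8 concrete, here is a minimal sketch of the quantitative pooling step, assuming a DerSimonian-Laird random-effects model with Cochran's Q and I² to quantify heterogeneity. The effect sizes and standard errors below are hypothetical, invented only to illustrate the arithmetic; a real review would pool the outcome data abstracted in step 6.

```python
# A minimal sketch of quantitative synthesis (step 8): pooling effect
# estimates with a DerSimonian-Laird random-effects model and quantifying
# heterogeneity with Cochran's Q and I^2.
# The three studies below are hypothetical, used only to show the arithmetic.
import numpy as np

effects = np.array([0.30, 0.10, 0.45])  # hypothetical per-study effect sizes
se = np.array([0.12, 0.15, 0.20])       # hypothetical standard errors

# Inverse-variance (fixed-effect) weights and pooled estimate.
w = 1.0 / se**2
fixed = np.sum(w * effects) / np.sum(w)

# Cochran's Q and I^2 describe how much the studies disagree (heterogeneity).
q = np.sum(w * (effects - fixed) ** 2)
df = len(effects) - 1
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# DerSimonian-Laird estimate of between-study variance (tau^2), then
# random-effects weights and the pooled random-effects estimate.
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)
w_re = 1.0 / (se**2 + tau2)
pooled = np.sum(w_re * effects) / np.sum(w_re)
pooled_se = np.sqrt(1.0 / np.sum(w_re))

print(f"Pooled effect: {pooled:.3f} (SE {pooled_se:.3f}), I^2 = {i2:.0f}%")
```

In practice, reviewers rely on established meta-analysis software (for example, the metafor package in R) rather than hand-rolled code, but the underlying arithmetic is the same.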

Many of these steps are guided by input from technical experts and other interested parties (such as policymakers, research planners, insurers, consumers, and patients), with the leaders of the review taking care to exclude or balance biased positions among the stakeholders. Finally, every detail of the review—including the reasons for excluding studies—is painstakingly documented, so the review can be duplicated, compared to others, and updated with new evidence.


The challenges of using systematic reviews to answer nutrition questions

To test the utility of the systematic review process for answering nutrition questions, the Agency for Healthcare Research and Quality (AHRQ) Evidence-based Practice program in 2002 selected the evidence-based practice center (EPC) in which I work, along with two others, to conduct a series of systematic reviews of the evidence on the health effects of omega-3 fatty acids. Among the challenges we described in a lessons-learned paper in AJCN were:

  • Establishing the appropriate conceptual frameworks when mechanisms are unclear
  • Defining—and identifying studies that include—the right populations
  • Identifying valid measures of nutrient intake, exposure, and change in status
  • Determining relevant and valid outcome measures
  • Weighing the advantages and disadvantages of including observational studies, since they cannot demonstrate cause and effect
  • Managing the astronomically large numbers of potentially relevant publications identified in literature searches
  • Assessing study quality

The challenges of using systematic reviews to set nutrition guidelines

Responding to the myriad challenges we identified, AHRQ’s Evidence-based Practice program prioritized optimizing methods for conducting systematic reviews in nutrition. Soon thereafter, the National Academies of Sciences (NAS) commissioned the program to conduct the first systematic review to support the Dietary Reference Intake (DRI) committee in updating the calcium, vitamin D, and phosphorus DRIs. Our EPC has since conducted additional systematic reviews on vitamin D and omega-3 fatty acids and, just recently, a review to support the update of the sodium and potassium DRIs. Despite the 12 years that have elapsed since our lessons-learned paper was published, many of the same challenges remain, attributable less to the review process than to the nutrition research process itself. And as the U.S. and Canadian governments have committed to considering the role of nutrition in chronic disease outcomes in updating the DRIs, the challenges have grown.

Randomized controlled trials (RCTs) are considered the gold standard of research design in laboratory animal research and human drug research because they minimize sources of potential bias, ensuring (one hopes) that any effects are attributable to the experimental intervention. And systematic reviews are best suited to pooled analysis of RCTs. But human nutrition RCTs are extremely challenging and costly to conduct, difficult to interpret because of the many identifiable and unidentifiable potential sources of bias, and typically of short duration, which limits the kinds of questions they can answer. Chronic diseases often take decades to develop, and exhaustive efforts to identify valid short-term indicators of heightened disease risk that respond appropriately to nutritional interventions have met with limited success. In contrast, observational studies can follow cohorts for decades. Yet assessing the relevant baseline dietary exposures and identifying and controlling for all potential sources of bias is virtually impossible in observational studies, and assessment methods thought to be valid at one time may subsequently be found useless. Nonetheless, these studies dominate human nutrition research, and not a week goes by without one of their seemingly definitive findings making headlines.

Should systematic reviews continue to be incorporated into the nutrition guideline-setting process?

That the 2020 Dietary Guidelines should include systematic reviews of the evidence, as recommended by a recent NAS report, is unquestionable. Systematic reviews provide the only transparent, replicable methods for assessing the evidence and identifying the research gaps. Even if, as some experts recommend, nutritionists conduct large, well-controlled, relatively long-term RCTs with valid intermediate outcome measures of chronic disease risk (such as the DASH Sodium trial, a large study designed to assess the effects of improving overall dietary quality and lowering sodium intake) to support future guidelines, evidence reviews will still be needed. However, these reviews need to be conducted—and incorporated into guidelines—thoughtfully, with their limitations and challenges in mind. Fortunately, our machine-learning colleagues continue to refine algorithms to manage the increasingly voluminous body of research and may soon make it possible to combine findings from randomized controlled trials and prospective cohort studies. Incorporating machine-learning tools into the literature search process has already demonstrably reduced the amount of human effort needed to systematically review the ever-growing nutrition literature and will, I hope, continue to keep our heads above water.
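
To illustrate how such a tool might work, here is a minimal sketch of machine-assisted title and abstract screening: a classifier trained on records that reviewers have already labeled ranks the remaining records so the most likely relevant ones are screened first. The example abstracts, labels, and model choice (TF-IDF features with logistic regression via scikit-learn) are assumptions for illustration, not a description of any particular EPC's workflow.

```python
# A minimal sketch of machine-assisted title/abstract screening: train a
# classifier on records human reviewers have already labeled, then rank the
# unscreened records by predicted relevance. All texts and labels below are
# hypothetical placeholders for the records a literature search would return.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

labeled_abstracts = [
    "Randomized trial of omega-3 supplementation and triglyceride levels",
    "Cohort study of fish intake and incident cardiovascular disease",
    "Review of aquaculture feed economics",
    "Case report of an unrelated dermatologic condition",
]
labels = [1, 1, 0, 0]  # 1 = advance to full-text screening, 0 = exclude

unlabeled_abstracts = [
    "Effect of EPA and DHA on blood pressure in a controlled feeding study",
    "Survey of seafood marketing practices in retail stores",
]

# Represent each abstract as TF-IDF features and fit a simple classifier.
vectorizer = TfidfVectorizer(stop_words="english")
model = LogisticRegression().fit(vectorizer.fit_transform(labeled_abstracts), labels)

# Rank unscreened records by predicted probability of relevance.
scores = model.predict_proba(vectorizer.transform(unlabeled_abstracts))[:, 1]
for text, score in sorted(zip(unlabeled_abstracts, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {text}")
```

A ranking like this only prioritizes human effort; the records still pass through the dual-reviewer screening described above before any are included in a review.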
