I spend a lot of time reading social media posts from various dietary tribes, and I constantly see success stories of weight loss and a wide range of other health observations. To build on my last post on social media misinformers, the anecdote is a key feature binding these groups together and reinforcing their dietary supremacy. Anecdotes about a particular diet’s efficacy are often trumpeted by highly motivated individuals for whom the diet may align with personal ideologies. But when thinking about health and nutrition scientifically, how should we internalize them? At the extreme ends, people interpret anecdotes in very different ways. For example, if a particular diet helped someone lose more weight than it did in research trials comparing diets, some will say that the diet simply can’t be studied appropriately in a research setting. On the other end, someone with training in science might suggest we disregard anecdotes completely as unreliable. I’ve been pondering lately how much weight we should give to the diet anecdotes we read online. In this post, I’ll argue that a mixture of both views, applied in the appropriate contexts, is probably the best approach, and explore several cognitive biases that make us exaggerate the importance of anecdotes when interpreting health and nutrition.

We do scientific trials in nutrition because individual anecdotes cannot give us an accurate sense of how well something works. If I go on a weight loss diet and find success, I am not likely to then try a different one while holding all other factors in my life constant to see if mine really was the best. This is the beauty of randomized controlled trials: with enough people in the study, these other factors should average out and you can find the effect of the diet per se. In reality, however, particularly in nutrition studies, it is extremely difficult to get people to stay on a diet, especially for more than half a year at a time. For this reason, such trials will underestimate the potential of a diet (on whatever outcome) for those who are self-motivated to comply with it; instead they reflect the average of a population that is likely at least a little self-motivated to start a new diet (with the caveats that inclusion criteria aren’t overly restrictive, and setting aside the effects of simply being in a clinical trial). (For a recent example of the range of weight loss responses within a trial, see Fig. 1 of Supplement 2 in the DIETFITS study.) We can only truly find out how successful a diet is if we control as much as we can about many participants’ lives, and this is prohibitively expensive and inconvenient for more than short periods of time. So, with this inherent uncertainty in nutrition science, we have to make some extrapolations about how to live. How reliable are individual observations?
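The averaging-out argument can be sketched with a toy simulation. All numbers here are invented for illustration: a hypothetical true diet effect and a noise term standing in for everything else in a person’s life. A single anecdote is dominated by the noise, while a randomized group mean recovers the true effect.

```python
import random

random.seed(0)

TRUE_EFFECT = -2.0  # kg: the diet's true average effect (made-up number)

def weight_change(diet_effect):
    # Each person's result mixes the diet's true effect with everything
    # else going on in their life (a hypothetical noise term that can
    # easily dwarf the diet itself in any one person).
    lifestyle_noise = random.gauss(0, 4.0)
    return diet_effect + lifestyle_noise

# One anecdote: a single person's result, noise and all.
anecdote = weight_change(TRUE_EFFECT)

# A randomized trial: with enough participants, the noise averages out.
n = 1000
trial_mean = sum(weight_change(TRUE_EFFECT) for _ in range(n)) / n

print(f"single anecdote: {anecdote:+.1f} kg")
print(f"trial mean (n={n}): {trial_mean:+.1f} kg (true effect {TRUE_EFFECT:+.1f} kg)")
```

The individual result could land almost anywhere, but the group mean sits close to the true effect, which is the whole point of averaging over many randomized participants.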

We know anecdotes are not objective reflections of reality, which is why we developed the scientific process. That doesn’t mean they are useless. But a number of cognitive biases cause some people to put too much stock in them. To discuss a few: perception bias causes us to misjudge how common a particular behavior is in the broader population. If you see a number of people on a social media platform talking about a particular diet, you will likely overestimate how successful that diet will be outside of that bubble. These people have self-selected into a particular diet, which further skews how we perceive its applicability to the broader population. And because social media relays the uncontrolled memories of people who have self-selected their diets, it creates an environment ripe for selectively sharing information. Inaccurate recall is a well-known problem in research, even in settings designed for consistent data collection; in social networks it will be much worse. Confirmation bias can also lead us to surround ourselves only with like-minded individuals who share information that supports our beliefs. Scientific research has defined methods to prevent this bias from influencing results, but Google “research” does not. In addition, people are more likely to start a new dietary pattern when they have a health concern than when they are healthy. This not only fuels recall bias, but makes it difficult to tell whether the diet truly helped or whether the condition would have resolved on another diet, or with no change at all (“regression to the mean”). In public spheres, interest in particular diets waxes and wanes at a pace that does not reflect the pace of research; for instance, see Google Trends for paleo vs. ketogenic diets. New areas of research carry a potential for bias even within formal scientific inquiry, and the same bias extends to internet communities with much greater impact, exaggerating the perception of how successful a diet is across a variety of applications.
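Regression to the mean is easy to see in a toy simulation (all numbers hypothetical): if a fluctuating symptom score prompts people to start a diet on an unusually bad day, the next measurement tends to look better even when the intervention does nothing at all.

```python
import random

random.seed(1)

# Toy model: a symptom score fluctuates day to day around a stable
# personal baseline; higher scores mean feeling worse.
BASELINE = 50.0

def symptom_score():
    return BASELINE + random.gauss(0, 10)

daily_scores = [symptom_score() for _ in range(10_000)]

# People tend to start a new diet on an unusually bad day...
start_scores = [s for s in daily_scores if s > 65]

# ...and the follow-up measurement, with no intervention effect at all,
# drifts back toward the baseline: regression to the mean.
followup_scores = [symptom_score() for _ in start_scores]

avg_start = sum(start_scores) / len(start_scores)
avg_followup = sum(followup_scores) / len(followup_scores)
print(f"average score when starting the diet: {avg_start:.1f}")
print(f"average score at follow-up (no real effect): {avg_followup:.1f}")
```

The apparent improvement comes entirely from selecting the worst days as the starting point, which is exactly the pattern a sincere "the diet fixed me" anecdote can produce.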

There is also fairly good evidence that health myths can fuel physical conditions: see, for instance, the history of MSG or, likely to an extent, gluten. How many anecdotes of a fringe recommendation improving a condition are actually the result of a nocebo effect that caused the condition in the first place? Finally, the use of weak evidence and argumentative techniques like strawmen or ad hominems can manifest as a rhetorical bias in online conversations that isn’t tolerated in scientific discourse. These can change our thinking if we don’t seriously reflect on how they are being misappropriated as logical argument. If everyone understood these biases, we could have a much better public, hypothesis-driven discussion about health and nutrition.

Education, however, does not completely immunize us against cognitive biases. In an annual lecture on cognitive biases and nutrition that I give to undergraduate dietetics students, I use a pre-lecture quiz to demonstrate their power. One question that I’ve found to be particularly effective is: “What percent of US households received SNAP (food stamp) assistance in 2013?” After this question, students are randomly shown one of two prompts: “Is it more or less than 5%?” or “Is it more or less than 60%?” This demonstrates the anchoring heuristic: an initial piece of information influences our subsequent estimates and judgments. Year after year, the students asked whether it is more or less than 5% guess, on average, that about 20% of households receive SNAP (~13% is correct). But when I ask whether it is more or less than 60%, the average guess is that nearly 50% of households receive SNAP! Simply by changing that starting value, I get drastically different responses. Education may help a little in some instances: there seems to be some protection against dietary recall bias among dietitians compared with non-dietitians, but both groups still substantially underreport energy intake.
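The anchoring pattern from the quiz can be mimicked with a crude toy model. The "pull" strength and the unanchored prior below are invented parameters, not estimates from my classes; the point is only that pulling guesses part-way toward whichever anchor a person happens to see is enough to split one group into two very different averages.

```python
import random

random.seed(2)

TRUE_VALUE = 13.0        # ~13% of US households received SNAP in 2013
ANCHORS = (5.0, 60.0)    # the two randomly assigned quiz prompts

def guess(anchor, pull=0.6):
    # Toy anchoring model: a guess is pulled part-way from a person's
    # unanchored estimate toward the anchor they were shown. Both the
    # prior (mean 25, sd 10) and the pull strength are made up.
    unanchored = random.gauss(25, 10)
    return (1 - pull) * unanchored + pull * anchor

groups = {a: [guess(a) for _ in range(500)] for a in ANCHORS}
for anchor, guesses in sorted(groups.items()):
    mean = sum(guesses) / len(guesses)
    print(f"anchor {anchor:>4.0f}% -> mean guess {mean:.0f}% (true value {TRUE_VALUE:.0f}%)")
```

Same question, same underlying beliefs, drastically different group means purely from the starting value shown.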

So, when are anecdotes useful? When they are considered in a systematic and rigorous way, for instance in clinical settings. After all, the plural of anecdote is in fact data, despite the popular misquotation that it is not. But to be data of value, the conditions under which they are collected and documented need to be clear and sound. We can collect anecdotes with consistent, controlled methods that make them worth thinking about in pragmatic ways. For instance, the National Weight Control Registry surveys people who report successfully losing weight and keeping it off long-term. Researchers can then carefully analyze the behavioral and psychological factors that predict success. Biases may still exist in how various diets are represented (though these can be mitigated with statistical analysis), but there is no selective promotion of predictors without hard data. Interpreting the limited scientific evidence from nutrition trials of various dietary patterns in the context of an individual patient’s overall habits, preferences, lifestyle, and so on is a difficult task. Adherence is key, and in my opinion any diet within reason that someone can learn to adopt long-term shouldn’t be discouraged. We have serious health epidemics, and research and policy are slow to adapt to changing times. Recently I attended a talk by a scientist who has held several high-level posts at various government agencies, who argued that since what we are doing policy-wise isn’t really working, we should try community-based experiments first and then build policy from the practices that prove successful. This position surprised me at first, but it really isn’t in conflict with evidence-based practice, assuming these collective anecdotes are well documented and supervised.

When thinking about anecdotes, it is also important to consider both how much a particular diet deviates from a person’s normal diet, and what type of anecdote it is. When I read about people going on extreme diets like the recently trending “all meat” diet, it does not surprise me in the least that they report dramatic initial weight losses. If you restrict yourself to a single food group, you are highly motivated to boot, and the diet perhaps aligns with an anti-authority view of healthcare, it is no shock that you could lose a lot of weight. Consider the cultural, behavioral, socioeconomic, and other external factors that may prevent long-term adherence: the more a diet deviates from normality, the more difficult adherence can become. But anecdote seekers can find refuge in communities of the like-minded, which may increase motivation, in a sense creating their own normality. Is it a good or a bad thing that people find success in communities that also foment a lot of dietary misinformation? Most scientists I know lean toward the latter, though the most successful book promoters seem to have no issue embracing a little pseudoscience if it expands and amplifies the community. Of course, to put anecdotes in context appropriately, we also have to consider what type of anecdote it is. For example, it is correct to dismiss someone claiming that a certain diet prevented them from getting cancer, because cancer is a chronic disease that may be decades in the making. But what do we do with something much more acute, like someone feeling better after cutting an additive from their diet? It is unlikely that they meticulously self-tracked their reactions before and after doing so. Nocebo and placebo effects could be playing a role. They may have read many anecdotes online, and numerous biases may have led them to conclude that the effect is real.
Because of all of these factors, it is much more likely that such anecdotes are unreliable than that they reflect some scientific unknown yet to be discovered. Yet if people feel better and it isn’t harming their health, does it matter? The tension between squashing misinformation and helping people improve their wellbeing can cause immense cognitive dissonance. What do we make of the “biohacker” culture of self-experimentation to find which dietary or lifestyle factors work best for the individual? For the host of reasons discussed above, it is unlikely that biohackers contribute to scientific understanding in any meaningful way, but if that is what keeps them motivated and interested in health, and if they experiment within reason, to what extent should their communities be left unchecked if they get others interested as well? How can we better measure those who may be harmed? Can people eating more fat or more carbs who lose weight produce reliable anecdotes when so many other things about their diets, lives, interests, and motivations may have changed at the same time? We know that some true between-individual variation, beyond measurement error, exists even in highly controlled experiments. Yet in nutrition, that variability is compounded by the complexities of teasing apart many simultaneous dietary changes, compliance, faulty memories, and strong ideologies and biases. In other words, when do anecdotes in nutrition reflect true individual physiological responses to a diet, and when do they just reflect the other baggage?

For these reasons I try to value scientific evidence and anecdotes in very different ways, because each reflects very different things. We need to carefully understand what each of them means in terms of accuracy, applicability, and potential harm. The top-down application of research and the bottom-up personal journey can both fit into the reality that we need many ideas to improve population health.