By Emma Partridge
Dietary tracking applications (apps) have become quite sophisticated over the years, moving from manual entry of foods and portions to barcode scanners that identify brand-name products and return nutritional content based on an entered portion. However refined these apps have become, their most pressing issue may lie not in the accuracy of the nutritional content they return, but in the accuracy of the user's portion estimation. An analysis of misreporting on National Health and Nutrition Examination Surveys (NHANES) between 2003 and 2012, published in the British Journal of Nutrition, found that under-reporting of energy intake was most prevalent among US adults 20 years or older; women and overweight or obese subjects were especially likely to under-report.1

The reality that under-reporting, conscious or subconscious, can occur in any subjective food recording process raises the question of whether these apps actually succeed at dietary tracking, especially for overweight or obese people tracking their diets in an attempt to lose weight. In a randomized controlled trial conducted at the Duke University Medical Center and published in Obesity, researchers found that overweight and obese young adults (18-35 years) were no more likely to lose weight using a smartphone app than a control group that received no weight loss or health intervention.2 If we can reasonably determine that smartphone apps where users enter their food intake or receive social support don't help the majority of overweight or obese people lose the weight they're aiming to, how can this be improved?

The latest technologies coming into play are image-assisted apps that allow users to submit photos of their meals and then receive nutritional content based on the food and the portion size. Apps such as MealLogger allow the user to submit a photo of their meal, choose their portion size, and post the photo for others to view.
While this form of social photo-sharing may nudge users toward acceptable portioning through social pressure, the user's ability to choose their own portion size still introduces under-reporting bias. Other apps rely on objective, but far broader, methods of extrapolating nutritional content from a food photo. Apps like MealSnap allow users to submit photos of their meal and have the MealSnap system "auto-magically detect the nutritional breakdown" of the meal, according to their Microsoft.com page. While this calorie estimate is likely rougher than one where users choose their portion, it is also objective and prevents under-reporting bias. In short, apps with more user input may fall victim to inaccuracies from under-reporting, while apps that avoid biased reporting may sacrifice accuracy for objectivity. To resolve this tension, future technologies must continue to pursue both improved accuracy and usability. Likely, these technologies will move toward advanced imaging, finding ways to measure the real food itself rather than relying on the user's input. The future of image-assisted food technology will determine how close inventors and researchers can get to exact measurement of food and portion while maintaining accurate extraction of nutritional content. I, for one, am excited to see where it leads.