Registered dietitians work with patients and clients to support them with a wide range of dietary conditions, from food allergies, intolerances and eating disorders to vitamin deficiencies and weight loss. Unpicking myths and misinformation shared online is a constant battle, but many have been questioning whether AI has become a new threat to dietary health.
In an investigation by the British Dietetic Association (BDA), AI was found to offer some good, evidence-based insights into food and nutrition. However, the association warns it could confuse the public and it shouldn’t be relied upon for individualised care, especially by anyone with any kind of medical condition.
Caroline Bovey, Registered Dietitian and Chair of the BDA, which represents over 11,400 members of the dietetic workforce, said, “We all look for quick answers when we have a health complaint, and by promising us digestible, well-sourced and evidenced information, AI could be seen as a great solution, without the need to go to a GP and wait for a referral to a dietitian, or to pay to go privately. We wanted to put this to the test and see whether some of the common discussions with our patients could be adequately and safely answered by AI.”
The BDA decided to investigate whether AI offered advice that was not only accurate but also sensible and easy to understand. It asked ChatGPT, Copilot and Alexa general questions about food and nutrition and associated conditions that dietitians regularly get asked about. These included whether someone who felt tired had an iron deficiency, how to lose weight fast, foods to eat during the menopause and meal plans for specific dietary needs.
Overall, the results were not largely incorrect, and occasionally drew on trusted sources such as the NHS, but they were often confusing when they pieced together different advice, for example when asked about being underweight while having high cholesterol. Several answers also failed to explain how to get properly assessed for certain conditions, in particular when asked about iron deficiency.
Dr Duane Mellor, Registered Dietitian and spokesperson for the BDA, was involved in assessing the evidence within the content that came back from ChatGPT, Copilot and Alexa. He said, “Often the answers gave a mix of general messages with bits of scientific language. This is because these models can piece together a number of bits of information without critically considering it, and this can lead to confusing, or even at times potentially dangerous, information.
“These are two things a dietitian would not do when working with someone to offer dietary and lifestyle advice. They would ask questions to explore an individual’s social and health situation, as well as their food preferences before developing a nutrition plan together based on the best available evidence to meet that person’s needs.”
As is well documented, the quality of AI output depends largely on the information put in, but getting the prompt right takes repeated trial and error, and most people wouldn’t know how in-depth they would need to be to get the right information for them. This is where human interaction with a registered dietitian is critical, according to the BDA. Dietitians can tease out the information, asking the right questions to ensure their patients get the care they need.
All but Alexa included a disclaimer about seeking the support of a dietitian or paediatrician, albeit at the end of the content provided.
Mellor added, “There is a tendency for all AI large language models to offer advice very quickly without initially either offering disclaimers or encouraging people to seek professional help including seeing a dietitian - although in many cases that came at the end of the information.”
The most potentially dangerous information came in response to the question ‘I think I’m allergic to dairy, what should I do?’ Mellor noted that Copilot jumped straight from symptoms to avoidance of dairy foods, only mentioning alternatives at the end. Key things a dietitian would look at are age and overall dietary intake, to address nutritional adequacy and symptoms. Guidance was also offered from both US and UK sources, which can have different guidelines and food-labelling rules.
On assessing this, Mellor said, “AI models cannot currently test the logic of how information is combined and this can result in what is known as ‘hallucinations’ within AI. In the example above, AI combines information on dairy allergy with lactose intolerance – which is not an allergy, but a lack of an enzyme which digests the sugar in milk. Combining and potentially confusing the two could offer unsafe advice.
“In this case the wrong tests were suggested too. Depending on the symptoms and nature of the suspected allergy a history, blood tests and skin prick test might be used instead of a hydrogen breath test.”
Caroline Bovey concludes, “This investigation demonstrates on a small scale that, with carefully selected questions, some evidence-based information is accessible through AI, but to support you through a dietary health condition, as we anticipated, nothing compares to seeking the support of a registered dietitian. In this digital age, we must not forget that human interaction and individualised care for patients and clients is critical. My advice would be to use AI with caution and only once you have seen a dietitian and got the ongoing care and support you need.”
The investigation was conducted on a small scale, putting 10 key dietary questions that dietitians are regularly asked to ChatGPT, Copilot and Alexa.