LLMs Struggle with Culture-Specific Health Misinformation on YouTube
A study reveals that LLMs trained on Western data fail to detect health misinformation in non-Western contexts, such as cow urine remedies on Indian YouTube. This highlights a critical gap in AI's ability to handle culturally nuanced content.

A new study published on arXiv examines the challenges Large Language Models (LLMs) face in detecting health misinformation on social media, particularly in non-Western contexts. Researchers analyzed 30 multilingual YouTube transcripts promoting gomutra (cow urine) as a remedy for constipation in India. They found that these videos blend sacred traditional language with pseudo-scientific claims, producing a rhetorical style that LLMs trained primarily on Western data struggle to interpret accurately.
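To make the detection task concrete, below is a minimal sketch of how a single transcript might be screened with an off-the-shelf multilingual model. This is not the study's pipeline: the model choice, candidate labels, and example sentence are illustrative assumptions. The paper's point is precisely that classifiers of this kind, trained mostly on Western data, tend to mislabel content that mixes sacred and pseudo-scientific framing.

```python
# Minimal sketch (not the authors' code): zero-shot labeling of a transcript
# snippet with a multilingual NLI model via Hugging Face's
# zero-shot-classification pipeline. Model, labels, and text are assumptions.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="joeddav/xlm-roberta-large-xnli",  # multilingual NLI model (illustrative choice)
)

# Hypothetical transcript excerpt blending devotional and scientific-sounding language.
transcript = (
    "Drinking gomutra every morning purifies the body and cures constipation, "
    "as both our ancient scriptures and modern science confirm."
)

candidate_labels = [
    "health misinformation",
    "accurate health advice",
    "religious or cultural practice",
]

result = classifier(transcript, candidate_labels=candidate_labels, multi_label=True)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")
```

In a setup like this, the "religious or cultural practice" label can score highly alongside or above "health misinformation" for such mixed-register text, which illustrates the ambiguity the researchers describe rather than resolving it.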
The study underscores a significant limitation of current AI systems: they struggle to detect culturally specific misinformation. While LLMs excel at debunking Western-style pseudo-science, they fail to recognize the blend of religious and pseudo-scientific language used in many Global South health claims. This gap could have serious public health implications, as misinformation in these regions often goes unchecked by AI moderation tools.
Moving forward, the researchers call for more diverse training data for LLMs to better handle culturally nuanced content. They also suggest developing region-specific AI models that understand local languages and cultural contexts. Without these improvements, AI's role in combating health misinformation will remain limited, particularly in regions where traditional and modern beliefs intersect.