AI Advice Shows Cultural Bias, Study Finds
A new study reveals that leading AI models give culturally biased advice, aligning more with individualistic than collectivist values. The research highlights significant discrepancies between AI responses and real-world cultural norms.

A recent study published on arXiv found that leading AI models, including Claude Sonnet 4.5, GPT-5.4, and Gemini 2.5 Flash, exhibit cultural biases in their advice. Researchers presented the models with ten real-life personal dilemmas, framed for users from ten countries across five continents and posed in seven languages, yielding 840 scored responses. The AI advice was then compared against data from the World Values Survey Wave 7, which measures what people in each country actually believe.
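The comparison step described above can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' code: it assumes each AI response is scored on a 0-to-1 individualism scale and averaged per country, then compared with a survey-derived score for that country. All country names and numbers are invented placeholders, not data from the paper.

```python
# Hypothetical sketch of the study's comparison step (not the authors' method).
# Assumption: each response is scored on a 0-1 individualism scale, then the
# per-country mean of model scores is compared with a WVS-derived score.
# Every number below is an illustrative placeholder.

from statistics import mean

# Illustrative survey-derived individualism scores per country (made up).
wvs_scores = {"US": 0.80, "JP": 0.35, "NG": 0.30}

# Illustrative per-country mean scores of one model's advice (made up).
model_scores = {"US": 0.85, "JP": 0.70, "NG": 0.65}

def mean_bias(model, survey):
    """Average signed gap: a positive value means the model's advice skews
    more individualistic than the country's surveyed population."""
    return mean(model[c] - survey[c] for c in survey)

bias = mean_bias(model_scores, wvs_scores)
print(f"mean individualism bias: {bias:+.2f}")  # → +0.25 with these placeholders
```

A signed gap like this captures the direction of the bias the study reports: the models score consistently above the survey baseline in collectivist countries.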
The study found that all three AI models aligned more closely with individualistic values, which prioritize personal freedom and self-expression, than with collectivist values, which emphasize community and social harmony. This bias held across personal dilemmas involving career choices, marriage, and family conflicts. The gap between AI responses and real-world cultural norms raises concerns about the universality of AI advice and its applicability across different cultural contexts.
The implications of this study are significant for the global deployment of AI systems. As AI assistants become more integrated into daily life, their cultural biases could lead to misunderstandings and inappropriate advice for users from collectivist cultures. Future research and development efforts must focus on mitigating these biases to ensure that AI systems can provide culturally sensitive and contextually appropriate advice. The study also calls for greater transparency in how AI models are trained and evaluated to better reflect the diverse values and beliefs of users worldwide.