AI Moral Judgments: Does 'Thinking Mode' Change Decisions?
A new study found that AI models' moral judgments stay largely the same whether they respond instantly or take time to 'think'. The differences that do exist are concentrated in particularly tricky scenarios, which suggests that AI reasoning modes may not drastically alter ethical decisions.

Researchers tested five advanced AI models (including GPT 5.5 and Claude Sonnet 4.6) to see whether their moral judgments change when they use a 'thinking mode' versus responding instantly. Across 100 scenarios, the models agreed with one another about 78-79% of the time, and that rate was essentially the same in both modes. However, on 21 particularly controversial cases, their agreement dropped to near chance.
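The agreement figure here is a pairwise one: for each pair of models, count the scenarios where they reach the same verdict, then average across pairs. The study's actual data and metric are not reproduced here, so the sketch below uses made-up verdict labels and a hypothetical `pairwise_agreement` helper purely to illustrate the idea:

```python
from itertools import combinations

# Hypothetical verdicts: model name -> one judgment per scenario.
# Illustrative toy data only, not the study's actual results.
verdicts = {
    "model_a": ["yes", "no", "yes", "yes"],
    "model_b": ["yes", "no", "no", "yes"],
    "model_c": ["yes", "yes", "no", "yes"],
}

def pairwise_agreement(verdicts):
    """Mean fraction of scenarios on which each pair of models agrees."""
    rates = []
    for a, b in combinations(verdicts, 2):
        same = sum(x == y for x, y in zip(verdicts[a], verdicts[b]))
        rates.append(same / len(verdicts[a]))
    return sum(rates) / len(rates)

print(round(pairwise_agreement(verdicts), 2))  # mean agreement across the 3 pairs
```

Running this once per response mode (instant vs. thinking) and comparing the two averages is the kind of check that would show whether 'thinking' shifts the numbers.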
This matters because it shows that while AI models might seem more thoughtful in 'thinking mode', their core moral judgments don't change much. For everyday users, this means you can expect similar ethical responses whether an AI answers instantly or takes time to 'ponder'. The study suggests these reasoning modes may be more about presentation than about substantive change.
If you use AI for advice or decision-making, this research indicates that the mode you choose (instant vs. thinking) won't drastically alter the moral content of the response. However, it's worth noting that in truly gray-area questions, different models might still disagree significantly. Keep an eye out for updates as researchers continue exploring how these modes affect AI behavior.