
ChatGPT, Claude, and Gemini Impress in the WSJ March Madness Bracket Challenge

3/26/2026
Claude

The Wall Street Journal quietly entered matchup predictions from three leading large language models—ChatGPT, Claude, and Gemini—into its NCAA "March Madness" bracket pool, competing alongside a large field of human participants. The report notes that these "AI contestants" held no early edge, but as the tournament progressed they picked more upsets and avoided following the crowd. Their performance gradually surpassed that of many human entrants, with some brackets even regarded as potential pool winners.

Mechanically, bracket contests combine data with randomness, while human participants are often swayed by team loyalties, intuition, and emotion—factors that push their picks toward the same popular choices. By contrast, an AI without a "home-team bias" tends to make more differentiated picks under the same informational constraints, which can be an advantage under certain pool rules. That said, these results are also a reminder: an AI leading the pack does not necessarily mean it "understands basketball better"; it may instead reflect how prone humans are to systematic bias in uncertain prediction tasks.
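The structural point above—that differentiated picks can beat the crowd even with lower per-game accuracy—can be sketched with a toy Monte Carlo simulation. This is an illustrative model of my own, not the WSJ pool's actual scoring: a crowd of entrants who all pick the favorites, and one contrarian who flips a handful of picks to underdogs. The contrarian's expected score is lower, but because ties with the identical crowd split many ways, the contrarian's chance of winning the pool outright can exceed the fair share 1/(n+1).

```python
import random

def simulate(trials=20000, n_crowd=50, n_games=20, p_fav=0.7, flips=8, seed=0):
    """Toy bracket pool: n_crowd entrants all pick favorites; one
    contrarian picks the underdog in `flips` games. Returns the
    contrarian's estimated probability of winning the pool."""
    rng = random.Random(seed)
    wins = 0.0
    for _ in range(trials):
        # True means the favorite won that game
        outcomes = [rng.random() < p_fav for _ in range(n_games)]
        crowd_score = sum(outcomes)  # crowd picks every favorite
        # contrarian agrees except on the first `flips` games
        contrarian_score = (sum(outcomes[flips:])
                            + sum(not o for o in outcomes[:flips]))
        if contrarian_score > crowd_score:
            wins += 1.0
        elif contrarian_score == crowd_score:
            wins += 1.0 / (n_crowd + 1)  # a tie splits among all co-leaders
    return wins / trials

# With these assumed parameters the contrarian wins well above the
# fair share of 1/51 ~= 0.020, despite a lower expected score.
print(round(simulate(), 3))
```

The numbers here (70% favorite win rate, 50-entrant crowd, 8 flipped picks) are arbitrary assumptions; the qualitative effect—differentiation paying off under winner-take-all pool rules—is the point, and it disappears as the crowd's picks become less correlated.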

Commentary: As large models are applied to more prediction and decision-making scenarios, key questions will include how to distinguish “strategic advantage” from “true capability,” and how to establish interpretable, reproducible methods for comparison in competitions or evaluations.
