Courting AI opportunities (and hallucinations)

A view of the US Supreme Court, in Washington, D.C., on Monday, Jan. 8, 2024.

Graeme Sloan/Sipa USA via Reuters

The AI boom, Roberts said, brings both opportunities and concerns, and he noted that legal research may soon be “unimaginable” without the assistance of AI. “AI obviously has great potential to dramatically increase access to key information for lawyers and non-lawyers alike,” he wrote.

But he also urged “humility,” noting how “one of AI’s prominent applications made headlines this year for a shortcoming known as ‘hallucination,’ which caused the lawyers using the application to submit briefs with citations to nonexistent cases. (Always a bad idea.)” Indeed, AI chatbots tend to make things up, or hallucinate, a problem that has recurred ever since the debut of ChatGPT.

So far, US federal courts have taken a decentralized approach, with 14 of 196 publishing their own guidance on how AI tools can and cannot be used in litigation.

Meanwhile, across the pond, the United Kingdom recently took the first step toward allowing AI as an assistive tool in legal opinion writing. “Judges do not need to shun the careful use of AI,” wrote Geoffrey Vos, the Master of the Rolls and one of Britain’s most senior judges. “But they must ensure that they protect confidence and take full personal responsibility for everything they produce.”

So British courts will begin allowing AI to be used in legal writing, but not in legal research, because of the aforementioned tendency to hallucinate.

Will AI judges take over? Roberts made an eloquent case against this and an impassioned defense of the humanity central to being an effective judge.

“Machines cannot fully replace key actors in court,” he wrote. “Judges, for example, measure the sincerity of a defendant’s allocution at sentencing. Nuance matters: Much can turn on a shaking hand, a quivering voice, a change of inflection, a bead of sweat, a moment’s hesitation, a fleeting break in eye contact. And most people still trust humans more than machines to perceive and draw the right inferences from these clues.”
