Generative AI models have been known to hallucinate, or make things up and state them as facts (in other words, lie). But new research suggests that despite that shortcoming, AI could be a key tool for determining whether someone – a human – is telling the truth.

An economist at the University of Würzburg in Germany found that an algorithm trained with Google’s BERT language model was better at detecting lies than human evaluators. AI might not be able to power a faultless polygraph – a notoriously unreliable device – but it may be able to sift fact from fiction at scale, for example by screening large datasets for disinformation on the internet.

Maybe the next US presidential debate could use an AI fact-checker to keep the candidates honest.
