Can we trust AI to tell the truth?
Is it possible to create artificial intelligence that doesn't lie?
On GZERO World with Ian Bremmer, cognitive scientist, psychologist, and author Gary Marcus sat down to unpack some of the major recent advances, and limitations, in the field of generative AI. Although large language model tools like ChatGPT can do impressive things, like writing movie scripts or college essays in a matter of seconds, there's still a lot that artificial intelligence can't do: namely, it has a pretty hard time telling the truth.
So how close are we to creating AI that doesn't hallucinate? According to Marcus, that reality is still pretty far away. So much money and research has gone into the current AI bonanza that Marcus thinks it will be difficult for developers to stop and switch course unless there's a strong financial incentive, like chat-based search, to do it. He also believes computer scientists shouldn't be so quick to dismiss what's known as "good old-fashioned AI": systems that reason over symbols and logic using a limited set of facts and don't make things up the way neural networks do.
Until there is a real breakthrough or new synthesis in the field, Marcus thinks we're a long way from truthful AI, and incremental updates to the current large language models will continue to generate false information. "I will go on the record now in saying GPT-5 will [continue to hallucinate]," Marcus says. "If it's just a bigger version trained on more data, it will continue to hallucinate. And the same with GPT-6."
Watch the full interview on GZERO World, in a new episode premiering on September 8, 2023 on US public television.