Can we align AI with society’s best interests? Tristan Harris, co-founder of the Center for Humane Technology, joins Ian Bremmer on the GZERO World Podcast to discuss the risks to humanity and society as tech firms ignore safety and prioritize speed in the race to build more and more powerful AI models. AI is the most powerful technology humanity has ever built. It can cure disease, reinvent education, unlock scientific discovery. But there is a danger to rolling out new technologies en masse to society without understanding the possible risks. What if the way we deploy artificial intelligence, Harris argues, isn’t inevitable, but a choice?
The tradeoff between AI’s risks and potential rewards is similar to the deployment of social media. It began as a tool to connect people and, in many ways, it did. But it also became an engine for polarization, disinformation, and mass surveillance. That wasn’t inevitable. It was the product of choices—choices made by a small handful of companies moving fast and breaking things. Will AI follow the same path? Is there a path forward where innovation aligns with humanity?
“If we deploy AI recklessly in a way that causes AI psychosis or kids' suicides or degrades mental health or causes every kid to outsource their homework,” Harris warns, “it's very obvious the long-term trajectory of we are going to have a weaker civilization.”
Subscribe to the GZERO World Podcast on Apple Podcasts, Spotify, Stitcher, or your preferred podcast platform to receive new episodes as soon as they're published.
