OK, Doomer

Tesla and SpaceX CEO Elon Musk pauses during an in-conversation event with British Prime Minister Rishi Sunak in London, Britain, on Nov. 2, 2023.
Kirsty Wigglesworth/Pool via REUTERS

British PM Rishi Sunak hosted several world leaders, including UN Secretary-General António Guterres and US Vice President Kamala Harris, at last week’s AI Safety Summit. But the biggest celebrity draw was his sit-down interview with billionaire Elon Musk — among the world’s richest men and the controlling force behind Tesla, SpaceX, and X, formerly known as Twitter.

Musk has long played it both ways on AI — he frequently warns of its “civilizational risks” while investing in the technology himself. Musk’s new AI company, xAI, notably released its first model to a group of testers this past weekend. (We don’t know much about xAI’s Grok yet, but Musk boasts that it has access to Twitter data and will “answer spicy questions that are rejected by most other AI systems.”)

Musk told Sunak he thinks AI will primarily be a “force for good” while at the same time warning that AI could be “the most destructive force in history.”

There’s a central tension in tech regulation between guarding against doomsday scenarios, like the development of an all-powerful AI or one that causes nuclear destruction, and addressing the clear-and-present challenges confronting people now, such as algorithmic discrimination in hiring. Regulators can, of course, try to tackle both, but some critics complain that too much time and energy are being spent on long-term threats while the dangers right in front of our faces go ignored.

In fact, one of the focal points of the Bletchley Declaration, last week’s agreement brokered by Sunak and signed by 28 countries including the US and China, is the potential for “catastrophic harm” caused by AI. Even US President Joe Biden — whose executive order did more to tackle the immediate challenges of AI than the UK-brokered declaration did — said he became much more concerned about AI after watching the latest “Mission: Impossible” film, which features a murderous AI.

The thing is, the two sets of concerns, coming catastrophe vs. today’s problems, are not mutually exclusive. MIT professor Max Tegmark recently argued that the people focused on looming catastrophe should make common cause with those focused on immediate harms: “Those who are concerned about existential risks, loss of control, things like that, realize that to do something about it, they have to support those who are warning about immediate harms … to get them as allies to start putting safety standards in place.”
