OK, Doomer

Tesla and SpaceX's CEO Elon Musk pauses during an in-conversation event with British Prime Minister Rishi Sunak in London, Britain, on Nov. 2, 2023.

Kirsty Wigglesworth/Pool via REUTERS

British PM Rishi Sunak hosted several world leaders, including UN Secretary-General António Guterres and US Vice President Kamala Harris, at last week’s AI Safety Summit. But the biggest celebrity draw was his sit-down interview with Elon Musk, one of the world’s richest men and the controlling force behind Tesla, SpaceX, and X, formerly known as Twitter.

Musk has long played it both ways on AI — he frequently warns of its “civilizational risks” while investing in the technology himself. Musk’s new AI company, xAI, notably released its first model to a group of testers this past weekend. (We don’t know much about xAI’s Grok yet, but Musk boasts that it has access to Twitter data and will “answer spicy questions that are rejected by most other AI systems.”)

Musk told Sunak he thinks AI will primarily be a “force for good” while at the same time warning that AI could be “the most destructive force in history.”

There’s a central tension in tech regulation: between guarding against doomsday scenarios, like the development of an all-powerful AI or one that triggers nuclear destruction, and addressing the clear-and-present challenges confronting people now, such as algorithmic discrimination in hiring. Regulators can, of course, try to tackle both, but some critics complain that too much time and energy are being spent catering to long-term threats while the dangers right in front of our faces go ignored.

In fact, one of the focal points of the Bletchley Declaration, last week’s agreement brokered by Sunak and signed by 28 countries including the US and China, is the potential for “catastrophic harm” caused by AI. Even US President Joe Biden — whose executive order did more to tackle the immediate challenges of AI than the UK-brokered declaration did — said he became much more concerned about AI after watching the latest “Mission: Impossible” film, which features a murderous AI.

The thing is, the two sets of concerns – coming catastrophe vs. today’s problems – are not mutually exclusive. MIT professor Max Tegmark recently argued that the people focused on looming catastrophe need allies among those tackling today’s harms: “Those who are concerned about existential risks, loss of control, things like that, realize that to do something about it, they have to support those who are warning about immediate harms … to get them as allies to start putting safety standards in place.”
