OpenAI’s Altman incident under investigation
Two investigations may soon shed light on one of the biggest mysteries in Silicon Valley: Why was Sam Altman fired from OpenAI?
To recap, the OpenAI board fired Altman in November, saying he was not “consistently candid in his communications,” but it failed to provide specifics (the big mystery). OpenAI’s staff and lead investor, Microsoft, immediately protested the ouster and successfully campaigned for Altman’s reinstatement – and for fresh faces on the nonprofit board.
The US Securities and Exchange Commission is now investigating whether OpenAI misled its investors in firing Altman. Meanwhile, the law firm WilmerHale is conducting an internal investigation of the Altman firing and will soon present its findings to the current board of directors, which commissioned the review.
Altman’s alleged deceit may have something to do with his plans to raise trillions of dollars for a chip venture, something that’s come to light in the months since the debacle. We have our ear to the ground for where the investigations are headed — and what they could mean for the giant of genAI.

2024 is the ‘Voldemort’ of election years, says Ian Bremmer
Critical elections are occurring across the globe this year, with a record number of people — roughly half the global population — set to head to the polls in dozens of countries.
During a Global Stage panel at the Munich Security Conference, Eurasia Group Founder and President Ian Bremmer described 2024 as the “Voldemort of election years.”
“Voldemort is the name that should not be spoken in the ‘Harry Potter’ series … This is the year that people have been very concerned about but have kind of hoped that they could push off,” says Bremmer. This is not just because there are so many elections occurring amid historic levels of distrust in key institutions, but also because the United States — the most powerful country in the world — is also “one of the most politically dysfunctional,” he explains.
Bremmer says the 2024 US presidential election is “maximally distrust-laden,” adding that this is “driving a level of concern that borders on panic from American allies all over the world.”
The conversation was part of the Global Stage series, produced by GZERO in partnership with Microsoft. These discussions convene heads of state, business leaders, and technology experts from around the world for critical debate about the geopolitical and technology trends shaping our world.
Watch the full conversation here: How to protect elections in the age of AI
Tech accord on AI & elections will help manage the ‘new reality,’ says Microsoft’s Brad Smith
At the Munich Security Conference, leading tech companies unveiled a new accord that committed them to combating AI-generated content that could disrupt elections.
During a Global Stage panel on the sidelines of this year’s conference, Microsoft Vice Chair and President Brad Smith said the accord would not completely solve the problem of deceptive AI content but would help “manage this new reality in a way that will make a difference and really serve all of the elections… between now and the end of the year.”
As Smith explains, the accord is designed to bring the tech industry together to preserve the “authenticity of content,” including via the creation of content credentials. The industry will also work to detect deepfakes and provide candidates with a mechanism to report them, says Smith, while also taking steps to “promote transparency and education.”
- How AI and deepfakes are being used for malicious reasons ›
- Deepfakes are ‘fraud,’ says Microsoft's Brad Smith ›
- AI explosion, elections, and wars: What to expect in 2024 ›
- AI, election integrity, and authoritarianism: Insights from Maria Ressa ›
- How AI threatens elections ›
- How to protect elections in the age of AI ›
How to protect elections in the age of AI
Half of the world’s population will have the chance to head to the polls this year in dozens of critical elections worldwide. These votes, which will shape policy and democracy for years to come, come amid light-speed development in artificial intelligence. As Eurasia Group noted in its 2024 Top Risk entitled “Ungoverned AI,” generative AI could be used by domestic and foreign actors – we’re looking at you, Russia – to impact campaigns and undermine trust in democracy.
To meet the moment, GZERO Media, on the ground at the 2024 Munich Security Conference, held a Global Stage discussion on Feb. 17 entitled “Protecting Elections in the Age of AI.” We spoke with Brad Smith, vice chair and president of Microsoft; Ian Bremmer, president and founder of Eurasia Group and GZERO Media; Fiona Hill, senior fellow for the Center on the United States and Europe at Brookings; Eva Maydell, an EU parliamentarian and a lead negotiator of the EU Chips Act and Artificial Intelligence Act; Kersti Kaljulaid, the former president of Estonia; with European correspondent Maria Tadeo moderating. The program also featured interviews with Kyriakos Mitsotakis, Greece’s prime minister, and Benedikt Franke, CEO and vice-chair of the Munich Security Conference. These thought leaders and experts discussed the implications of the rapid rise of AI amid this historic election year.
The group started by delving into what Bremmer has referred to as the “Voldemort” of election years, looking at how election interference and disinformation have evolved since 2016.
“This is the year that people have been very concerned about, but have kind of hoped that they could push off. It's not just because there are elections all over the world and trust in institutions is deteriorating, it's also because the most powerful country in the world, and it's not becoming less powerful, is also one of the most politically dysfunctional,” says Bremmer, referring to the US.
The 2024 US presidential election “is maximally distrust-laden,” says Bremmer, adding that it’s “really hard to have a free and fair election in the US that all of its population” believes is legitimate.
And the worry is that AI could complicate the landscape even further.
Hill agreed that there’s cause for concern but underscored that people should not “panic” to a point where they’re “paralyzed” and “not taking action.”
“Panic is not an option given the stakes,” says Hill, adding, “There are negative aspects of all of this, but there's also the kind of question that we have to grapple with is how when legitimate competitors or opposition movements that [are] otherwise beleaguered decide to use AI tools, that then also has an impact.”
There’s no doubt that AI can be used for nefarious purposes. Deepfakes can fool even the most discerning eye. Disinformation has already been rampant across the internet in recent election cycles and helped sow major divisions in many countries well before AI tools — far more sophisticated than your average meme — were widely available.
“With new tools and products that use generative AI, including from a company like ours, somebody can create a very realistic video, audio, or image. Just think about the different ways it can be used. Somebody can use it and they can make a video of themself, and they can make clear in the video that this is AI generated. That is one way a political candidate, even one who is in prison can speak,” says Smith, alluding to ex-Pakistani Prime Minister Imran Khan’s recent use of AI from behind bars.
Along these lines, there are many serious, valid concerns about the impact AI can have on elections and democracy more generally — particularly at a time when people are exhibiting rising levels of distrust in key institutions.
“It's very important to acknowledge a lot of the important developments that AI and emerging tech can bring to support our economic development,” says Maydell, adding, “but in the same time, especially this year, we need to be very sober about some of those threats that are in a way threatening the very fabric of our democratic societies.”
As Maydell noted, this evolving new technology can be harnessed for good and bad. Can AI be used as a tool to protect candidates and the integrity of the electoral process?
A number of major tech companies, including Microsoft, signed an accord at the Munich Security Conference on Friday to help thwart and combat AI-related election interference.
“It's all about trying to put ourselves in a position, not to solve this problem completely, I don't think that's possible, but to manage this new reality in a way that will make a difference,” says Smith. The Microsoft president says the accord brings the tech sector together to preserve the authenticity of content, including by working to detect deepfakes and providing candidates with a mechanism to report any that are created about them.
“We'll work together to promote transparency and public education. This clearly is going to require a lot of work with civil society, with others around the world to help the public be ready,” says Smith.
But is enough being done?
“It's good that both politicians and the companies and society as a whole now has a better understanding where this is all leading us and we are collectively taking actions,” says Kaljulaid, but this is just a “first step” and “next steps need to follow.”
A balance will need to be found between legislating the challenges presented by AI and giving tech companies space to collaborate, innovate and address problems on their own.
“Democracy is always in jeopardy. Every generation has to answer the call to defend it,” says Smith, adding, “Now it's our turn. It's our turn as a generation of people to say that technology always changes, but democracy is a value that we hold timeless. So let's do what it takes to defend it, to preserve and promote it.”
The livestream was part of the Global Stage series, produced by GZERO in partnership with Microsoft. These discussions convene heads of state, business leaders, and technology experts from around the world for critical debate about the geopolitical and technology trends shaping our world.
- AI's potential to impact election is cause for concern - EU's Eva Maydell ›
- AI in 2024: Will democracy be disrupted? ›
- AI, election integrity, and authoritarianism: Insights from Maria Ressa ›
- AI explosion, elections, and wars: What to expect in 2024 ›
- How AI threatens elections ›
- At the Munich Security Conference, Trump isn't the only elephant in the room ›
- Ukraine crisis one of many global threats at Munich Security Conference ›
- 4 things to know about the Munich Security Conference ›
- Munich Security Conference 2024: What to expect ›
Hard Numbers: Bye-bye Bard, Arm’s up, Robots took my job, Super Bowl ad blitz
60: The British chip designer Arm Holdings is experiencing a market surge. The company’s stock saw a 60% increase after positive financial results and a rosy outlook. The company, which licenses its chip designs, attributes increased demand to the AI boom.
4,600: Artificial intelligence has already led to 4,600 layoffs in the US, according to the firm Challenger, Gray & Christmas. And that’s a conservative estimate. Unlike with robotics breakthroughs of yore, this wave of artificial intelligence seems laser-focused on displacing white-collar workers.
7 million: AI made its way into some of this year’s Super Bowl ads — 30-second commercials that sold for about $7 million each. Etsy debuted its AI shopping assistant, Microsoft boasted its Copilot AI business tool, and Google highlighted how its Pixel 8 phone uses the technology to help blind people take photos.

Live premiere today at 12 pm ET: Can we use AI to protect elections?
Today at 12 pm ET/9 am PT/6 pm CET, watch the live premiere of our Global Stage discussion at the Munich Security Conference, "Munich 2024: Protecting Elections in the Age of AI." 2024 is truly the “Year of Elections,” with more than 75 nations heading to the polls, affecting roughly half the world’s population. But an ongoing decline of trust in institutions, plus an explosion of AI tools and deepfake technologies, could create a dangerous environment. Our panel will examine how AI can also be a way to protect consumers and candidates, helping to shore up the integrity of the electoral process. Can AI be used to quickly flag and even eliminate online lies and misinformation?
European correspondent Maria Tadeo moderates the conversation with an expert panel including:
- Ian Bremmer, President and Founder, Eurasia Group and GZERO Media
- Fiona Hill, Senior Fellow, Center on the United States and Europe, Brookings
- Kersti Kaljulaid, former President of Estonia
- Eva Maydell, Member of the European Parliament and lead negotiator, EU Chips Act and Artificial Intelligence Act
- Brad Smith, Vice Chair and President, Microsoft
- Special appearances by Kyriakos Mitsotakis, Prime Minister of Greece, and Benedikt Franke, Vice-Chairman and CEO, Munich Security Conference
More about Global Stage:
Global Stage: Global issues at the intersection of technology, politics, and society
Antitrust regulators zero in on AI
The watchful eyes of US antitrust enforcers are squarely on the artificial intelligence industry.
Last week, the US Federal Trade Commission announced it was opening an inquiry into multibillion-dollar investments by tech giants into smaller AI startups. Amazon, Google, and Microsoft made investments in Anthropic and OpenAI, and while they didn’t buy them outright, that has not stopped federal antitrust regulators from flexing some muscle.
Microsoft poured $13 billion into OpenAI, the company that ushered in the start of the AI boom with the release of its chatbot ChatGPT in November 2022, and the FTC is also probing two separate investments into Anthropic, which makes the AI-powered chatbot Claude, by Amazon ($4 billion) and Google ($2 billion).
It’s possible that in a more hands-off regulatory environment, these Silicon Valley stalwarts would have simply bought the pure-play startups outright. But doing so these days would enlarge the targets already on their chests.
The US government’s commitment to busting corporate dealmaking in the internet sector has been spotty over the past two decades. The historical rate at which the government challenges mergers is “far, far lower in the digital sector,” says Diana Moss, vice president and director of competition policy at the Progressive Policy Institute. This is research she oversaw and testified about to Congress in her previous role as the president of the American Antitrust Institute.
Federal antitrust enforcement is now led by FTC chair Lina Khan, a longtime critic of Big Tech dating back to her days as a student at Yale Law School, and the Department of Justice’s antitrust chief Jonathan Kanter, who spent his final years in private practice in part representing smaller tech firms in lawsuits against Apple and Google. In the first few years of their tag-team tenure, Khan and Kanter have sued Google for abusing its monopoly in advertising, sued Amazon for anticompetitive behavior in the online retail market, and unsuccessfully sued Meta to block its acquisition of the VR firm Within. Khan scored a big win in December when a federal court upheld the agency’s decision to block a $7.1 billion biotech merger, and several tech companies including Adobe and Figma have terminated merger plans after meeting with antitrust regulators. Still, it could take years for Khan and Kanter to notch their first major victory over Big Tech.
In a recent speech at Stanford University, Khan said the government wouldn’t turn a blind eye to anti-competitive dealmaking in the AI space, noting that the FTC “will be clear-eyed in ensuring that claims of innovation are not used as cover for lawbreaking.”
Brian Albrecht, chief economist for the International Center for Law & Economics, said there’s no question that Khan “believes there was too little scrutiny on previous tech acquisitions and wants to get ahead.” He says she’s been overeager with a “desire to bring any tech case, instead of good cases” (such as the Meta-Within case). Still, while the FTC hasn’t yet brought a case against these AI investments, Albrecht says it “has a flavor of ‘we need to do something, and this is something.’”
The FTC inquiry is just that — merely an inquiry. The agency hasn’t yet launched a formal investigation into any of these deals, which would be a necessary step before it decides whether to bring lawsuits. In fact, recent reports indicate that the FTC and DOJ both want to investigate Microsoft’s stake in OpenAI but can’t agree over who’ll get to do it.
But it’s a warning shot, a declaration of intent, a resolution that the investment-not-acquisition strategy — if that’s the strategy after all — will not go unnoticed.
Investments, not acquisitions
Antitrust regulators have broad authority over partial-ownership investments, not just full-on corporate takeovers. That’s important, Moss says, because her research shows that the percentage of investments in AI over the past three decades is about three times higher than that of acquisitions involving AI. “That tells you a lot about how companies are approaching AI,” she says.
Microsoft’s arrangement with OpenAI is somewhat stranger than the others because while it’s invested an astronomical sum in the ChatGPT maker, OpenAI is technically run by a nonprofit. Until recently, Microsoft didn’t even occupy a seat on that nonprofit’s board! But when the board dismissed OpenAI CEO Sam Altman in November, Microsoft’s power was hard to ignore. Microsoft promised to hire all of the 700 employees threatening to leave OpenAI over the ouster, successfully lobbied for Altman’s reinstallation, and won a (nonvoting) board seat in the aftermath.
“The arrangement does not get some sort of special immunity because it isn't a standard investment,” Albrecht says. “That being said, investments, joint ventures, strategic partnerships have often (and should) received more leniency from the agencies.”
And even though OpenAI is run by a nonprofit, that doesn’t obviate the need for antitrust enforcement. “The exercise of market power affects prices, quality, and innovation similarly in the case of for-profit and nonprofit organizations,” Moss says, noting that many universities and hospitals have nonprofit status and have received antitrust scrutiny.
The UK’s Competition and Markets Authority is already investigating Microsoft’s investment in OpenAI, and Microsoft has defended itself by pointing to the odd nature of its investment. Instead of buying equity in OpenAI, Microsoft receives half of the startup’s revenues until the $13 billion investment is repaid, according to the Los Angeles Times.
A new era for antitrust
In the past few decades, Silicon Valley technology companies have become the most valuable firms in the world. Seven of the top nine most valuable firms in the world are tech companies with AI investments (Amazon, Apple, Google, Meta, Microsoft) or chip manufacturers (NVIDIA and TSMC), all of which have massive direct or indirect interests in the success of AI.
Many critics of these Big Tech firms say they have grown bloated and unruly without proper antitrust enforcement to keep them from gobbling up competitors. That seems to be the view of Khan and Kanter, too — plus, many overseas antitrust regulators who could make life uncomfortable for any of these global companies.
And these companies know that.
It’s hard to know whether in another time, facing different scrutiny, Microsoft might have tried to buy OpenAI. Or if Amazon or Google would’ve made an offer to buy Anthropic.
“The current state is that any Big Tech company has to worry about the FTC for any major investment or business decision they make,” Albrecht says. “That makes investments relatively more attractive than acquisitions.”
But this inquiry might reveal that the gap, he says, isn’t as big as the companies in question — some of the biggest AI firms in the world — might wish.
Ian Bremmer: On AI regulation, governments must step up to protect our social fabric
Seven leading AI companies, including Google, Meta, and Microsoft, committed to managing risks posed by the technology after holding discussions with the US government last year — a landmark move that Ian Bremmer sees as a win.
Speaking in a GZERO Global Stage discussion from the 2024 World Economic Forum in Davos, Switzerland, Eurasia Group and GZERO Media President Ian Bremmer calls tech firms' ongoing conversations with regulators on AI guardrails a "win" but points out that a big challenge with regulation will be that there is no one-size-fits-all strategy, as AI impacts different sectors differently. For example, ensuring AI can’t be used to make a weapon is important, “but I want to test these things on societies and on children before we roll them out,” he says.
“We would've benefited from that with social media,” he added.
The conversation was part of the Global Stage series, produced by GZERO in partnership with Microsoft. These discussions convene heads of state, business leaders, and technology experts from around the world for critical debate about the geopolitical and technology trends shaping our world.
Watch the full conversation here: How is the world tackling AI, Davos' hottest topic?
- The geopolitics of AI ›
- Stop AI disinformation with laws & lawyers: Ian Bremmer & Maria Ressa ›
- Ian Bremmer: How AI may destroy democracy ›
- Podcast: Artificial intelligence new rules: Ian Bremmer and Mustafa Suleyman explain the AI power paradox ›
- EU AI regulation efforts hit a snag ›
- AI and Canada's proposed Online Harms Act ›