Hard Numbers: Sutskever’s easy billion, OpenAI gets expensive, Getting AI out of the immigration system, Voice actors strike a deal
1 billion: OpenAI cofounder Ilya Sutskever has raised $1 billion for his new AI startup Safe Superintelligence, which has promised to deliver a highly advanced AI model without the distraction of short- or medium-term product launches. The company only has 10 employees so far, but it has already raised that sum from eager investors, including Andreessen Horowitz and Sequoia Capital.
2,000: OpenAI is reportedly considering a $2,000 per month subscription for its forthcoming large language models, Strawberry and Orion. Its current top model, GPT-4, is free for limited usage and $20 per month for increased usage and extra features. It’s still unclear what the new models will cost when they’re released later this fall — or, if they’re costly, whether consumers will be willing to spend that much.
141: A group of 141 organizations, including the Electronic Frontier Foundation, sent a letter to the Department of Homeland Security urging it to stop using AI tools in the immigration system and to comply with federal rules around protecting civil rights and avoiding algorithmic errors. The groups requested transparency around how the department uses AI to make immigration and asylum decisions, as well as around its biometric surveillance of migrants at the border.
80: Voice actors reached an agreement with the producers of 80 video games last week, after striking for two months. SAG-AFTRA, the actors’ union, won new protections against “exploitative uses” of AI. That said, it’s still striking against most of the larger video game studios, including Electronic Arts, as well as the game studios of Walt Disney and Warner Bros.

OpenAI’s getting richer
OpenAI is in talks for a new funding round that could value the company at over $100 billion. That would cement it as the fourth-most-valuable privately held company in the world, behind only ByteDance ($220 billion), Ant Group ($150 billion), and SpaceX ($125 billion).
Thrive Capital is leading the venture round, but Microsoft is expected to add to its existing $13 billion stake in the company. Apple and Nvidia are also discussing investing in the ChatGPT maker. Nvidia supplies the chips OpenAI uses to train and run its models, while Apple is integrating ChatGPT into its forthcoming Apple Intelligence system, which will feature on new iPhones.
OpenAI was last valued at around $80 billion in 2023 following a funding round that allowed employees to sell their existing shares. It’s unclear whether the company is currently considering an initial public offering, but if it needs tons of capital for the very costly process of developing increasingly powerful AI models, that might be a necessary step in the not-so-distant future.

Elon Musk refiles his OpenAI lawsuit
Billionaire Elon Musk is reviving a lawsuit in California federal court against OpenAI, the company he co-founded, and its CEO, Sam Altman. The lawsuit accuses OpenAI of fraud and breach of contract, among other allegations. The lawsuit casts Musk, one of the world’s richest people, as a victim of a complex scam whereby he agreed to donate $44 million of his own money, after which OpenAI, he claims, violated its non-profit mission. Musk left OpenAI in 2018 after attempting to take over the company.
In June, Musk withdrew this suit against Altman for unknown reasons, but the new filing includes federal racketeering allegations against Altman and co-founder Greg Brockman. OpenAI said that Musk has understood OpenAI’s mission and direction from the beginning, and that his donation was not coerced.
Musk now runs xAI, a company he hopes will rival OpenAI, and has AI interests with his automotive company Tesla. So, some may question whether Musk truly feels wronged or just wants to stick it to his former colleagues.
Are Microsoft and OpenAI friends or foes?
The move comes amid two notable currents: First, OpenAI recently announced a search engine product called SearchGPT, though it’s still a prototype. That product genuinely could compete with the Bing search engine. But more importantly, antitrust regulators are sniffing around the relationship between the two companies, looking for anticompetitive behavior. Both the United Kingdom’s Competition and Markets Authority and the US Federal Trade Commission are investigating the two companies — so much so that Microsoft recently gave up its OpenAI board seat.
So, are the two AI giants friends or foes? Well, it’s complicated.
What Sam Altman wants from Washington
Altman’s argument is not new, but his policy prescriptions are more detailed than before. In addition to the general undertone that Washington should trust the AI industry to regulate itself, the OpenAI chief calls for improved cybersecurity measures, investment in infrastructure, and new models for global AI governance. He wants additional security and funding for data centers, for instance, and says doing this will create jobs around the country. He also urges the use of additional export controls and foreign investment rules to keep the AI industry under US control, and outlines potential global governance structures to oversee the development of AI.
We’ve heard Altman’s call for self-regulation and industry-friendly policies before — he has become something of a chief lobbyist for the AI industry over the past two years. His framing of AI development as a national security imperative echoes a familiar strategy used by emerging tech sectors to garner government support and funding.
Scott Bade, a senior geotechnology analyst at Eurasia Group, says Altman wants to “position the AI sector as a national champion. Every emerging tech sector is doing this:
‘We’re essential to the future of US power [and] competitiveness [and] innovation so therefore [the US government] should subsidize us.’”
Moreover, Altman’s op-ed has notable omissions. AI researcher Melanie Mitchell, a professor at the Santa Fe Institute, points out on X that there’s no mention of the negative effects on the climate, given that AI requires immense amounts of electricity. She also highlights a crucial irony in Altman’s insistence on safeguarding intellectual property: “He’s worrying about hackers stealing AI training data from AI companies like OpenAI, not about AI companies like OpenAI stealing training data from the people who created it!”
The timing of Altman’s op-ed is also intriguing. It comes as the US political landscape is shifting, with the upcoming presidential election no longer seen as a sure win for Republicans. The race between Kamala Harris and Donald Trump is now considered a toss-up, according to the latest polling since Harris entered the race a week and a half ago. This changing dynamic may explain why Altman is putting forward more concrete policy proposals now rather than counting on a more laissez-faire approach to come into power in January.
Harris is comfortable both taking on Silicon Valley and advocating for US AI policy on a global stage, as we wrote in last week’s edition. Altman will want to make sure his voice — perhaps the loudest industry voice — gets heard no matter who is elected in November.

OpenAI’s little new model
OpenAI is going mini. On July 18, the company behind ChatGPT announced GPT-4o mini, its latest model. It’s meant to be a cheaper, faster, and less energy-intensive version of the technology. The smaller model is marketed to developers who rely on OpenAI’s language models and want to save money.
The move also comes as AI companies are trying to cut their own costs, reduce their energy dependence, and answer calls from critics and regulators to lower their energy burden. Training and running AI often requires access to electricity-guzzling data centers, which in turn require copious amounts of water to keep them from overheating.
Moving forward, look for AI companies to offer a multitude of options to cost-conscious and energy-conscious users.
To see where data centers have cropped up in North America, check out our latest Graphic Truth here.

OpenAI blocks access in China
On Tuesday, OpenAI blocked API access to its large language models in China, meaning developers there can no longer tap into OpenAI’s tech to build their own tools. While the company didn’t offer a specific reason for the move, an OpenAI spokesperson told Bloomberg last month that it would start cracking down on API users in countries where ChatGPT was not supported. China has long blocked access to the app, but developers were able to use the API as a backdoor into the toolbox. Not anymore.
Washington has focused heavily on denying Beijing any advantage in the AI space, especially through strict export controls on chips. There’s no government action forcing OpenAI’s hand on either side of the Pacific, but the decision was likely prophylactic.
As much as the Chinese companies that relied on API access may be smarting now, the cutoff opens opportunities for domestic firms to try to win over the newly homeless users. We’re watching for companies like SenseTime, Zhipu AI, or Baidu’s Ernie AI to make their pitch as substitutes.
Oh BTW, OpenAI got hacked and didn’t tell us
A hacker breached an OpenAI employee forum in 2023 and gained access to internal secrets, according to a New York Times report published Thursday. The company, which makes ChatGPT, told employees but never went public with the disclosure. Employees voiced concerns that OpenAI wasn’t taking enough precautions to safeguard sensitive data — and if this hacker, a private individual, could breach their systems, then so could foreign adversaries like China.
Artificial intelligence companies have treasure troves of data — some more sensitive than others. They collect training data (the inputs on which models learn) and user data (how individuals interact with applications), but also have trade secrets that they want to keep away from hackers, rival companies, and foreign governments seeking their own competitive advantage.
The US is trying hard to limit access to this valuable data, as well as the chip technology that powers training, to friendly countries, and has enacted export controls against China. If lax security at private companies means Beijing can just pilfer the data it needs, Washington will need to modify its approach.