OpenAI logo displayed on a phone screen and ChatGPT website displayed on a laptop screen are seen in this illustration photo taken in Krakow, Poland on December 5, 2022.
OpenAI’s little new model
OpenAI is going mini. On July 18, the company behind ChatGPT announced GPT-4o mini, its latest model. It’s meant to be a cheaper, faster, and less energy-intensive version of the technology. The smaller model is marketed at developers who rely on OpenAI’s language models and want to save money.
The move also comes as AI companies are trying to cut their own costs, reduce their energy dependence, and answer calls from critics and regulators to lower their energy burden. Training and running AI often requires access to electricity-guzzling data centers, which in turn require copious amounts of water to keep them from overheating.
Moving forward, look for AI companies to offer a multitude of options to cost-conscious and energy-conscious users.
To see where data centers have cropped up in North America, check out our latest Graphic Truth here.

A visitor walks past an AI sign at the World Artificial Intelligence Conference at the Shanghai World Expo Exhibition Center in Shanghai, China, on July 6, 2024.
OpenAI blocks access in China
On Tuesday, OpenAI blocked API access to its ChatGPT large language model in China, meaning developers can no longer tap into OpenAI’s tech to build their own tools. While the company didn’t offer a specific reason for the move, an OpenAI spokesperson told Bloomberg last month that it would start cracking down on API users in countries where ChatGPT was not supported. China has long blocked access to the app, but developers were able to use the API as a backdoor to access the toolbox. Not anymore.
Washington has focused heavily on denying Beijing any advantage in the AI space, especially through strict export controls on chips. There’s no government action forcing OpenAI’s hand on either side of the Pacific, but the decision was likely prophylactic.
As much as Chinese companies that relied on API access may be smarting now, the cutoff does open opportunities for domestic firms to try to win over the newly homeless users. We’re watching for companies like SenseTime, Zhipu AI, or Baidu’s Ernie AI to make their pitch as substitutes.
Sam Altman, President of Y Combinator, speaks at the Wall Street Journal Digital Conference in Laguna Beach, California, U.S., October 18, 2017.
Oh BTW, OpenAI got hacked and didn’t tell us
A hacker breached an OpenAI employee forum in 2023 and gained access to internal secrets, according to a New York Times report published Thursday. The company, which makes ChatGPT, told employees but never went public with the disclosure. Employees voiced concerns that OpenAI wasn’t taking enough precautions to safeguard sensitive data — and if this hacker, a private individual, could breach their systems, then so could foreign adversaries like China.
Artificial intelligence companies have treasure troves of data — some more sensitive than others. They collect training data (the inputs on which models learn) and user data (how individuals interact with applications), but also have trade secrets that they want to keep away from hackers, rival companies, and foreign governments seeking their own competitive advantage.
The US is trying hard to limit access to this valuable data, as well as the chip technology that powers training, to friendly countries, and has enacted export controls against China. If lax security at private companies means Beijing can just pilfer the data it needs, Washington will need to modify its approach.
OpenAI logo displayed on a phone screen and binary code displayed on a laptop screen are seen in this illustration photo taken in Krakow, Poland on December 5, 2022.
Publishers are cashing in on AI
OpenAI is striking lucrative deals with major news publishers to license their content for their AI models. On May 29, OpenAI announced that it struck deals with The Atlantic and Vox Media one week after it inked a similar agreement with Wall Street Journal publisher News Corp. While the News Corp. deal was reportedly worth $250 million over five years, figures for the new deals haven’t yet been reported.
There’s currently a split in the news world over how to deal with tech companies that want news content to train their models and deliver reliable, up-to-date information for users. The New York Times has taken a different approach and is suing OpenAI for copyright violations, as have several newspapers owned by Alden Global Capital.
Journalists have also pushed back against coziness with AI firms. Jessica Lessin, founder of the news site The Information, wrote an op-ed in The Atlantic just days before the deal was announced, saying that news companies are making a “huge mistake” by “absolving AI companies of theft.”
The Atlantic’s tech editor, Damon Beres, called it a “devil’s bargain” in writing about his employer. “Generative AI has not exactly felt like a friend to the news industry, given that it is trained on loads of material without permission from those who made it in the first place,” Beres wrote, likening these deals to the ones made with Facebook and other social media companies a decade ago — only for them to change their mind and pivot away from news.
Vox’s Editorial Director Bryan Walsh also wrote critically about his employer: “I’ve seen our industry pin our hopes on search engine optimization; on the pivot to video (and back again); on Facebook and social media traffic,” he wrote. “But sure — maybe this time Lucy won’t pull the football away.”

An image of OpenAI CEO Sam Altman is seen on a mobile device screen in this illustration.
OpenAI announces next model and new safety committee
OpenAI announced that it is training a new generative AI model to eventually replace GPT-4, the industry-standard model that powers ChatGPT and Microsoft Copilot.
But the OpenAI board of directors also said that it’s forming a new Safety and Security Committee to advise it on the risks posed by powerful AI. After the previous board abruptly fired CEO Sam Altman in November 2023 for not being candid with it, OpenAI staffers and lead investor Microsoft pressured the board to rehire him. It worked: Altman rejoined the company, and most of the old board members resigned.
OpenAI has sought to be an industry leader in generative AI while staying in the good graces of regulators looking to rein in its ambitions. OpenAI took the Biden administration’s voluntary pledge to mitigate AI risks in July 2023, and Altman recently joined the Department of Homeland Security’s new Artificial Intelligence Safety and Security Board.
The US has done little to curb the ambitions of its most prominent AI firms, but that good grace is dependent on the appearance of being a reliable and trustworthy actor — one that will propel Silicon Valley ahead of other global tech hubs while building AI that can help humanity, not harm it.
People walk behind the logo of SoftBank Corp in Tokyo.
Hard Numbers: SoftBank’s hardy investment, Grok gets cash infusion, Humane’s rescue plan, Kenya’s tech upgrade, News Corp and OpenAI strike a deal
6 billion: Elon Musk’s AI startup, xAI, has raised $6 billion from venture capital investors such as Andreessen Horowitz and Sequoia Capital, plus Saudi Arabia’s Prince Alwaleed bin Talal and Kingdom Holding Company. The new funding round boosts the value of xAI, which makes the AI chatbot Grok, to $24 billion. Musk is a cofounder of OpenAI but severed ties with the firm in 2018 and has since sued the ChatGPT maker, alleging it abandoned its founding principles.
750 million: Humane, the company that recently released an AI-powered pin to scathing reviews, is reportedly looking for a buyer to swoop in. While customers have to cough up $699 for the signature pin, a corporate buyer would need to pay between $750 million and $1 billion — if the company’s current management fetches any interest, that is.
1 billion: Microsoft and the UAE-based tech giant G42 are pouring $1 billion into a geothermal-powered data center in Kenya. This East African investment is the first big announcement since Microsoft invested $1.5 billion in G42 in April, a deal brokered by the Biden administration. Microsoft and G42 also pledged to work on local language and skills training initiatives with the Kenyan government and companies in the country.
250 million: OpenAI struck a licensing deal with News Corp., the parent company of The Wall Street Journal, reportedly worth $250 million over five years. News Corp’s stock rose on the announcement, and the deal represents a burgeoning revenue stream for news companies. But the deal isn’t without critics: The Information’s founder Jessica Lessin wrote that publishers like News Corp need to know their worth with AI companies hungry for content, and not rush into any deal for “relative pennies.”
A smartphone is displaying GPT-4o with the OpenAI logo visible in the background in this photo illustration, taken in Brussels, Belgium, on May 13, 2024.
Google and OpenAI’s competition heats up
Both Google and OpenAI held big AI-focused events last week to remind the world why they should each be leaders in artificial intelligence.
Google’s announcement was wide-ranging. At its I/O developer conference, the company basically said that it’ll infuse AI into all of its products — yes, even its namesake search engine. If you’ve Googled anything lately, you might have noticed that Gemini, Google’s large language model, has started popping up and suggesting the answers to your questions. Google smells the threat of competition not only from ChatGPT and other chatbots that can serve as your personal assistant but also from AI-powered search engines like Perplexity, which we tested in February. It also announced Veo, a generative video model like OpenAI’s Sora, and Project Astra, a voice-assisted agent.
Meanwhile, OpenAI had a much more focused announcement. The ChatGPT maker said it’s rolling out a new version of its large language model, GPT-4o, and powering its ChatGPT app with it. The new model will act more like a voice-powered assistant than a chatbot — perhaps obviating the need for Alexa or Siri in the process if it’s successful. That said, how often are you using Alexa and Siri these days?
The future of AI, the company thinks, is multimodal: models that can process text, images, video, and sound quickly and seamlessly, and return answers to users just as fast.
Most importantly, OpenAI said that this new ChatGPT app (on smartphones and desktops) will be free of charge — meaning millions of people who aren’t used to paying for ChatGPT’s premium service will now have access to its top model — though rate limits will apply. Maybe OpenAI realizes it needs to hook users on its products before the AI hype wave recedes — or Google leapfrogs into the consumer niche.
Chuck Schumer’s light-touch plan for AI
Over the past year, Senate Majority Leader Chuck Schumer (D-NY) has led the so-called AI Gang, a group of senators eager to study the effects of artificial intelligence on society and curb the threats it poses through regulation. But calling this group a gang implies a certain level of toughness that was nowhere to be found in the roadmap it unveiled on May 15.
Announcing the 31-page roadmap, a bipartisan set of policy priorities for Congress, Schumer bragged of “months of discussion,” “hundreds of meetings,” and “nine first-of-their-kind AI Insight Forums,” including sessions with OpenAI’s Sam Altman and Meta’s Mark Zuckerberg.
What he delivered, however, was more of a spending plan than a vision for real regulation – the policy proposals were limited, and the approach was hands-off. The roadmap called for $32 billion in AI-related research and innovation spending over the next three years. It offered suggestions, such as a federal data privacy law, legislation to curb deepfakes in elections, and a ban on “social scoring” like the social credit system China has tested.
Civil society groups aren’t pleased
The long list of proposals is “no substitute for enforceable law – and these companies certainly know the difference, especially when the window to see anything into legislation is swiftly closing,” the AI Now Institute’s Amba Kak and Sarah Myers West wrote in a statement. Maya Wiley, CEO of the Leadership Conference on Civil and Human Rights, wrote that “the framework’s focus on promoting innovation and industry overshadows the real-world harms that could result from AI systems.”
Ronan Murphy of the Center for European Policy Analysis wrote that the gap between the US and EU approaches to AI could not be more stark. “US lawmakers believe it is premature to restrain fast-moving AI innovation,” he wrote. “In contrast, the EU’s AI Act bans facial recognition applications and tools that exhibit racial or other discrimination.”
Former White House technology advisor Suresh Venkatasubramanian tweeted that the proposal felt so unoriginal and recycled that it might have been written by ChatGPT.
An AI law is unlikely this year
Adam Conner, vice president of tech policy at the Center for American Progress, said that while the roadmap has some areas of substance, such as urging a federal data privacy law, “most sections are light on details.” He called the $32 billion spending proposal a “detailed wish list” for upcoming funding bills.
It was a thin result for something that took so long to cook up, he said, and “leaves little time on the calendar this year for substantive AI legislation, except for the funding bills Congress must pass this year and possibly the recently introduced bipartisan bicameral American Privacy Rights Act data privacy bill.” This means any other AI legislation will likely have to wait until next year. “Whether that was the plan all along is an open question,” Conner added.
Danny Hague, assistant director of Georgetown University’s Center for Security and Emerging Technology, agreed that it’s unlikely anything comprehensive gets passed this year. But he doesn’t necessarily see the report as a sign that the US will be hands-off with legislation. He said the Senate Working Group likely realizes that “time is limited,” and there are already “structures in place — regulatory agencies and the congressional committees that oversee them — to act on AI quickly.”
Jon Lieber, managing director for the United States for Eurasia Group, said he didn’t understand why an AI Gang was necessary at all. “I’m confused why Schumer felt the need to do something here,” he said. “This process should have been handled by a Senate committee, not the leader’s office.”
Such a soft line from Congress means that until further notice, President Joe Biden — who has issued an executive order on AI, imposed export controls, used CHIPS Act funding to create jobs and secure tech infrastructure, and directed his agencies to get up to speed on AI — may just be the AI regulator in chief.