What is open-source AI anyway?
A key artificial intelligence industry body has released a long-awaited definition that could affect how different AI models are viewed — if not regulated. The Open Source Initiative, a public benefit corporation, sets standards for what constitutes open-source systems in the technology industry. Over the past year, the group has investigated a big question: What constitutes open-source AI?
Meta has been one of the leading voices on open-source AI development with its LLaMA suite of large language models. But some critics have argued it isn’t truly open-source because it has licensing rules about how third-party developers can use its models and isn’t fully transparent about its training data.
Now, according to the new definition, an AI system must meet these four requirements to be considered open-source:
- Anyone can use it for any purpose without asking for permission
- Outsiders can study how the system works and inspect its components
- Developers can modify the system
- Users can share the system with others, with or without modifications, for any purpose
Meta took issue with the new definition, maintaining that its models are, in fact, open-source. “There is no single open-source AI definition, and defining it is a challenge because previous open-source definitions do not encompass the complexities of today’s rapidly advancing AI models,” a company spokesperson told TechCrunch.
Still, the definition could help regulators and international organizations differentiate between open- and closed-source (or proprietary) models. That’s important. Recently, California lawmakers got pushback for advancing a bill requiring AI developers to have a “kill switch” for shutting down their models — something critics called a “de facto ban on open-source development.” (The bill was ultimately vetoed by Gov. Gavin Newsom.)
A Chinese autonomous vehicle firm is going public in the US
On Oct. 17, a Chinese autonomous vehicle company called Pony AI filed to go public in the United States through an initial public offering. The company is the latest Chinese firm to seek entry into the US public markets after Beijing eased its restrictions on its domestic private sector seeking foreign investment and listing on US exchanges. The Chinese electric vehicle startup Zeekr began trading on the New York Stock Exchange in May.
Pony AI, which makes robotaxis, has ties to both China and Silicon Valley, but it’s also backed by the Japanese automaker Toyota and Saudi Arabia’s NEOM Investment Fund. China’s securities regulator approved Pony AI to list on either the Nasdaq or the NYSE in April.
The US and China are currently feuding over artificial intelligence, each vying to become the global leader in the technology and gain a strategic edge — but that battle, which largely focuses on chips and tech infrastructure, is unlikely to affect this deal. The US Securities and Exchange Commission has previously pushed for tougher rules on Chinese companies going public on US stock exchanges, but those rules have largely affected firms going public through shell companies — a popular workaround to Chinese restrictions — rather than through traditional IPOs.

Will AI help or hurt Africa?
AI technology might be able to help poorer nations “leapfrog” entire development phases, the Financial Times wrote this week — much as some nations skipped mass landline adoption and went straight to mobile phones over the past two decades.
AI startups are popping up across Africa, trying to tackle problems in health care, education, and language and dialect differences. And foreign firms are starting to invest too: Microsoft and the UAE-based fund G42 announced a $1 billion investment in Kenya to build data centers, develop local-language AI models, and offer skills training to people in the country. Amazon has said it’s investing $1.7 billion in Amazon Web Services cloud infrastructure across the continent. For its part, Google has begun developing African-language AI models and given $6,000 microgrants to Nigerian AI startups.
But there’s also concern that AI could deepen existing digital divides — especially if popular large language models aren’t developed with Africa in mind, don’t support local languages, or if the continent lacks the infrastructure to run high-powered models efficiently.
China spends big on AI
Much of China’s AI industry is reliant on low-grade chips from US chipmaker Nvidia, which is barred from selling its top models because of US export controls. (For more on the US-China chip race, check out GZERO AI’s interview with Trump export control chief Nazak Nikakhtar from last week’s edition.)
What do Democrats want for AI?
At last week’s Democratic National Convention, the Democratic Party and its newly minted presidential candidate, Vice President Kamala Harris, made little reference to technology policy or artificial intelligence. But the party’s platform and a few key mentions at the DNC show how a Harris administration would handle AI.
In the official party platform, there are three mentions of AI: First, it says Democrats will support historic federal investments in research and development, break “new frontiers of science,” and create jobs in artificial intelligence, among other sectors. It also says Democrats will invest in “technology and forces that meet the threats of the future,” including artificial intelligence and unmanned systems.
Lastly, the Dems’ platform calls for regulation to bridge “the gap between the pace of innovation and the development of rules of the road governing the most consequential domains of technology.”
“Democrats will avoid a race to the bottom, where countries hostile to democratic values shape our future,” it notes.
Harris echoed that final point in her DNC keynote address. “I will make sure that we lead the world into the future on space and artificial intelligence,” she said. “That America, not China, wins the competition for the 21st century, and that we strengthen, not abdicate our global leadership.”
The Republican Party platform, by contrast, promises to repeal Biden’s 2023 executive order on AI, calling it “dangerous,” hindering innovation, and imposing “radical left-wing ideas” on the technology. “In its place, Republicans support AI development rooted in free speech and human flourishing,” it says. (The platform doesn’t go into specifics about how the executive order is harmful or what a free speech-oriented AI policy would entail.) In his RNC address, Donald Trump didn’t mention artificial intelligence or tech policy but talked at length about beating back China economically.
GZERO asked Don Beyer, the Virginia Democratic congressman going back to school to study artificial intelligence, what he thought of his party’s platform and Harris’ remarks on AI. Beyer said that Harris has struck the right balance between promoting American competitiveness and outlining guardrails to minimize the technology’s risks. “The vice president has been personally involved in many of the administration’s efforts to ensure American leadership in AI, from establishing the US AI Safety Institute to launching new philanthropic initiatives for public interest AI, and I expect her future administration to continue that leadership,” he said.

Telegram’s billionaire CEO arrested in France
Pavel Durov, the 39-year-old founder and CEO of the social media network Telegram, was arrested at Le Bourget Airport near Paris on Sunday following an investigation by French authorities into the platform’s lack of moderation. Officials claim Telegram has allowed fraud, terrorism, drug trafficking, cyberbullying, and organized crime to flourish on the app. Telegram also came under scrutiny in the UK earlier this month for hosting far-right channels that mobilized violent protests in English cities.
Telegram’s encrypted app has nearly one billion users and is popular in Russia, Ukraine, and former Soviet republics. After Russia invaded Ukraine in 2022, Telegram became “a virtual battlefield” used by both Ukrainian President Volodymyr Zelensky and Russian officials.
On Sunday, the deputy speaker of the State Duma, Vladislav Davankov, claimed that “the arrest of [Durov] could have political motives and be a means of obtaining the personal data of Telegram users.” The platform is accused of spreading disinformation and is also used by the Russian military for recruitment and coordination.
Moscow is demanding consular access to the Russia-born CEO, who is now a dual citizen of France and the UAE. Also weighing in is X CEO Elon Musk, who posted, “POV: It’s 2030 in Europe and you’re being executed for liking a meme.” Whether the backlash helps win Durov his freedom at his upcoming court appearance — and whether Telegram will retain its users’ trust — remains an open question.

Video game’s voices want to be heard
The SAG-AFTRA strike, which began on July 26 after a year and a half of negotiations, halted member performances for 10 major studios — Activision Blizzard, Blindlight, Disney, Electronic Arts, Formosa Interactive, Insomniac Games, Llama Productions, Take 2 Productions, VoiceWorks, and WB Games.
The strike demands are similar to what the union asked of film studios in its strike last year: not only higher wages but also protections against the use of artificial intelligence. A deal struck with the film studios late last year allowed the use of AI to produce “digital replicas” of its members — as long as they were properly compensated. The union didn’t halt AI; it got its members paid — a result that’ll surely be in the back of negotiators’ minds amid the video game strike.

Google Search is making things up
Google has defended its new feature, saying that these strange answers are isolated incidents. “The vast majority of AI overviews provide high-quality information, with links to dig deeper on the web,” the tech giant told the BBC. The Verge reported that Google is manually removing embarrassing search results after users post what they find on social media.
This is Google’s second major faux pas in its quest to bring AI to the masses. In February, after it released its Gemini AI system, its image generator kept over-indexing for diverse images of individuals — even when doing so was wildly inappropriate. It spit out Black and Asian Nazi soldiers and Native Americans dressed in Viking garb.
The fact that Google is willing to introduce AI into its cash cow of a search engine signals it is serious about integrating the technology into everything it does. It’s even decided to introduce advertising into these AI Overviews. But the company is quickly finding out that when AI systems hallucinate, not only can that spread misinformation — but it can also make your product a public laughingstock.