AI's evolving role in society
In a world where humanity put a man on the moon before adding wheels to luggage, the rapid advancements in AI seem almost paradoxical. Microsoft’s chief data scientist Juan Lavista, in a recent Global Stage conversation with Tony Maciulis, highlighted this contrast to emphasize how swiftly AI has evolved, particularly in the last few years.
Lavista discussed the impact of generative AI, which allows users to create hyper-realistic images, videos, and audio. This capability is both impressive and concerning, as demonstrated in Microsoft's "Real or Not?" quiz, where even experts struggle to distinguish AI-generated images from real ones.
While AI offers incredible tools for good, Lavista warns of the potential risks, particularly with deepfakes and other deceptive technologies. He stresses the importance of public education and the need for AI models to be trained on diverse data to avoid biases.
As AI continues to evolve, its impact on daily life will only grow. Lavista predicts more accurate and less error-prone models in the future, underscoring the need to balance innovation with responsible use.
Please cough for the AI
What if an artificial intelligence stored on your phone could listen and hear how sick you are? Google is training a bioacoustic AI model called Health Acoustic Representations with 300 million snippets of audio collected from around the world — of people sneezing, coughing, and breathing. The goal? To spot tuberculosis early and treat it.
A whopping 1.3 million people died of tuberculosis in 2022 alone, according to the World Health Organization, and 10.6 million fell ill with the disease. “TB is a treatable disease, but every year millions of cases go undiagnosed — often because people don’t have convenient access to healthcare services,” Google’s Shravya Shetty wrote in a blog post. “Improving diagnosis is critical to eradicating TB, and AI can play an important role in improving detection and helping make care more accessible and affordable for people around the world.”
Google is focused first on preventing tuberculosis in India and is partnering with an Indian company called Salcit Technologies, whose own AI app Swaasa is being used by healthcare providers on the subcontinent. Swaasa will integrate Google’s model to improve its own detection of the disease.
Nvidia’s high-flying earnings aren’t good enough
Nvidia’s earnings reports have become a cultural phenomenon, with super-fan investors even throwing watch parties to see just how high the chipmaker’s numbers will climb each quarter.
Last week, Nvidia, whose chips have fueled the current AI boom, reported more than $30 billion in sales in the second quarter of its fiscal year, up 122% from the same quarter last year. But even though it beat Wall Street analyst predictions, the stock sagged 7% after the report.
Nvidia’s stock is up 147% since the start of 2024, leading the hottest part of the stock market. With AI stocks soaring, some wonder whether the sector is little more than a speculative bubble, but Nvidia is now the third-most-valuable company globally, and the biggest question each quarter is: How good is good enough for investors?
OpenAI’s getting richer
OpenAI is in talks for a new funding round that could value the company at more than $100 billion. That would cement it as the fourth-most-valuable privately held company in the world, behind only ByteDance ($220 billion), Ant Group ($150 billion), and SpaceX ($125 billion).
Thrive Capital is leading the venture round, but Microsoft is expected to add to its existing $13 billion stake in the company. Apple and Nvidia are also discussing investing in the ChatGPT maker. Nvidia supplies the chips that OpenAI uses to train and run its models, while Apple is integrating ChatGPT into its forthcoming Apple Intelligence system, which will feature on new iPhones.
OpenAI was last valued at around $80 billion in 2023 following a funding round that allowed employees to sell their existing shares. It’s unclear whether the company is currently considering an initial public offering, but if it needs tons of capital for the very costly process of developing increasingly powerful AI models, that might be a necessary step in the not-so-distant future.
The Feds vs. California: Inside the twin efforts to regulate AI in the US
Silicon Valley is home to the world’s most influential artificial intelligence companies. But there’s currently a split approach between the Golden State and Washington, DC, over how to regulate this emerging technology.
The federal approach is relatively hands-off. After Joe Biden’s administration persuaded leading AI companies to sign a voluntary pledge in July 2023 to mitigate risks posed by AI, it issued a sweeping executive order on artificial intelligence in October 2023. That order commanded federal agencies and departments to begin writing rules and explore how they can incorporate AI to improve their current work. The administration also signed onto the UK’s Bletchley Declaration, a multi-country commitment to develop and deploy AI in a way that’s “human-centric, trustworthy, and responsible.” In April, the White House clarified that under the executive order, agencies have until December to “assess, test, and monitor” the impact of AI on their work, mitigate algorithmic discrimination, and provide transparency into how they’re using AI.
But perhaps its biggest win came on Aug. 29 when OpenAI and Anthropic voluntarily agreed to share their new models with the government so officials can safety-test them before they’re released to the public. The models will be shared with the US AI Safety Institute, housed under the Commerce Department’s National Institute of Standards and Technology, or NIST.
“We are happy to have reached an agreement with the US AI Safety Institute for pre-release testing of our future models,” OpenAI CEO Sam Altman wrote on X. “For many reasons, we think it’s important that this happens at the national level. US needs to continue to lead!”
Altman’s clarification that regulation should happen at the national level implied an additional rebuke of how California seeks to regulate the company and its tech.
Brian Albrecht, the chief economist at the International Center for Law & Economics, was not surprised by the companies’ willingness to share their models with the government. “This is a very standard response to expected regulation,” Albrecht said. “And it’s always tough to know how voluntary any of this is.”
But Dean Ball, a research fellow at the libertarian think tank Mercatus Center, said he’s concerned about the opacity of these arrangements. “We do not know what level of access the federal government is being given, whether the federal government has the ability to request that model releases be delayed, and many other specific details,” Ball said. “This is not the way lawmaking is supposed to work in America; having private arrangements worked out between providers of transformative technology and the federal government is a troubling step in AI policy.”
Still, these appear to be relatively light-touch measures that counter California’s proposed approach to regulating artificial intelligence.
On Aug. 28, the state’s legislature passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, or SB 1047, which aims to establish “common sense safety standards” for powerful AI models. Written by California State Sen. Scott Wiener and supported by AI pioneers like Geoffrey Hinton and Yoshua Bengio, the bill has divided Silicon Valley companies. Albrecht said that what California has proposed is much closer to the European model of AI regulation — the EU’s AI Act, passed in March — while Washington hasn’t yet adopted a unified view on how the technology should be regulated.
Critics of the bill include OpenAI, California’s Chamber of Commerce, and even former Speaker of the House Nancy Pelosi. “While we want California to lead in AI in a way that protects consumers, data, intellectual property, and more, SB 1047 is more harmful than helpful in that pursuit,” Pelosi said in a recent statement. In a recent edition of GZERO AI, experts from the Electronic Frontier Foundation and the Atlantic Council expressed concerns about the bill’s so-called “kill switch” and how it could stifle open-source AI development.
Some industry players have been more open to the bill. Anthropic said the bill’s benefits likely outweigh its risks, and Tesla CEO Elon Musk, who has an AI startup of his own called xAI, said California should “probably” pass the bill.
It’s still unclear whether Gov. Gavin Newsom will sign the bill — he has until Sept. 30 to do so. He has not signaled his view on the legislation, but in May, he warned about the risk of overregulating AI.
“I don’t want to cede this space to other states or other countries,” Newsom said at an event in San Francisco. “If we overregulate, if we overindulge, if we chase the shiny object, we could put ourselves in a perilous position.”
China spends big on AI
Much of China’s AI industry relies on lower-grade chips from US chipmaker Nvidia, which is barred by US export controls from selling its most advanced models there. (For more on the US-China chip race, check out GZERO AI’s interview with Trump export control chief Nazak Nikakhtar from last week’s edition.)
The AI who lost an election
A librarian ran for mayor of Cheyenne, Wyoming, with a simple promise: Victor Miller would simply be the human vessel for an artificial intelligence that would run the city. He’d be a “humble meat avatar” for the Virtual Integrated Citizen, or VIC, that would make decisions and run the government if elected.
The stunt made national headlines, but voters weren’t enthused: They soundly rejected VIC and its human creator. On Aug. 20, Miller and VIC received only 327 of the 11,036 votes cast, placing fourth out of six candidates in the primary, with the top two vote-getters (including the incumbent mayor) advancing to the November election.
Miller’s challenge faced setbacks throughout the process. The Wyoming Secretary of State previously expressed “significant concerns” about VIC appearing — without Miller — on the state ballot, saying there needed to be real human names on the ballot. Then, OpenAI shut down access to VIC, saying it violated rules against political campaigning. (Miller later relaunched the service through OpenAI’s GPT-4 without punishment.)
After conceding, Miller announced he’s forming a new group called the Rational Governance Alliance, which seeks to expand AI decision-making to promote “efficient, transparent, and unbiased” governance. So, maybe we can look for RGA candidates, or at least their human stewards, on future ballots.
How open is open-source AI?
Open-source AI developers say that their models can spark greater future innovation, but there’s intra-industry squabbling about what truly constitutes an open-source model.
Now, there’s a new definition, courtesy of the nonprofit Open Source Initiative, which released the new guidelines on Aug. 22.
The new definition requires that a model’s source code, parameters, and weights (the technical details of the model) be freely available to the public, along with enough general information about the training data for others to recreate a similar model. (It doesn’t mandate a full release of a model’s precise dataset.)
There’s been an ongoing squabble in tech circles as to what’s a proprietary model and what’s truly open-source. While OpenAI’s GPT models and Anthropic’s Claude models are clearly proprietary, some companies such as Meta with its Llama series have branded themselves as open-source. Critics have suggested that Llama isn’t truly open-source because of Meta’s license restrictions dictating how third-party developers can use its models and because it doesn’t disclose its training data. On that latter point, in particular, it appears Llama would fall short of the OSI definition of open-source.
Perhaps a more precise definition of open-source can hold industry players accountable to their marketing promises, encourage designs friendlier to third-party developers hungry to innovate, and boost transparency in a set of technologies where black boxes are the norm.