Hard Numbers: Google’s spending spree, Going corporate, Let’s see a movie, Court-ordered AI ban, Energy demands
100 billion: AI is a priority for many of Silicon Valley’s top companies — and it’s a costly one. Google DeepMind chief Demis Hassabis said that the tech giant plans to spend more than $100 billion developing artificial intelligence. That’s the same amount that rival Microsoft is expected to spend building an AI-powered supercomputer, nicknamed Stargate.
72.5: The free market is dominating the AI game: Of the foundation models released between 2019 and 2023, 72.5% originated in private industry, according to a new Stanford report. In total, 108 models were released by companies, compared with 28 from academia, nine from industry-academia collaborations, and four from government. None were released through a collaboration between government and industry.
5: The A24 film Civil War has garnered considerable controversy for its content, but its promotion is under scrutiny as well. Five posters for the film were created using artificial intelligence and depict scenes that never occur in the narrative. That’s kicked off a debate about the ethics of using AI in film marketing as well as questions of whether this is false advertising for the movie itself.
1,000: A sex offender in the UK who was found to have created 1,000 indecent images of children was banned by a British court from using any “AI creating tools” for five years. It’s not clear whether he actually used AI to create the illegal images in question or whether the order is preemptive, but it could serve as a model for punishment in future UK cases. Meanwhile, on April 23, a group of AI companies including Google, Meta, and OpenAI pledged to better prevent their tools from creating sexualized images of children and other exploitative material.
4.5: Salesforce is calling on AI companies to disclose the energy efficiency and carbon footprint of their models, and it is asking legislators to pass new laws demanding transparency and reducing the total energy consumption of AI. Salesforce’s best estimates put global data centers at about 1.5% of worldwide electricity demand, and it warns that the figure could rise to 4.5% in the coming years absent intervention.
Hard Numbers: Microsoft’s big Gulf investment, Amazon’s ambitions, Mammogram-plus, Adobe pays up, Educating Don Beyer
1.5 billion: Microsoft has announced a deal to invest $1.5 billion in G42, an artificial intelligence firm based in the United Arab Emirates that recently cut ties with Chinese suppliers that had raised US security concerns. Relations between Washington and Abu Dhabi have been strained over the UAE’s ties to Chinese tech companies, but this deal – which grants Microsoft a minority stake in the company – could signal a new era in the UAE’s relations with the US.
33: Amazon is talking about artificial intelligence – like, a lot. In his recently published annual letter to shareholders, Amazon CEO Andy Jassy mentioned AI 33 times. The company invested $4 billion in Anthropic, which makes the Claude chatbot, and will host Anthropic on Amazon Web Services. Jassy said the company wants to focus on building AI models rather than applications (think GPT-4 instead of ChatGPT) and sell directly to enterprise clients.
40: Clinics are starting to offer an AI-assisted add-on to typical mammograms. Interested patients typically incur an out-of-pocket charge between $40 and $100 to have an AI model scan their breast screening for additional insights — even, possibly, early breast cancer detection.
3: Adobe is planning to compete with OpenAI’s Sora video model. To do so, it’s offering photographers and videographers $3 per minute to upload videos of people doing everyday activities like walking around or sitting down, or simple shots of hands, feet, or eyes, to train its new generative AI model. It’s an expensive but cautious approach intended to build up a comprehensive database while staying on the right side of copyright law and avoiding potential imbroglios like the one OpenAI faces for using YouTube videos to train its models.
73: Congressman Don Beyer, a Democrat from Virginia, decided he wanted to return to school to learn more about AI. So, that’s what he did. The 73-year-old car dealership mogul-turned-politician recently enrolled in a master’s degree program in machine learning at George Mason University. He’s even learning to code, which he says is helping him better think about all kinds of problems in Washington.
Hard Numbers: Pay for Google?, Indonesian investment, Amazon walks out on AI, Scraping YouTube
175 billion: Google said it made $175 billion in revenue from its search engine and related advertising last year, but is it ready to risk the golden goose? The company is reportedly considering charging for premium features on its search engine, including AI-assisted search (its traditional search engine would remain free). We’ve previously tested Perplexity, one of the companies trying to uproot Google’s search dominance with artificial intelligence, and you can read our review here.
200 million: The chipmaker Nvidia is teaming up with Indonesian telecom company Indosat to build a $200 million data center for artificial intelligence in the city of Surakarta, according to Indonesia’s communications minister. This news comes weeks after AI played a central role in the country’s presidential election, and it represents a major investment from one of the world’s richest tech companies in a key emerging market as Indonesia seeks to modernize its economy.
1,000: Amazon’s Just Walk Out in-store AI system for cashier-less grocery store checkout relied heavily on more than 1,000 contractors in India manually checking that the checkout transactions were accurate. Now, Amazon has announced it’s ditching the technology, which was being used in 60 Amazon-branded grocery stores and two Whole Foods stores.
1 million: One OpenAI team reportedly transcribed more than 1 million hours of YouTube videos to train its GPT-4 large language model. The company built a speech recognition tool called Whisper to handle the massive load, a move that may have violated YouTube's terms of use. YouTube parent company Google is a major rival to OpenAI in developing generative AI. Google hasn’t filed suit yet, but legal action could eventually come.
That robot sounds just like you
First, OpenAI tackled text with ChatGPT, then images with DALL-E. Next, it announced Sora, its text-to-video platform. But perhaps the most pernicious technology is what might come next: text-to-voice. Not just audio — but specific voices.
A group of OpenAI clients is reportedly testing a new tool called Voice Engine, which can mimic a person’s voice based on a 15-second recording, according to the New York Times. And from there it can translate the voice into any language.
The report outlined a series of potential abuses: spreading disinformation, allowing criminals to impersonate people online or over phone calls, or even breaking voice-based authenticators used by banks.
In a blog post on its own site, OpenAI seems all too aware of the potential for misuse. Its usage policies mandate that anyone using Voice Engine obtain consent before impersonating someone else and disclose that the voices are AI-generated, and OpenAI says it’s watermarking all audio so third parties can detect it and trace it back to the original maker.
But the company is also using this opportunity to warn everyone else that this technology is coming, including urging financial institutions to phase out voice-based authentication.
AI voices have already wreaked havoc in American politics. In January, thousands of New Hampshire residents received a robocall from a voice pretending to be President Joe Biden, urging them not to vote in the Democratic primary election. It was generated using simple AI tools and paid for by an ally of Biden's primary challenger Dean Phillips, who has since dropped out of the race.
In response, the Federal Communications Commission clarified that AI-generated robocalls are illegal, and New Hampshire’s legislature passed a law on March 28 that requires disclosures for any political ads using AI.
So, what makes this so much more dangerous than any other AI-generated media? The imitations are convincing. The Voice Engine demonstrations so far shared with the public sound indistinguishable from the human-uttered originals — even in foreign languages. But even the Biden robocall, which its maker admitted was made for only $150 with tech from the company ElevenLabs, was convincing enough to pass for the real thing.
But the real danger lies in the absence of other indicators that the audio is fake. With every other AI-generated media, there are clues for the discerning viewer or reader. AI text can feel clumsily written, hyper-organized, and chronically unsure of itself, often refusing to give real recommendations. AI images often have a cartoonish or sci-fi sheen, depending on their maker, and are notorious for getting human features wrong: extra teeth, extra fingers, and ears without lobes. AI video, still relatively primitive, is infinitely glitchy.
It’s conceivable that each of these applications for generative AI improves to a point where they’re indistinguishable from the real thing, but for now, AI voices are the only iteration that feels like it could become utterly undetectable without proper safeguards. And even if OpenAI, often the first to market, is responsible, that doesn’t mean all actors will be.
The announcement of Voice Engine, which doesn’t yet have a set release date, feels less like a product launch and more like a warning shot.
Does AI’s power problem have a nuclear solution?
Sam Altman, the co-founder and CEO of OpenAI, has broad ambitions to solve all of the problems of AI, from algorithms to high-tech chips. But there’s one more problem on his plate: energy. Altman is backing a series of companies that hope to find a way to power the revolutionary tech, literally.
One of the startups Altman invested in is called Oklo, which is building a nuclear power plant in Idaho that could eventually power energy-guzzling data centers that AI depends on, but there is no clear public timeline for the project. Google and Microsoft have also partnered with nuclear power firms for their energy needs.
Nuclear energy comes with risks, of course, and Oklo has had trouble with regulators, which rejected its applications in the past over a lack of safety and security information. But going nuclear — if companies like Oklo can get it right — is also a cleaner alternative to carbon-intensive energy sources.
Musk takes OpenAI to court
Tesla CEO Elon Musk sued OpenAI and its CEO Sam Altman late last week, saying that they breached the terms of a contract by prioritizing their profits over the public good. In 2015, Musk helped found and fund OpenAI, the artificial intelligence research lab-turned-industry leader. He resigned as co-chair of the company’s nonprofit board of directors in 2018, citing conflicts of interest with his own company, Tesla, which was investing heavily in AI.
Now, Musk alleges that OpenAI violated the terms under which he gave money to OpenAI, but no one seems to have written down those terms.
The Verge points out that the complaint hinges on the violation of a “Founding Agreement,” an alleged oral contract that Musk feels was formed in the course of business discussions. If a court finds that a contract was formed – and courts aren’t usually friendly to oral contracts – Musk is requesting that the court compel OpenAI to revert to its original nonprofit mission, including making research data publicly available, instead of the profit-motivated one that’s turned it into an $80 billion juggernaut.
There’s one other thing that Musk-watchers should keep in mind: Musk currently runs an AI startup of his own, xAI, which has a chatbot called Grok. This means his business directly competes with OpenAI. Is it any wonder he’s resorting to litigation that could take OpenAI down a peg?
OpenAI’s Altman incident under investigation
Two investigations may soon shed light on one of the biggest mysteries in Silicon Valley: Why was Sam Altman fired from OpenAI?
To recap, the OpenAI board fired Altman in November, saying he was not “consistently candid in his communications,” but it failed to provide specifics (the big mystery). OpenAI’s staff and lead investor, Microsoft, immediately protested the ouster and successfully campaigned for Altman’s reinstatement – and for fresh faces on the nonprofit board.
The US Securities and Exchange Commission is now investigating whether OpenAI misled its investors in firing Altman. Meanwhile, the law firm WilmerHale is conducting an internal investigation of the Altman firing and will soon present its findings to the current board of directors, which commissioned the review.
Altman’s alleged deceit may have something to do with his plans to raise trillions of dollars for a chip venture, something that’s come to light in the months since the debacle. We have our ear to the ground for where the investigations are headed and what they could mean for the giant of genAI.
Hard Numbers: It’s electric, OpenAI’s billions, AI-related legislation, Fred Trump ‘returns,’ Multiplication problems
1,300: Training a large language model is estimated to use about 1,300 megawatt-hours of electricity, roughly the annual consumption of 130 US homes (at about 10 megawatt-hours per home per year). But that’s for the last generation of LLMs, like OpenAI’s GPT-3. The potential electricity usage for GPT-4, the current model, and beyond could be much, much greater.
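The household comparison is simple division. As a back-of-envelope sanity check, assuming the commonly cited figure of roughly 10 megawatt-hours of electricity per US home per year (an assumption on our part, not a figure from the estimate itself):

```python
# Back-of-envelope check of the training-energy comparison.
# Assumption: an average US home uses roughly 10 MWh of electricity per year.
training_mwh = 1_300        # estimated energy to train a GPT-3-class LLM
home_mwh_per_year = 10      # assumed average US household consumption

equivalent_homes = training_mwh / home_mwh_per_year
print(f"Equivalent to the annual usage of about {equivalent_homes:.0f} US homes")
# prints "Equivalent to the annual usage of about 130 US homes"
```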
80 billion: OpenAI struck a deal that would value the ChatGPT maker at $80 billion, making it one of the world’s most valuable private companies. It’s not a traditional fundraising round but a tender offer that allows employees to cash out their much sought-after shares in the company.
50: US states are clamoring to pass legislation to curb the worst effects of AI. By one measure, about 50 new AI-related bills are introduced in state legislatures each week. New York leads the charge with about 65 outstanding bills, including a new one recently proposed by Gov. Kathy Hochul to criminalize deceptive uses of AI.
1999: Fred Trump, the father of former President Donald Trump, died in 1999. But now, the Lincoln Project, the anti-Trump political action committee, has used AI to reanimate the elder Trump for a new ad in which he appears to call his son a “disgrace.”
44: The education company Khan Academy made a ChatGPT-based tutoring bot called Khanmigo. The problem? It’s terrible at math, unable to calculate 343 minus 17. The chatbot is being piloted by 65,000 students in 44 school districts. One Yale professor who studies AI put it bluntly: “Asking ChatGPT to do math is sort of like asking a goldfish to ride a bicycle.”