Hard Numbers: Google’s spending spree, Going corporate, Let’s see a movie, Court-ordered AI ban, Energy demands
100 billion: AI is a priority for many of Silicon Valley’s top companies — and it’s a costly one. Google DeepMind chief Demis Hassabis said the tech giant plans to spend more than $100 billion developing artificial intelligence. That’s the same amount rival Microsoft is expected to spend building an AI-powered supercomputer, nicknamed Stargate.
72.5: The free market is dominating the AI game: 72.5% of the foundation models released between 2019 and 2023 originated from private industry, according to a new Stanford report. Companies released 108 models, compared with 28 from academia, nine from industry-academia collaborations, and four from government. None at all came from a collaboration between government and industry.
5: The A24 film Civil War has generated considerable controversy for its content, but its promotion is under scrutiny as well. Five posters for the film were created using artificial intelligence and depict scenes that never occur in the narrative. That has kicked off a debate about the ethics of using AI in film marketing, as well as questions about whether the posters amount to false advertising for the movie itself.
1,000: A sex offender in the UK who was found to have created 1,000 indecent images of children was banned by a British court from using any “AI creating tools” for five years. It’s not clear whether he actually used AI to create the illegal images in question or whether the order is pre-emptive, but it could serve as a model for punishment in future UK cases. Meanwhile, on April 23, a group of AI companies including Google, Meta, and OpenAI pledged to better prevent their tools from creating sexualized images of children and other exploitative material.
4.5: Salesforce is calling on AI companies to disclose the energy efficiency and carbon footprint of their models, and it is asking legislators to pass new laws aimed at demanding transparency and reducing AI’s total energy consumption. Salesforce’s best estimates put global data centers’ demands at 1.5% of total power generation but warn that that figure could increase to 4.5% in the coming years absent intervention.
Meta’s AI full-court press
If you use any Meta product — Facebook, Instagram, WhatsApp, or Messenger — buckle up for an onslaught of AI. The social media giant is rolling out AI-powered assistants across its apps in unavoidable ways.
Meta’s AI, quite simply, will be everywhere: in your searches, in conversations with friends, and chiming in on conversations in Facebook groups. It’s powered by the company’s LLaMA 3 model and is meant to help you answer questions or complete tasks — whatever you want, really. GZERO searched for Thai food on Instagram, and the query instantly launched a conversation with the Meta AI chatbot. (It gave five good options nearby.)
Meta has taken an open-source approach to developing artificial intelligence, releasing its powerful model for the world to use. That’s different from rivals like OpenAI, which charge consumers and companies to use their closed-source tech.
Now, it’s putting its models to use in a bid to ensure you spend as much time on its platforms as possible. Meta’s bread and butter, as an advertising giant, is attention. If you don’t need to leave Instagram to Google something, or write something with ChatGPT, that’ll quickly mean more money for Meta.
If users aren’t so horribly annoyed or creeped out that they disengage completely, that is. 404 Media reported that Meta’s AI told a parents’ group on Facebook that it had a disabled yet gifted child; the company removed the comments after receiving complaints. And for people who want to opt out entirely, it doesn’t help that there’s currently no real way to turn the AI off.
AI labels are coming to Instagram and Facebook. Will they work?
Sir Nick Clegg, president of global affairs at Meta, the parent company of Facebook, Instagram, and Threads, announced Tuesday that its platforms would begin labeling AI-generated images.
Meta is working with AI image generators like Midjourney and Shutterstock to add metadata to images created by artificial intelligence; that metadata will then automatically trigger a label when an image is posted. Clegg framed it as a crucial safety measure and said the company would build out the technology over the next year.
There are some drawbacks. First, the technology won’t work on video or audio yet, but Clegg says Meta will take down any unlabeled AI-generated clip that “creates a particularly high risk of materially deceiving the public on a matter of importance.”
Second, according to experts, even still images may be able to get around Meta’s detector: something as simple as running an image through photo-editing software can replace its metadata.
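To illustrate the fragility experts describe, here is a minimal sketch in Python using the Pillow imaging library; the file names are hypothetical. Simply re-saving an image’s pixels to a new file silently discards the embedded metadata that would have triggered the label.

```python
# Minimal sketch: why metadata-based AI labels are easy to strip.
# Assumes the Pillow library (pip install pillow); file names are hypothetical.
from PIL import Image

# Open an AI-generated image that carries provenance metadata
# (e.g., EXIF tags or PNG text chunks added by the generator).
img = Image.open("ai_generated.png")
print(img.info)             # PNG text chunks, if any
print(dict(img.getexif()))  # EXIF tags, if any

# Re-saving the pixel data to a new file drops that metadata by default,
# so the signal the label detector relies on is gone.
img.save("laundered.png")
print(Image.open("laundered.png").info)  # now empty or near-empty
```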
And as for AI-generated text, Clegg said it would be pointless to try to identify and label it all. “That ship has sailed,” he told Reuters.
Hard Numbers: Not-so-Swift, Job cuts, Microsoft’s milestone, Meta goes to Indiana, Blocking bots
45 million: AI-generated pornographic images of Taylor Swift circulated around social media sites last week, spurring Swift’s team to contemplate legal action. On X, formerly Twitter, one such post had 45 million views before it was finally removed for violating the site’s rules.
8,000: Tech companies are slashing jobs to invest in AI. The German software firm SAP announced it plans to cut or restructure 8,000 jobs — training some of the employees to work alongside AI.
3 trillion: SAP isn’t alone: Microsoft cut 1,900 jobs from its video game business just as AI has pushed its market capitalization past the $3 trillion mark. Yes, Microsoft, which has spent $13 billion investing in OpenAI in addition to its internal work on AI, is the most valuable company in the world.
800 million: Facebook parent company Meta announced it is building an $800 million data center in Jeffersonville, Indiana, to support its AI efforts. We detailed Meta’s controversial ambitions to build open-source AGI, or artificial general intelligence, in last week’s newsletter.
90: News companies are pushing back against AI firms training models on their articles without proper payment. More than 90% of top news organizations, according to one estimate, have protections in place to stop data-collection bots.
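Those protections are typically robots.txt rules naming the crawlers’ published user-agent tokens, such as GPTBot (OpenAI), Google-Extended (Google’s AI-training crawler), and CCBot (Common Crawl). As a rough sketch of how a well-behaved bot checks those rules, Python’s standard library can parse them directly; the site URL below is illustrative. Note that robots.txt is purely advisory: it blocks only crawlers that choose to honor it.

```python
# Rough sketch: checking a publisher's robots.txt for AI-crawler rules.
# Standard library only; the news-site URL is illustrative.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example-news-site.com/robots.txt")
rp.read()  # fetch and parse the live robots.txt file

# User-agent tokens that publishers most commonly disallow, e.g.:
#   User-agent: GPTBot
#   Disallow: /
for bot in ("GPTBot", "Google-Extended", "CCBot"):
    ok = rp.can_fetch(bot, "https://example-news-site.com/any-article")
    print(f"{bot}: {'allowed' if ok else 'blocked'}")
```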
Ian Bremmer: On AI regulation, governments must step up to protect our social fabric
Seven leading AI companies, including Google, Meta, and Microsoft, committed to managing the risks posed by the technology after holding discussions with the US government last May — a landmark move that Ian Bremmer sees as a win.
Speaking in a GZERO Global Stage discussion from the 2024 World Economic Forum in Davos, Switzerland, Eurasia Group and GZERO Media President Ian Bremmer calls tech firms' ongoing conversations with regulators on AI guardrails a "win" but points out that a big challenge with regulation will be that there is no one-size-fits-all strategy, as AI impacts different sectors differently. For example, ensuring AI can’t be used to make a weapon is important, “but I want to test these things on societies and on children before we roll them out,” he says.
“We would've benefited from that with social media,” he added.
The conversation was part of the Global Stage series, produced by GZERO in partnership with Microsoft. These discussions convene heads of state, business leaders, and technology experts from around the world for critical debate about the geopolitical and technological trends shaping our world.
Watch the full conversation here: How is the world tackling AI, Davos' hottest topic?
Why Meta opened up
Last week, Meta CEO Mark Zuckerberg announced his intention to build artificial general intelligence, or AGI — a standard whereby an AI has human-level intelligence in all fields — and said Meta will have 350,000 high-powered NVIDIA graphics chips by the end of the year.
Zuckerberg isn’t alone in his intentions – Meta joins a long list of tech firms trying to build a super-powered AI. But he is alone in saying he wants to make Meta’s AGI open-source. “Our long-term vision is to build general intelligence, open source it responsibly, and make it widely available so everyone can benefit,” Zuckerberg said. Um, everyone?
Critics have serious concerns about the advent of even a still-hypothetical AGI, and publishing such technology on the open web is a whole other story. “In the wrong hands, technology like this could do a great deal of harm. It is so irresponsible for a company to suggest it,” University of Southampton professor Wendy Hall, who advises the UN on AI issues, told The Guardian. She added that it is “really very scary” for Zuckerberg to even consider it.
Unpacking Meta’s shift in AI focus
Meta has been developing artificial intelligence for more than a decade. The company first hired the esteemed academic Yann LeCun to helm a research lab originally called FAIR, or Facebook Artificial Intelligence Research, and now called Meta AI. LeCun, a Turing Award-winning computer scientist, splits his time between Meta and his professorial post at New York University.
But even with LeCun behind the wheel, most of Meta’s AI work was meant to supercharge its existing products — namely, its social media platforms, Facebook and Instagram. That included the ranking and recommendation algorithms for the apps’ news feeds, image recognition, and its all-important advertising platform. Meta makes most of its money on ads, after all.
While Meta is a closed ecosystem for users posting content or advertisers buying ad space, it’s considerably more open on the technical side. “They’re a walled garden for advertisers, but they’ve always pitched themselves as an open platform when it comes to tech,” said Yoram Wurmser, a principal analyst at Insider Intelligence. “They explicitly like to differentiate themselves in that regard from other tech companies, particularly Apple, which is very guarded about their software platforms.” Differentiation like that can help Meta attract talent from elsewhere in Silicon Valley, but especially from academia, where open-source publishing is the standard — as opposed to proprietary research that might never see the light of day.
Opening the door
Meta’s decision to go open-source with its generative AI models early last year — publishing the code of its LLaMA language model for all to use — was born out of FOMO (fear of missing out) and frustration. In early 2023, OpenAI was getting all of the buzz for its groundbreaking chatbot ChatGPT, and Meta — a Silicon Valley stalwart that’s been in the AI game for more than a decade — reportedly felt left behind.
So LeCun proposed going open-source with the company’s large language model (once called Genesis and renamed the infinitely catchier LLaMA). Meta’s legal team cautioned that it could put the company further in the crosshairs of regulators, who might be concerned about such a powerful codebase living on the open internet, where bad actors — criminals and foreign adversaries — could leverage it. Feeling the heat of the moment — the race for talent, hype, and investor fervor — Zuckerberg sided with LeCun, and Meta released its original LLaMA model in February 2023. Meta followed with LLaMA 2, released in partnership with OpenAI backer Microsoft that July, and has publicly confirmed it’s working on the next iteration, LLaMA 3.
Pros and cons of being an open book
Meta is one of the few AI-focused firms currently making its models open-source. There’s also the US-based startup Hugging Face, which oversaw the development of a model called BLOOM, and the French firm Mistral AI, which has multiple open-source models. But Meta is the only established Silicon Valley giant pursuing this high-risk route head-on.
The potential reward is clear: Open-source development might help Meta attract top engineers, and its accessibility could make it the default system for tinkerers unwilling or unable to shell out for enterprise versions of OpenAI’s GPT-4. “It also gets a lot of people to do free labor for Meta,” said David Evan Harris, a public scholar at UC Berkeley and a former research manager for responsible AI at Meta. “It gets a lot of people to play with that model, find ways to optimize it, find ways of making it more efficient, find ways of making it better.” Open-source software encourages innovation and can enable smaller companies or independent developers to build out new applications that might’ve been cost-prohibitive otherwise.
But the risk is clear too: When you publish software on the internet, anyone can use it. That means criminals could use open models to perpetrate scams and fraud and to generate misinformation or nonconsensual sexual material. And, of pressing interest to the US, foreign adversaries will have unfettered access too. Harris says an open-source language model is a “dream tool” for people trying to sow discord around elections, deceive voters, and instill distrust in reliable democratic systems.
Regulators have already expressed concern: US Sens. Josh Hawley and Richard Blumenthal sent a letter to Meta last summer demanding answers about its language model. “By purporting to release LLaMA for the purpose of researching the abuse of AI, Meta effectively appears to have put a powerful tool in the hands of bad actors to actually engage in such abuse without much discernable forethought, preparation, or safeguards,” they wrote.
The Biden administration directed the Commerce Department in its October AI executive order to investigate the risk of “widely available” models. “When the weights for a dual-use foundation model are widely available — such as when they are publicly posted on the Internet — there can be substantial benefits to innovation, but also substantial security risks, such as the removal of safeguards within the model,” the order says.
Open-source purists might say that what Meta is doing is not truly open-source because it comes with usage restrictions: For example, Meta doesn’t allow the model to be used by companies with more than 700 million monthly users without a license, or by anyone who doesn’t disclose “known dangers” to users. But these restrictions are merely warnings without a real method of enforcement, Harris says: “The threat of lawsuit is the enforcement.”
That might deter Meta’s biggest corporate rivals, such as Google or TikTok, from pilfering the company’s code to boost their own work, but it’s unlikely to deter criminals or malicious foreign actors.
Meta is reorienting its ambitions around artificial intelligence. Yes, Meta has bet big on the metaverse, an all-encompassing digital world powered by virtual and augmented reality technology, going so far as to change its official name from Facebook to Meta to reflect those ambitions. But the metaverse hype has been largely replaced by AI hype, and Meta doesn’t want to be left behind — certainly not for something it’s been working on for a long time.
Hard Numbers: xAI's Musk money, Investing in Replicate, Undressing AI, AFL-CIO-Microsoft?, NVIDIA’s big gamble
$40 million: AI startup Replicate raised $40 million last week from investors such as Andreessen Horowitz. The company maintains an extensive library of 25,000 open-source models on its platform, all of which are available for developers to tinker with, including Meta’s large language model LLaMA and Stability AI’s Stable Diffusion 2.0. These open models serve as a counter to proprietary — or closed-source — models like OpenAI’s GPT-4.
24 million: AI has been a major tool for computer-generated nonconsensual pornography — a problem that disproportionately affects women. In September alone, 24 million people visited websites that gave them the ability to “undress” — or “nudify” — people in photographs using machine-learning technology.
12.5 million: Microsoft just announced a partnership with the AFL-CIO, the largest federation of US labor unions, representing 12.5 million workers. The goal of the partnership is to start an “open dialogue” about how AI might impact the workforce. Microsoft also committed to providing AI training for AFL-CIO members and agreed to include AI-related language in a union contract covering hundreds of workers at ZeniMax, a video game studio it owns. The language dictates that Microsoft is meant to use AI only to “treat all people fairly.”
35: NVIDIA invested in 35 firms this year as the race for AI dominance heated up, making it the most active large-scale investor in the space. That coincided with a year of staggering growth for the US chipmaker, which saw its stock rise 225% and its market capitalization exceed $1 trillion.
Canada averts a Google news block, US bills in the works
Canada’s Online News Act, which is modeled on Australian legislation, led Google to threaten to de-index news from its search engine. In protest of the law, Meta, the parent company of Facebook and Instagram, blocked links to Canadian news in the country on both platforms. It’s currently holding out on a deal as Heritage Minister Pascale St-Onge tries to get the company back to the bargaining table.
The Online News Act kerfuffle is a symptom of a bigger issue: the power of governments to regulate large tech firms — a fight playing out in Canada, the US, and around the world. California is considering a law similar to Australia’s and Canada’s; the bill passed the Assembly but is on hold in the state Senate until 2024. In March, a bipartisan group of lawmakers led by Sens. Mike Lee and Amy Klobuchar introduced a similar bill in the US Senate, casting it as an antitrust, pro-competition measure. Meta has made similar threats to pull news in response to the US push to mirror the Australian and Canadian laws.
Tech giants are resisting attempts to extract funds from them to support news media, a tactic that is part of a broader strategy of opposing regulation. But the Australian and Canadian successes may encourage California, the US Congress, and other states to move forward with similar efforts. The coming months will test whether governments are able — and willing — to regulate these powerful companies. All eyes should be on the progress, or lack thereof, of the California and congressional bills, along with Canada’s negotiations with Meta, since these cases will help decide the future of tech regulation itself.