How will Trump 2.0 impact AI?
In this episode of GZERO AI, Taylor Owen, host of the Machines Like Us podcast, reflects on five broad worries about the implications of the US election for artificial intelligence.
I spent the past week in the UK and Europe talking to a ton of people in the tech and democracy community. And of course, everybody just wanted to talk about the implications of the US election. It's safe to say that there are some pretty grave concerns, so I thought I'd spend a few minutes, a few more than I usually do in these videos, outlining the nature and type of these concerns, particularly amongst those worried about the conflation of power between national governments and tech companies. In short, I heard five broad worries.
First, that we're going to see an unprecedented confluence of tech power and political power. In short, the influence of US tech money is going to be turbocharged. This influence, of course, always existed, but the two are now far more fully joined. This means that the interests of a small number of companies will be one and the same as the interests of the US government. Musk's interests, Tesla, Starlink, Neuralink, are sure to be front and center. But companies like Peter Thiel's Palantir and Palmer Luckey's Anduril are also likely to get massive new defense contracts. And the crypto investments of some of Silicon Valley's biggest VCs are sure to be boosted and supported.
The flip side of this concentrated power to some of Silicon Valley's more libertarian conservatives is that tech companies on the wrong side of this realignment might find trouble. Musk adding Microsoft to his OpenAI lawsuit is an early tell of this. It'll be interesting to see where Zuckerberg and Bezos land given Trump's animosity to both.
Second, for democratic countries outside of the US, we're going to see a severe erosion of digital governance sovereignty. Simply put, it's going to become tremendously hard for countries to govern digital technologies, including online platforms, AI, biotech, and crypto, in ways that aren't aligned with US interests. The main lever the Trump administration has to pull in this regard is bilateral trade agreements. These are going to be the big international sticks, likely to overwhelm both the enforcement of existing tech policy and the making of new tech policy.
In Canada, for example, our News Media Bargaining Code, our Online Streaming Act, and our Digital Services Tax are already under fire in US trade disputes. When the USMCA is reopened, as seems likely, expect all of these to be on the table, and expect the Canadian government, whoever is in power, to fold, prioritizing US trade relations over our digital policy agenda. The broader spillover effect of this trade pressure is that countries are unlikely to develop new digital policies over the course of the Trump term. And for those policies that aren't repealed, enforcement of existing laws is likely to be slowed or halted entirely. Europe, for example, is very unlikely to enforce Digital Services Act provisions against X.
Third, we're likely to see the silencing of US researchers and civil society groups working in the tech and democracy space. This will be done ironically in the name of free speech. Early attacks from Jim Jordan against disinformation researchers at US universities are only going to be ramped up. Marc Andreessen and Musk have both called for researchers working on election interference and misinformation to be prosecuted. And Trump has called for the suspension of nonprofit status to universities that have housed this work.
Faced with this kind of existential threat, universities are very likely to abandon these scholars and their labs entirely. Civil society groups working on these same issues are going to be targeted, and many are sure to close under this pressure. It's simply tragic that efforts to better understand how information flows through our digital media ecosystem will be rendered impossible right when they're needed most, at a time when the health and integrity of that ecosystem are under attack. All in the name of protecting free speech. This is Kafkaesque, to say the least.
Fourth, and in part as a result of all of the above, internationally, we may see new political space opened up for conversations about national communications infrastructure. For decades, the driving force in the media policy debate has been one of globalization and the adoption of largely US-based platforms. This argument has been a real headwind for those who, as in previous generations, urged the development of national capacity and protectionist media policy. But I wonder how long the status quo is tenable in a world where the richest person in the world owns a major social media platform and dominates global low-orbit broadband.
Does a country like Canada, for example, want to hand our media infrastructure over to a single individual? One who has shown careless disregard for the one media platform he already controls and shapes? Will other countries follow America's lead if Trump sells US broadcast licenses and targets American journalism? Will killing Section 230, as Trump has said he wants to do, and the limits that would place on platforms moderating even the worst online abuse, further hasten the enforcement of national digital borders?
Fifth and finally, how things play out for AI is actually a bit of a mystery, but the outcome will likely err on the side of unregulated markets. While Musk may once have been a champion of AI regulation, with legitimate concerns about unchecked AGI, he now seems more concerned about the political bias of AI than about any sort of existential risk. As the head of a new government agency mandated to cut a third of the federal budget, Musk is more likely to see AI as a cheap replacement for human labor than as a threat that needs a new agency to regulate.
In all of this, one thing is certain: we really are in for a bumpy ride. For those who have been concerned about the relationship between political and tech power for well over a decade, our work has only just begun. I'm Taylor Owen, and thanks for watching.
Posting this message won’t save you from Meta AI
If you’ve been on Facebook recently, you might have seen friends or even celebrities posting about Meta’s artificial intelligence. A viral message reads like this:
“Goodbye, Meta AI. Please note that an attorney has advised us to put this on; failure to do so may result in legal consequences. As Meta is now a public entity, all members must post a similar statement. If you do not post at least once, it will be assumed you are OK with them using your information and photos. I do not give Meta or anyone else permission to use any of my personal data, profile information or photos.”
This message is legally bunk. Posting an image with these words offers people no legal protections against Meta or how it uses your data for training its AI. Additionally, Meta is only public in the sense that it’s been a publicly traded company on the Nasdaq stock market since 2012.
So, how can you actually opt out? Well, if you’re in the US, you can’t. In Europe and the UK, where there are privacy laws, you can follow these helpful instructions published by MIT Technology Review to keep what you post out of Meta’s training algorithms.
Opinion: Pavel Durov, Mark Zuckerberg, and a child in a dungeon
Perhaps you have heard of the city of Omelas. It is a seaside paradise. Everyone there lives in bliss. There are churches but no priests. Sex and beer are readily available but consumed only in moderation. There are carnivals and horse races. Beautiful children play flutes in the streets.
But Omelas, the creation of science fiction writer Ursula Le Guin, has an open secret: There is a dungeon in one of the houses, and inside it is a starving, abused child who lives in its own excrement. Everyone in Omelas knows about the child, who will never be freed from captivity. The unusual, utopian happiness of Omelas, we learn, depends entirely on the misery of this child.
That’s not the end of the tale of Omelas, which I’ll return to later. But the story's point is that it asks us to think about the prices we’re willing to pay for the kinds of worlds we want. And that’s why it’s a story that, this week at least, has a lot to do with the internet and free speech.
On Saturday, French police arrested Pavel Durov, the Russian-born CEO of Telegram, at an airport near Paris.
Telegram is a Wild West sort of messaging platform, known for lax moderation, shady characters, and an openness to dissidents from authoritarian societies. It’s where close to one billion people can go to chat with family in Belarus, hang out with Hamas, buy weapons, plot Vladimir Putin’s downfall, or watch videos of Chechen warlord Ramzan Kadyrov shooting machine guns at various rocks and trees.
After holding Durov for three days, a French court charged him on Wednesday with a six-count rap sheet and released him on $6 million bail. French authorities say Durov refused to cooperate with investigations of groups that were using Telegram to violate European laws: money laundering, trafficking, and child sexual abuse offenses. Specifically, they say, Telegram refused to honor legally obtained warrants.
A chorus of free speech advocates has rushed to his defense. Chief among them is Elon Musk, who responded to Durov’s arrest by suggesting that, within a decade, Europeans will be executed for merely liking the wrong memes. Musk himself is in Brussels’ crosshairs over whether X moderates content in line with (potentially subjective) hate speech laws.
Somewhat less convincingly, the Kremlin – the seat of power in a country where critics of the government often wind up in jail, in exile, or in a pine box – raised the alarm about Durov’s arrest, citing it as an assault on freedom of speech.
I have no way of knowing whether the charges against Durov have merit. That will be up to the French courts to prove. And it is doubtless true that Telegram provides a real free speech space in some truly rotten authoritarian societies (I won’t believe the rumors of Durov’s collusion with the Kremlin until they are backed by something more than the accident of his birthplace.)
But based on what we do know so far, the free speech defense of Durov comes from a real-world kind of Omelas.
Even the most ferocious free speech advocates understand that there are reasonable limitations. Musk himself has said X will take down any content that is “illegal.”
Maybe some laws are faulty or stupid. Perhaps hate speech restrictions really are too subjective in Europe. But if you live in a world where the value of free speech on a platform like Telegram is so high that it should be functionally immune from laws that govern, say, child abuse, then you are picking a certain kind of Omelas that, as it happens, looks very similar to Le Guin’s. A child may pay the price for the utopia that you want.
But at the same time, there’s another Omelas to consider.
On Tuesday, Mark Zuckerberg sent a letter to Congress in which he admitted that during the pandemic, he had bowed to pressure from the Biden administration to suppress certain voices who dissented from the official COVID messaging.
Zuck said he regretted doing so – the sense being that the banned content wasn’t, in hindsight, really worth banning – and that his company would speak out “more forcefully” against government pressure next time.
Just to reiterate what he says happened: The head of the world’s most powerful government got the head of the world’s most powerful social media company to suppress certain voices that, in hindsight, shouldn’t have been suppressed. You do not have to be part of the Free Speech Absolutist Club™ to be alarmed by that.
It’s fair to say, look, we didn’t know then what we later learned about a whole range of pandemic policies on masking, lockdowns, school closures, vaccine efficacy, and so on. And there were plenty of absolutely psychotic and dangerous ideas floating around, to be sure.
What’s more, there are plenty of real problems with social media, hate, and violence – the velocity of bad or destructive information is immense, and the profit incentives behind echo-chambering turn the marketplace of ideas into something more like a food court of unchecked grievances.
But in a world where the only way we know how to find the best answers is to inquire and critique, governments calling audibles on what social media sites can and can’t post is a road to a dark place. It’s another kind of Omelas – a utopia of officially sanitized “truths,” where a person with a different idea about what’s happening may find themselves locked away.
At the end of Le Guin’s story, by the way, something curious happens. A small number of people make a dangerous choice. Rather than live in a society where utopia is built on a singular misery, they simply leave.
Unfortunately, we don’t have this option. We are stuck here.
So what’s the right balance between speech and security that won’t leave anyone in a dungeon?
Why Meta opened up
Last week, Meta CEO Mark Zuckerberg announced his intention to build artificial general intelligence, or AGI — a standard whereby AI will have human-level intelligence in all fields – and said Meta will have 350,000 high-powered NVIDIA graphics chips by the end of the year.
Zuckerberg isn’t alone in his intentions – Meta joins a long list of tech firms trying to build a super-powered AI. But he is alone in saying he wants to make Meta’s AGI open-source. “Our long-term vision is to build general intelligence, open source it responsibly, and make it widely available so everyone can benefit,” Zuckerberg said. Um, everyone?
Critics have serious concerns about the advent of the still-hypothetical AGI. Publishing such technology on the open web is a whole other story. “In the wrong hands, technology like this could do a great deal of harm. It is so irresponsible for a company to suggest it,” University of Southampton professor Wendy Hall, who advises the UN on AI issues, told The Guardian. She added that it is “really very scary” for Zuckerberg to even consider it.
Unpacking Meta’s shift in AI focus
Meta has been developing artificial intelligence for more than a decade. The company first hired the esteemed academic Yann LeCun to helm a research lab originally called FAIR, or Facebook Artificial Intelligence Research, and now called Meta AI. LeCun, a Turing Award-winning computer scientist, splits his time between Meta and his professorial post at New York University.
But even with LeCun behind the wheel, most of Meta’s AI work was meant to supercharge its existing products — namely, its social media platforms, Facebook and Instagram. That included the ranking and recommendation algorithms for the apps’ news feeds, image recognition, and its all-important advertising platform. Meta makes most of its money on ads, after all.
While Meta is a closed ecosystem for users posting content or advertisers buying ad space, it's considerably more open on the technical side. “They're a walled garden for advertisers, but they've always pitched themselves as an open platform when it comes to tech,” said Yoram Wurmser, a principal analyst at Insider Intelligence. “They explicitly like to differentiate themselves in that regard from other tech companies, particularly Apple, which is very guarded about their software platforms.” Differentiation like that can help Meta attract talent from elsewhere in Silicon Valley, but especially from academia, where open-source publishing is the standard – as opposed to proprietary research that might never see the light of day.
Opening the door
When Meta built its generative AI models early last year, the decision to go open-source, publishing the code of its LLaMA language model for all to use, was born out of FOMO (fear of missing out) and frustration. In early 2023, OpenAI was getting all of the buzz for its groundbreaking chatbot ChatGPT, and Meta — a Silicon Valley stalwart that’s been in the AI game for more than a decade — reportedly felt left behind.
So LeCun proposed going open-source for its large language model (once called Genesis and renamed to the infinitely more catchy LLaMA). Meta’s legal team cautioned it could put Meta further in the crosshairs of regulators, who might be concerned about such a powerful codebase living on the open internet, where bad actors — criminals and foreign adversaries — could leverage it. Feeling the heat and the urgency of the moment for attracting talent, hype, and investor fervor, Zuckerberg agreed with LeCun, and Meta released its original LLaMA model in February 2023. Meta has since released LLaMA 2 in partnership with OpenAI backer Microsoft in July, and has publicly confirmed it’s working on the next iteration, LLaMA 3.
Pros and cons of being an open book
Meta is one of the few AI-focused firms currently making their models open-source. There’s also the US-based startup HuggingFace, which oversaw the development of a model called Bloom, and the French firm Mistral AI, which has multiple open-source models. But Meta is the only established Silicon Valley giant pursuing this high-risk route head-on.
The potential reward is clear: Open-source development might help Meta attract top engineers, and its accessibility could make it the default system for tinkerers unwilling or unable to shell out for enterprise versions of OpenAI’s GPT-4. “It also gets a lot of people to do free labor for Meta,” said David Evan Harris, a public scholar at UC Berkeley and a former research manager for responsible AI at Meta. “It gets a lot of people to play with that model, find ways to optimize it, find ways of making it more efficient, find ways of making it better.” Open-source software encourages innovation and can enable smaller companies or independent developers to build out new applications that might’ve been cost-prohibitive otherwise.
But the risk is clear too: When you publish software on the internet, anyone can use it. That means criminals could use open models to perpetuate scams and fraud, and to generate misinformation or non-consensual sexual material. And, of pressing interest to the US, foreign adversaries will have unfettered access too. Harris says that an open-source language model is a “dream tool” for people trying to further sow discord around elections, deceive voters, and instill distrust in reliable democratic systems.
Regulators have already expressed concern: US Sens. Josh Hawley and Richard Blumenthal sent a letter to Meta last summer demanding answers about its language model. “By purporting to release LLaMA for the purpose of researching the abuse of AI, Meta effectively appears to have put a powerful tool in the hands of bad actors to actually engage in such abuse without much discernable forethought, preparation, or safeguards,” they wrote.
The Biden administration directed the Commerce Department in its October AI executive order to investigate the risk of “widely available” models. “When the weights for a dual-use foundation model are widely available — such as when they are publicly posted on the Internet — there can be substantial benefits to innovation, but also substantial security risks, such as the removal of safeguards within the model,” the order says.
Open-source purists might say that what Meta is doing is not truly open-source because it has usage restrictions: For example, it doesn’t allow the model to be used by companies with 700 million monthly users without a license, or by anyone who doesn’t disclose “known dangers” to users. But these restrictions are merely warnings without a real method of enforcement, Harris says: “The threat of lawsuit is the enforcement.”
That might deter Meta’s biggest corporate rivals, such as Google or TikTok, from pilfering the company’s code to boost their own work, but it’s unlikely to deter criminals or malicious foreign actors.
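To make the enforcement gap above concrete, here is a minimal sketch of the license gate the article describes — the 700 million monthly-user threshold comes from the article, but the function name and structure are hypothetical illustrations, not Meta's actual license text or tooling. Nothing in the code itself stops a bad actor; it only models who is *supposed* to seek a separate license.

```python
# Hypothetical sketch of the usage restriction described in the article.
# The 700M monthly-active-user figure is from the article; names are illustrative.
MAU_LICENSE_THRESHOLD = 700_000_000

def needs_commercial_license(monthly_active_users: int) -> bool:
    """Companies at or above the threshold must negotiate a separate
    license from Meta; everyone else falls under the open terms."""
    return monthly_active_users >= MAU_LICENSE_THRESHOLD

# A small startup falls under the open terms...
print(needs_commercial_license(5_000_000))      # False
# ...while a platform with a billion users would need its own license.
print(needs_commercial_license(1_000_000_000))  # True
```

The point of the sketch is that this is a paper barrier: as Harris notes, nothing technical enforces it, so only actors worried about a lawsuit — large companies, not criminals — are constrained.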
Meta is reorienting its ambitions around artificial intelligence. Yes, Meta has bet big on the metaverse, an all-encompassing digital world powered by virtual and augmented reality technology, going so far as to change its official name from Facebook to reflect its ambitions. But the metaverse hype has been largely replaced by AI hype, and Meta doesn’t want to be left behind — certainly not for something it’s been working on for a long time.
What We’re Ignoring: Revenge of the nerds
There’s growing evidence that the much-ballyhooed mixed martial arts battle between X-Man Elon Musk and Meta CEO Mark Zuckerberg may actually take place.
Musk first posted that he would be up for a cage match against Zuckerberg in June. Since then, the two moguls have traded multiple barbs on the topic. Now Zuckerberg, who trains in jiu jitsu, has shared a screenshot of a conversation with his wife Priscilla Chan in which he crows about installing a training cage in their backyard. (Her response: “I have been working on that grass for two years.”)
Not to be outdone, Musk posted to X that he is preparing for the fight by “lifting weights throughout the day,” and that the "Zuck v Musk fight will be live-streamed on X. All proceeds will go to charity for veterans.”
Zuckerberg says he is "not holding his breath" because he offered a date of Aug. 26 but didn't hear back. No word yet on whether Threads will attempt a rival broadcast. Stay tuned. Or don’t.
NATO membership for Ukraine?
Ian Bremmer shares his insights on global politics this week on World In :60.
Sweden will join NATO. Is Ukraine next?
Well, sure, but next doesn't mean tomorrow. Next means like at some indeterminate point, which makes President Zelensky pretty unhappy and he's made that clear, but he has massive amounts of support from NATO right now, and he needs that support to continue. So, it's not like he has a lot of leverage on joining NATO. As long as the Americans are saying it's not going to happen, that means it's not going to happen. No, the real issue is how much and how concrete the multilateral security guarantees that can be provided by NATO to Ukraine actually turn out to be. We will be watching that space.
Is Taiwan readying itself for an invasion by conducting its biggest evacuation drills in years?
I wouldn't say readying for an invasion. I would say, you know, sort of preparing for every contingency, and that means taking care of your people. I mean, the Americans weren't readying themselves for nuclear Armageddon by doing drills in classrooms and by, you know, having bomb shelters, but they had them because we were in a world where nuclear war was thinkable. Well, we're in a world where Chinese, mainland Chinese invasion of Taiwan is very unlikely, but thinkable. And of course, the Taiwanese have to think about it a lot more than you and I do.
Elon vs. Zuck. Thoughts?
Well, my thoughts are mostly about the battle of the social media platforms and the fact that of course you now have the big gorilla in the room with a Twitter competitor. And I've seen it pretty functional for the first several days. Obviously, massive numbers of people are on it, mostly because it's really easy to sign up. They're all coming over from Instagram and it's owned by the same person, by the same shareholders. Unclear to me who's going to win. If I had to bet, I would say that within 6 or 12 months, we're going to have a fragmented social media landscape politically, the way we do blogosphere or cable news, which is, I guess, good for consumer choice, but it's bad for civil society. What else is new?
Hard Numbers: Strong Threads, Italian ‘Succession,’ Democracy dwindles in Hong Kong, and the Romanian port keeping the world in grain
53: Silvio Berlusconi, Italy’s former PM and richest man, died last month without leaving instructions for how his $7.6 billion fortune should be distributed throughout his family. Berlusconi never publicly declared a successor to his business empire spanning real estate, television, cinema, and sports, leaving his two eldest children to jointly own 53%. “Succession” season 5 anyone?
88: In another hit to democratic freedom in Hong Kong, its legislature voted to overhaul district-level elections, reducing the number of directly elected officials from 452 to 88. This decision effectively eliminates the pro-democracy faction of the government, which in the last election, humiliated the pro-Beijing camp, winning 90% of the seats.
27 million: The Romanian port of Constanta is preparing to handle 27 million tons of Ukrainian grain as the 2023 harvest begins. That’s more than double the amount it shipped annually before the Ukraine war, and it’s struggling to accommodate the influx in addition to Romania’s own grain exports.

I take responsibility: world leaders edition
Perhaps you've seen some of the celebrity "I Take Responsibility" videos that have gone viral recently. Well, now some of our world leaders, including Trump, MBS, Putin and Bolsonaro, have jumped in on the act too, with their own twist.