Hard Numbers: Pakistan indicts Imran Khan (again), RFK wants polio vaccine revoked, India eyes one election, Australia charges big tech, Zuckerberg and Bezos make YUGE donations
200: Former Pakistani Prime Minister Imran Khan and his wife, Bushra Bibi, were indicted on Thursday on charges of unlawfully selling state gifts, including jewelry, at undervalued prices. They pleaded not guilty the same day, calling the charges politically motivated amid nearly 200 cases Khan has faced since his 2022 ouster. Khan and Bibi received 14-year sentences before this year’s election, but those terms were suspended on appeal following a prior three-year sentence in a related case.
14: A lawyer for Robert F. Kennedy Jr., Donald Trump's pick to helm the Department of Health and Human Services, has filed a petition to pause the distribution of 14 vaccines – including those for polio, hepatitis A, and other deadly diseases. The petition also asks the agency to revoke its polio vaccine approval and end COVID-19 vaccine mandates around the country.
1: India’s cabinet has approved legislation for simultaneous national and state elections, the first step in advancing Prime Minister Narendra Modi’s “One Nation One Election” plan. Supporters say it would put a stop to India’s state of “perpetual elections,” but critics argue it would favor the national ruling party, Modi’s BJP, in local races.
160,000,000: In its latest crackdown on Big Tech, Australia will charge social media giants like Meta and Google millions if they don’t pay local media for news content. All platforms with revenue over AU$160 million will be obliged to pay up, but charges will be offset by any commercial agreements voluntarily struck between the platforms and news media businesses.
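As a back-of-the-envelope illustration of how that offset mechanism would work, here is a minimal Python sketch. Only the AU$160 million revenue threshold comes from the reporting above; the base charge and deal payments are placeholder inputs, since the scheme's actual figures aren't given.

```python
def annual_charge(platform_revenue_aud: float,
                  base_charge_aud: float,
                  voluntary_deal_payments_aud: float) -> float:
    """Sketch of the proposed offset mechanism: platforms with revenue above
    the AU$160 million threshold owe a charge, reduced by whatever they
    already pay news outlets under voluntary commercial agreements.
    The base charge is a placeholder; the scheme's actual rate isn't given."""
    REVENUE_THRESHOLD_AUD = 160_000_000
    if platform_revenue_aud <= REVENUE_THRESHOLD_AUD:
        return 0.0  # below the threshold, the charge doesn't apply
    # Voluntary payments offset the charge, but liability can't go negative.
    return max(0.0, base_charge_aud - voluntary_deal_payments_aud)

# Hypothetical example: a platform with AU$5B revenue, a AU$50M charge,
# and AU$30M in voluntary deals would owe AU$20M.
print(annual_charge(5e9, 50e6, 30e6))  # 20000000.0
```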
1,000,000: Nothing says sorry quite like cold hard cash. Meta announced on Wednesday that it's donating $1 million to the inaugural fund of President-elect Donald Trump, and Amazon.com, not to be outdone, plans to do the same. The moves appear to be fence-mending gestures – or, as critics call them, attempts to curry favor. Meta founder Mark Zuckerberg's relationship with the president-elect soured after Facebook and Instagram suspended Trump's accounts in 2021 for his praise of the Jan. 6 Capitol rioters, and Trump has been critical of Jeff Bezos over his ownership of the Washington Post and the newspaper's political coverage.
Canada sues Google over ad tech – and it’s not alone
Canada wants Google to split two of its ad tech tools and pay an administrative penalty “equal to three times the value of the benefit derived from Google’s anti-competitive practices, or if that amount cannot be reasonably determined, 3% of Google’s worldwide gross revenues.” So, potentially a decent chunk of cash.
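Since the formula is spelled out in the filing, a minimal Python sketch makes the arithmetic concrete. The 3x multiplier and the 3% fallback come from the quote above; every dollar figure in the example is hypothetical.

```python
from typing import Optional

def proposed_penalty(benefit: Optional[float],
                     worldwide_gross_revenue: float) -> float:
    """Penalty formula quoted in Canada's filing: three times the benefit
    derived from the anti-competitive practices, or, if that benefit cannot
    be reasonably determined, 3% of worldwide gross revenues."""
    if benefit is not None:
        return 3 * benefit
    return 0.03 * worldwide_gross_revenue

# Hypothetical example: if the benefit can't be determined and worldwide
# gross revenue were $300 billion, the penalty would be $9 billion.
print(proposed_penalty(None, 300e9))  # 9000000000.0
```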
Google is facing suits all over the place right now as countries struggle to rein in the company.
In 2021, the US launched a suit against Google alleging it was subverting competition in the online ad space. That suit, similar to the Canadian case, is ongoing. The US is also looking to break up Google, demanding it sell off its Chrome browser after a judge ruled in a separate case that the company has a monopoly on internet searches.
The European Union is also fighting Google over its ad practices, leaving the company encircled but nowhere near defeated, as cases, appeals, and deal-making will drag on for months or years.
Opinion: Pavel Durov, Mark Zuckerberg, and a child in a dungeon
Perhaps you have heard of the city of Omelas. It is a seaside paradise. Everyone there lives in bliss. There are churches but no priests. Sex and beer are readily available but consumed only in moderation. There are carnivals and horse races. Beautiful children play flutes in the streets.
But Omelas, the creation of science fiction writer Ursula Le Guin, has an open secret: There is a dungeon in one of the houses, and inside it is a starving, abused child who lives in its own excrement. Everyone in Omelas knows about the child, who will never be freed from captivity. The unusual, utopian happiness of Omelas, we learn, depends entirely on the misery of this child.
That’s not the end of the tale of Omelas, which I’ll return to later. But the point of the story is to make us think about the prices we’re willing to pay for the kinds of worlds we want. And that’s why, this week at least, it’s a story that has a lot to do with the internet and free speech.
On Saturday, French police arrested Pavel Durov, the Russian-born CEO of Telegram, at an airport near Paris.
Telegram is a Wild West sort of messaging platform, known for lax moderation, shady characters, and an openness to dissidents from authoritarian societies. It’s where close to one billion people can go to chat with family in Belarus, hang out with Hamas, buy weapons, plot Vladimir Putin’s downfall, or watch videos of Chechen warlord Ramzan Kadyrov shooting machine guns at various rocks and trees.
After holding Durov for three days, a French court charged him on Wednesday with six offenses and released him on $6 million bail. French authorities say Durov refused to cooperate with investigations of groups that were using Telegram to violate European laws: money laundering, trafficking, and child sexual abuse offenses. Specifically, they say, Telegram refused to honor legally obtained warrants.
A chorus of free speech advocates has rushed to his defense. Chief among them is Elon Musk, who responded to Durov’s arrest by suggesting that, within a decade, Europeans will be executed for merely liking the wrong memes. Musk himself is in Brussels’ crosshairs over whether X moderates content in line with (potentially subjective) hate speech laws.
Somewhat less convincingly, the Kremlin – the seat of power in a country where critics of the government often wind up in jail, in exile, or in a pine box – raised the alarm about Durov’s arrest, citing it as an assault on freedom of speech.
I have no way of knowing whether the charges against Durov have merit. That will be for the French courts to decide. And it is doubtless true that Telegram provides a real free speech space in some truly rotten authoritarian societies. (I won’t believe the rumors of Durov’s collusion with the Kremlin until they are backed by something more than the accident of his birthplace.)
But based on what we do know so far, the free speech defense of Durov comes from a real-world kind of Omelas.
Even the most ferocious free speech advocates understand that there are reasonable limitations. Musk himself has said X will take down any content that is “illegal.”
Maybe some laws are faulty or stupid. Perhaps hate speech restrictions really are too subjective in Europe. But if you live in a world where the value of free speech on a platform like Telegram is so high that it should be functionally immune from laws that govern, say, child abuse, then you are picking a certain kind of Omelas that, as it happens, looks very similar to Le Guin’s. A child may pay the price for the utopia that you want.
But at the same time, there’s another Omelas to consider.
On Tuesday, Mark Zuckerberg sent a letter to Congress in which he admitted that during the pandemic, he had bowed to pressure from the Biden administration to suppress certain voices who dissented from the official COVID messaging.
Zuck said he regretted doing so – the sense being that the banned content wasn’t, in hindsight, really worth banning – and that his company would speak out “more forcefully” against government pressure next time.
Just to reiterate what he says happened: The head of the world’s most powerful government got the head of the world’s most powerful social media company to suppress certain voices that, in hindsight, shouldn’t have been suppressed. You do not have to be part of the Free Speech Absolutist Club™ to be alarmed by that.
It’s fair to say, look, we didn’t know then what we later learned about a whole range of pandemic policies on masking, lockdowns, school closures, vaccine efficacy, and so on. And there were plenty of absolutely psychotic and dangerous ideas floating around, to be sure.
What’s more, there are plenty of real problems with social media, hate, and violence – the velocity of bad or destructive information is immense, and the profit incentives behind echo-chambering turn the marketplace of ideas into something more like a food court of unchecked grievances.
But in a world where the only way we know how to find the best answers is to inquire and critique, governments calling audibles on what social media sites can and can’t post is a road to a dark place. It’s another kind of Omelas – a utopia of officially sanitized “truths,” where a person with a different idea about what’s happening may find themselves locked away.
At the end of Le Guin’s story, by the way, something curious happens. A small number of people make a dangerous choice. Rather than live in a society where utopia is built on a singular misery, they simply leave.
Unfortunately, we don’t have this option. We are stuck here.
So what’s the right balance between speech and security that won’t leave anyone in a dungeon?
China spends big on AI
Much of China’s AI industry is reliant on low-grade chips from US chipmaker Nvidia, which is barred from selling its top models because of US export controls. (For more on the US-China chip race, check out GZERO AI’s interview with Trump export control chief Nazak Nikakhtar from last week’s edition.)
What do Democrats want for AI?
At last week’s Democratic National Convention, the Democratic Party and its newly minted presidential candidate, Vice President Kamala Harris, made little reference to technology policy or artificial intelligence. But the party’s platform and a few key mentions at the DNC show how a Harris administration would handle AI.
In the official party platform, there are three mentions of AI: First, it says Democrats will support historic federal investments in research and development, break “new frontiers of science,” and create jobs in artificial intelligence among other sectors. It also says it will invest in “technology and forces that meet the threats of the future,” including artificial intelligence and unmanned systems.
Lastly, the Dems’ platform calls for regulation to bridge “the gap between the pace of innovation and the development of rules of the road governing the most consequential domains of technology.”
“Democrats will avoid a race to the bottom, where countries hostile to democratic values shape our future,” it notes.
Harris echoed that final point in her DNC keynote address. “I will make sure that we lead the world into the future on space and artificial intelligence,” she said. “That America, not China, wins the competition for the 21st century, and that we strengthen, not abdicate our global leadership.”
The Republican Party platform, by contrast, promises to repeal Biden’s 2023 executive order on AI, calling it “dangerous,” hindering innovation, and imposing “radical left-wing ideas” on the technology. “In its place, Republicans support AI development rooted in free speech and human flourishing,” it says. (The platform doesn’t go into specifics about how the executive order is harmful or what a free speech-oriented AI policy would entail.) In his RNC address, Donald Trump didn’t mention artificial intelligence or tech policy but talked at length about beating back China economically.
GZERO asked Don Beyer, the Virginia Democratic congressman going back to school to study artificial intelligence, what he thought of his party’s platform and Harris’ remarks on AI. Beyer said that Harris has struck the right balance between promoting American competitiveness and outlining guardrails to minimize the technology’s risks. “The vice president has been personally involved in many of the administration’s efforts to ensure American leadership in AI, from establishing the US AI Safety Institute to launching new philanthropic initiatives for public interest AI, and I expect her future administration to continue that leadership,” he said.
Google Search is making things up
Google has defended its new AI Overviews feature, saying these strange answers are isolated incidents. “The vast majority of AI overviews provide high-quality information, with links to dig deeper on the web,” the tech giant told the BBC. The Verge reported that Google is manually removing embarrassing search results after users post what they find on social media.
This is Google’s second major faux pas in its quest to bring AI to the masses. In February, after it released its Gemini AI system, its image generator kept over-indexing for diverse images of individuals — even when doing so was wildly inappropriate. It spit out Black and Asian Nazi soldiers and Native Americans dressed in Viking garb.
The fact that Google is willing to introduce AI into its cash cow of a search engine signals it is serious about integrating the technology into everything it does. It’s even decided to introduce advertising into these AI Overviews. But the company is quickly finding out that when AI systems hallucinate, not only can that spread misinformation — but it can also make your product a public laughingstock.
Section 230 won’t be a savior for Generative AI
In the US, Section 230 of the Communications Decency Act has been called the law that “created the internet.” It provides legal liability protections to internet companies that host third-party speech, such as social media platforms that rely on user-generated content or news websites with comment sections. Essentially, it prevents companies like Meta or X from being on the hook when their users defame one another, or commit certain other civil wrongs, on their site.
In recent years, 230 has become a lightning rod for critics on both sides of the political aisle seeking to punish Big Tech for perceived bad behavior.
But Section 230 likely does not apply to generative AI services like ChatGPT or Claude. While this is still untested in the US courts, many legal experts believe that the output of such chatbots is first-party speech, meaning someone could reasonably sue a company like OpenAI or Anthropic over output, especially if it plays fast and loose with the truth.
Supreme Court Justice Neil Gorsuch suggested during oral arguments last year that AI chatbots would not be protected by Section 230. “Artificial intelligence generates poetry,” Gorsuch said. “It generates polemics today that would be content that goes beyond picking, choosing, analyzing, or digesting content. And that is not protected.”
Without those protections, University of North Carolina professor Matt Perault noted in an essay in Lawfare, the companies behind LLMs are in a “compliance minefield.” They might be forced to dramatically narrow the scope and scale of how their products work if any “company that deploys [a large language model] can be dragged into lengthy, costly litigation any time a user prompts the tool to generate text that creates legal risk.”
We’ve already seen similar forces at play in the court of public opinion. Facing criticism around political misinformation, racist images, and deepfakes of politicians, many generative AI companies have limited what their programs are willing to generate – in some cases, barring political or controversial content entirely.
Lawyer Jess Miers of the industry trade group Chamber of Progress, however, argues in Techdirt that 230 should protect generative AI. She says that because the output depends “entirely upon whatever query or instructions its users may provide, malicious or otherwise,” the users should be the ones left holding the legal bag. But proving that in court would be an uphill battle, she concedes, in part because defendants would have the onerous task of explaining to judges how these technologies actually work.
The picture gets even more complex: Courts will also have to decide whether only the creators of LLMs receive Section 230 protections, or if companies using the tech on their own platforms are also covered, as Washington Post writer Will Oremus pondered on X last week.
In other words, is Meta liable if users post legally problematic AI-generated content on Facebook? Or what about a platform like X, which incorporates the AI tool Grok for its premium users?
Mark Lemley, a Stanford Law School professor, told GZERO that the liability holder depends on the law but that, generally speaking, the liability falls to whoever deploys the technology. “They may in turn have a claim against the company that designed [or] trained the model,” he said, “but a lot will depend on what, if anything, the deploying company does to fine-tune the model after they get it.”
These are all important questions for the courts to decide, but the liability issue for generative AI won’t end with Section 230. The next battle, of course, is copyright law. Even if tech firms are afforded some protections over what their models generate, Section 230 won’t protect them if courts find that generative AI companies are illegally using copyrighted works.
Who pays the price for a TikTok ban?
It’s a tough time to be an influencer in America.
TikTok’s future in the United States may be up against the clock after the House voted in favor of banning the popular social media app if its Chinese owner, ByteDance, doesn’t sell. President Joe Biden said he’d sign the bill if it reaches his desk, but it’s unclear whether the Senate will pass the legislation.
Biden and a good chunk of Congress are worried ByteDance is essentially an arm of the Chinese Communist Party. Do they have a point, or are they just fearmongering in an election year amid newly stabilized but precarious relations between Washington and Beijing?
All eyes on China
In 2017, China passed a national security law that allows Beijing to compel Chinese companies to share their data under certain circumstances. That law and others have US officials worried that China could collect information from TikTok on roughly 150 million US users. Pro-ban advocates also lament that the CCP has a seat on the ByteDance board, meaning the party has direct influence over the company.
Another worry: TikTok could push Chinese propaganda on Americans, shaping domestic politics and electoral outcomes at a time when US democracy is fragile. TikTok denies the accusations, and there’s no public evidence that China has used TikTok to spy on Americans.
Still, there is growing bipartisan support for taking on TikTok and its connections to China, says Xiaomeng Lu, director of geo-technology at Eurasia Group. And the public may not be privy to all of the motivations for banning the app. “We don’t know what the US intelligence community knows,” she says.
Incidentally, none of these security worries have stopped members of Congress who voted for the potential ban from using TikTok, while a few who voted against it – including Reps. Alexandria Ocasio-Cortez, Jamaal Bowman, Ilhan Omar, and Cori Bush – are users themselves.
In theory, the TikTok bill could apply to other apps – anything designated as being too close to foreign adversaries and a threat to the US or its interests. But TikTok and China are the main focus right now, and not just for the US ...
View up north
Canada banned TikTok from government phones in 2023, the same year Ottawa launched a security review of the wildly popular app without letting Canadians, 3.2 million of whom are users, know it was doing so.
Ottawa isn’t rushing to get ahead of Washington on this, so it could be a while before we see the results of the review. There’s no indication of any TikTok bill in the works, but there may be no need for one. The security review could lead to “enhanced scrutiny” of TikTok under the Investment Canada Act by way of a provision concerning digital media.
Canada would also have a hard time breaking from the US if it decides to deep-six TikTok given the extent to which the two countries are intertwined when it comes to national security.
Consequences of tanking TikTok
If there is a ban, critics are already warning of dire consequences. The economic impact could be substantial, especially for those who make a living on the app. That includes 7 million small and medium businesses in the US that contribute tens of billions of dollars to the country’s GDP, according to a report by Oxford Economics and TikTok. In Canada, TikTok has an ad reach of 36% among all adults. If app stores are forced to remove TikTok, it will be a blow to the influencer-advertising industrial complex that drives an increasingly large segment of the two economies.
There are also fears a ban will infringe on free speech rights, including the capacity for journalists to do their job and reach eyeballs. In 2022, 67% of US teens aged 13 to 17 used TikTok. In Canada, 14% of Canadians who used the internet were on TikTok, including 53% of connected 18-24-year-olds – a majority of that age group.
Meanwhile, there’s consternation that a ban would undermine US criticisms of foreign states, particularly authoritarian ones, for their censorship regimes. Some say an American ban would embolden authoritarians who would be keen to use the ban as justification for invoking or extending their crackdowns.
Big Tech could grow
A forced TikTok sale could also invite its own set of problems. Only so many entities are capable of purchasing a tech behemoth – Meta, Apple, and Alphabet. But if they hoovered up a competitor, there would be concerns about further entrenching the companies and inviting even more anti-competitive behavior among oligopolists. Also lost in the TikTok handwringing: Domestic tech companies pose their own surveillance and mis- or disinformation challenges to democracy and cohesion.
There are a lot of “ifs” between the bill passed by the House and a TikTok ban. The Senate isn’t in a rush to vote on it – doing so could take months – and if it does pass, it will almost certainly face a long series of court battles. If all of that happens and the law survives, ByteDance could in theory sell TikTok, but Beijing has said it would oppose a forced sale.
Meanwhile, there’s next to no chance Ottawa will try to force ByteDance to divest from TikTok or ban it if the US doesn’t move first. Doing so would just invite TikTok to bounce from Canada and its comparatively small market.
What about … elections?
The political consequences of a ban wouldn’t necessarily extend to the 2024 election. But if young people are bumped from the money-making app, will they retaliate at the ballot box?
Graeme Thompson, a senior global macro-geopolitics analyst at Eurasia Group, is not convinced the move will affect votes. “To the extent that it affects the elections,” he says, “it may be more about communications and how political parties and candidates get their messages out on social media.”
But with young voters already souring on Biden over issues like Gaza, some congressional Democrats warn that moving forward with a ban could seriously hurt the president at the ballot box. Besides, even as the White House raises security concerns about TikTok, the Biden campaign is still using the app to reach voters.