Section 230 won’t be a savior for Generative AI
In the US, Section 230 of the Communications Decency Act has been called the law that “created the internet.” It provides legal liability protections to internet companies that host third-party speech, such as social media platforms that rely on user-generated content or news websites with comment sections. Essentially, it prevents companies like Meta or X from being on the hook when their users defame one another, or commit certain other civil wrongs, on their site.
In recent years, 230 has become a lightning rod for critics on both sides of the political aisle seeking to punish Big Tech for perceived bad behavior.
But Section 230 likely does not apply to generative AI services like ChatGPT or Claude. While this is still untested in the US courts, many legal experts believe that the output of such chatbots is first-party speech, meaning someone could reasonably sue a company like OpenAI or Anthropic over output, especially if it plays fast and loose with the truth.
Supreme Court Justice Neil Gorsuch suggested during oral arguments last year that AI chatbots would not be protected by Section 230. “Artificial intelligence generates poetry,” Gorsuch said. “It generates polemics today that would be content that goes beyond picking, choosing, analyzing, or digesting content. And that is not protected.”
Without those protections, University of North Carolina professor Matt Perault noted in an essay in Lawfare, the companies behind LLMs are in a “compliance minefield.” They might be forced to dramatically narrow the scope and scale of how their products work if any “company that deploys [a large language model] can be dragged into lengthy, costly litigation any time a user prompts the tool to generate text that creates legal risk.”
We’ve already seen similar forces at play in the court of public opinion. Facing criticism around political misinformation, racist images, and deepfakes of politicians, many generative AI companies have limited what their programs are willing to generate – in some cases, outlawing political or controversial content entirely.
Lawyer Jess Miers of the industry trade group Chamber of Progress, however, argues in Techdirt that 230 should protect generative AI. She says that because the output depends “entirely upon whatever query or instructions its users may provide, malicious or otherwise,” the users should be the ones left holding the legal bag. But proving that in court would be an uphill battle, she concedes, in part because defendants would have the onerous task of explaining to judges how these technologies actually work.
The picture gets even more complex: Courts will also have to decide whether only the creators of LLMs receive Section 230 protections, or if companies using the tech on their own platforms are also covered, as Washington Post writer Will Oremus pondered on X last week.
In other words, is Meta liable if users post legally problematic AI-generated content on Facebook? Or what about a platform like X, which incorporates the AI tool Grok for its premium users?
Mark Lemley, a Stanford Law School professor, told GZERO that the liability holder depends on the law but that, generally speaking, the liability falls to whoever deploys the technology. “They may in turn have a claim against the company that designed [or] trained the model,” he said, “but a lot will depend on what, if anything, the deploying company does to fine-tune the model after they get it.”
These are all important questions for the courts to decide, but the liability issue for generative AI won’t end with Section 230. The next battle, of course, is copyright law. Even if tech firms are afforded some protections over what their models generate, Section 230 won’t shield them if courts find that generative AI companies are illegally using copyrighted works.
Who pays the price for a TikTok ban?
It’s a tough time to be an influencer in America.
TikTok’s future in the United States may be up against the clock after the House voted in favor of banning the popular social media app if its Chinese owner, ByteDance, doesn’t sell. President Joe Biden said he’d sign the bill if it reaches his desk, but it’s unclear whether the Senate will pass the legislation.
Biden and a good chunk of Congress are worried ByteDance is essentially an arm of the Chinese Communist Party. Do they have a point, or are they just fearmongering in an election year amid newly stabilized but precarious relations between Washington and Beijing?
All eyes on China
In 2017, China passed a national security law that allows Beijing to compel Chinese companies to share their data under certain circumstances. That law and others have US officials worried that China could collect information from TikTok on roughly 150 million US users. Pro-ban advocates also lament that the CCP holds a board seat at ByteDance’s key Chinese subsidiary, giving the party direct influence over the company.
Another worry: TikTok could push Chinese propaganda on Americans, shaping domestic politics and electoral outcomes at a time when US democracy is fragile. TikTok denies the accusations, and there’s no public evidence that China has used TikTok to spy on Americans.
Still, there is growing bipartisan support for taking on TikTok and its connections to China, says Xiaomeng Lu, director of geo-technology at Eurasia Group. And the public may not be privy to all of the motivations for banning the app. “We don’t know what the US intelligence community knows,” she says.
Incidentally, none of these security worries have stopped members of Congress who voted for the potential ban from using TikTok, while a few who voted against it – including Reps. Alexandria Ocasio-Cortez, Jamaal Bowman, Ilhan Omar, and Cori Bush – are users themselves.
In theory, the TikTok bill could apply to other apps – anything designated as being too close to foreign adversaries and a threat to the US or its interests. But TikTok and China are the main focus right now, and not just for the US ...
View up north
Canada banned TikTok from government phones in 2023, the same year Ottawa launched a security review of the wildly popular app without letting Canadians, 3.2 million of whom are users, know it was doing so.
Ottawa isn’t rushing to get ahead of Washington on this, so it could be a while before we see the results of the review. There’s no indication of any TikTok bill in the works, but there may be no need for one. The security review could lead to “enhanced scrutiny” of TikTok under the Investment Canada Act by way of a provision concerning digital media.
Canada would also have a hard time breaking from the US if it decides to deep-six TikTok given the extent to which the two countries are intertwined when it comes to national security.
Consequences of tanking TikTok
If there is a ban, critics are already warning of dire consequences. The economic impact could be substantial, especially for those who make a living on the app. That includes 7 million small and medium businesses in the US that contribute tens of billions of dollars to the country’s GDP, according to a report by Oxford Economics and TikTok. In Canada, TikTok has an ad reach of 36% among all adults. If app stores are forced to remove TikTok, it will be a blow to the influencer-advertising industrial complex that drives an increasingly large segment of the two economies.
There are also fears a ban will infringe on free speech rights, including the capacity for journalists to do their job and reach eyeballs. In 2022, 67% of US teens aged 13 to 17 used TikTok. In Canada, 14% of internet users were on TikTok, a figure that rises to 53% among connected 18-24-year-olds (who make up the vast majority of that age group).
Meanwhile, there’s consternation that a ban would undermine US criticisms of foreign states, particularly authoritarian ones, for their censorship regimes. Some say an American ban would embolden authoritarians who would be keen to use the ban as justification for invoking or extending their crackdowns.
Big Tech could grow
A forced TikTok sale could also invite its own set of problems. Only so many entities are capable of purchasing a tech behemoth – Meta, Apple, and Alphabet. But if they hoovered up a competitor, there would be concerns about further entrenching the companies and inviting even more anti-competitive behavior among oligopolists. Also lost in the TikTok handwringing: Domestic tech companies pose their own surveillance and mis- or disinformation challenges to democracy and cohesion.
There are a lot of “ifs” between the bill passed by the House and a TikTok ban. The Senate isn’t in a rush to vote on it – doing so could take months – and if it does pass, it will almost certainly face a long series of court battles. If all of that happens and the law survives, ByteDance could in theory sell TikTok, but Beijing has said it would oppose a forced sale.
Meanwhile, there’s next to no chance Ottawa will try to force ByteDance to divest from TikTok or ban it if the US doesn’t move first. Doing so would just invite TikTok to bounce from Canada and its comparatively small market.
What about … elections?
The political consequences of a ban wouldn’t necessarily extend to the 2024 election. If young people are bumped from the money-making app, will they vote with their feet?
Graeme Thompson, a senior global macro-geopolitics analyst at Eurasia Group, is not convinced the move will affect votes. “To the extent that it affects the elections,” he says, “it may be more about communications and how political parties and candidates get their messages out on social media.”
But with young voters already souring on Biden over issues like Gaza, some congressional Democrats warn that moving forward with a ban could seriously hurt the president at the ballot box. Besides, even as the White House raises security concerns about TikTok, the Biden campaign is still using the app to reach voters.
Canada’s threatened tax on tech giants risks trade war
Canadian Finance Minister Chrystia Freeland plans to unveil the federal budget on April 16, a release that will be keenly watched north and south of the border. Big Tech companies, in particular, will be looking for clues about when Canada will implement its long-promised digital services tax.
Justin Trudeau’s cash-strapped Liberal government hopes to raise up to $2.5 billion over five years by imposing a 3% tax on companies like Alphabet, Meta, Uber, Amazon, and Airbnb. First promised in the 2021 budget, the Trudeau government said it would implement the tax on Jan. 1, 2024, retroactive to 2022.
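For a rough sense of the mechanics, here is a minimal back-of-envelope sketch in Python. It assumes the levy works the way Canada's draft Digital Services Tax Act describes it, 3% on in-scope Canadian digital services revenue above a CA$20 million threshold; the revenue figure below is purely hypothetical.

```python
# Back-of-envelope sketch of the proposed levy: 3% on in-scope Canadian
# digital services revenue above a CA$20M threshold (per the draft
# Digital Services Tax Act). The revenue figure below is hypothetical.
DST_RATE = 0.03
THRESHOLD_CAD = 20_000_000  # annual in-scope Canadian revenue threshold


def dst_owed(in_scope_revenue_cad: float) -> float:
    """Tax owed on in-scope Canadian digital services revenue."""
    taxable = max(0.0, in_scope_revenue_cad - THRESHOLD_CAD)
    return DST_RATE * taxable


# A hypothetical platform booking CA$1 billion of in-scope revenue:
print(f"CA${dst_owed(1_000_000_000):,.0f}")  # CA$29,400,000
```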
Aside from raising much-needed funds, targeting tech giants has the additional benefit for Trudeau of being popular politically. His government has already whacked Alphabet and Meta with its Online News Act, forcing them to share revenues with Canadian news publishers (Meta responded by removing news links from Facebook in Canada), and its Online Harms bill, which compels social media platforms to regulate harmful content or face punitive fines.
Freeland says the digital tax is a “matter of fairness,” given that tech giants have been booking their profits in low-tax jurisdictions.
A move by OECD countries to implement a global minimum corporate tax rate of 15% has gained traction, but US opposition persuaded a majority to vote for a year-long delay last summer. Freeland said she preferred a multilateral approach but that Canada is prepared to move forward alone.
US trade representative Katherine Tai has warned that the Biden administration considers the tax discriminatory and will retaliate with tariffs.
A letter from Senate Finance Committee chair Ron Wyden (D-Ore.) and Ranking Member Mike Crapo (R-Idaho) in October said that any retaliatory steps would have bipartisan support.
Those threats seem to have registered with Freeland. In her fall economic statement, she removed the Jan. 1 deadline, while introducing legislation that would allow the federal government to implement the tax later. The budget may indicate whether Canada still plans to go it alone and risk Washington’s wrath, or wait for a new multilateral effort.
Gemini AI controversy highlights AI racial bias challenge
Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode, she questions whether big tech companies can be trusted to tackle racial bias in AI, especially in the wake of Google's Gemini software controversy. Importantly, should these companies be the ones designing and deciding what that representation looks like?
This was a week full of AI-related stories. Again, the one that stood out to me was Google's effort to correct for bias and discrimination in its generative AI model, and its utter failure to do so. We saw Gemini, the name of the model, coming up with synthetically generated images of very ethnically diverse Nazis. Of all political ideologies, this white supremacist movement, of course, historically had few, if any, people of color in its ranks. And that remains true, unfortunately, as the movement continues to exist, albeit in smaller form, today.
And so, lots of questions, embarrassing rollbacks by Google of its new model, and big questions, I think, about what we can expect in terms of corrections here. Because the problem of bias and discrimination has been well researched by people like Joy Buolamwini, whose new book “Unmasking AI” and earlier research featured in “Coded Bias” established how models from the largest and most popular companies are still deeply flawed, with harmful and illegal consequences.
So, it raises the question: How much grip do the engineers developing these models really have on what the outcomes can be, and how could this have gone so wrong while the product was put onto the market? There are even those who say it is impossible to be fully representative in a fair way. And it is a big question whether companies should be the ones designing and deciding what that representation looks like. Indeed, with so much power over these models and so many questions about how controllable they are, we should really ask ourselves when these products are ready to go to market and what the consequences should be when people are discriminated against. This is not just the revelation of an embarrassing flaw in a model; it could have real-world consequences, spreading misleading notions of history and treating people in ways that violate protections against discrimination.
So, even if there was a lot of outcry and sometimes even sort of entertainment about how poor this model performed, I think there are bigger lessons about AI governance to be learned from the examples we saw from Google's Gemini this past week.
Exclusive Poll: AI rules wanted, but can you trust the digital cops?
A new poll on AI raises one of the most critical questions of 2024: Do people want to regulate AI, and if so, who should do it?
For all the wars, elections, and crises going on, the most profound long-term transition underway right now is the light-speed development of AI and its voracious new capabilities. Nothing says a new technology has arrived more than when OpenAI CEO Sam Altman claimed he needs to fabricate more semiconductor chips so urgently that … he requires $7 trillion.
Seven. Trillion. Dollars. A moment of perspective, please.
$7 trillion is more than three times the entire GDP of Canada and more than twice the GDP of France or the UK. So … it may be pocket change to the Silicon Valley technocrat class, but it’s a pretty big number to the rest of us.
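The comparison is easy to check. Here is a minimal Python sketch; the GDP figures are rough 2023 estimates in trillions of US dollars, included only for illustration.

```python
# Rough check of the $7 trillion comparison. GDP figures are approximate
# 2023 estimates in trillions of US dollars.
ASK = 7.0  # trillion USD

gdp_trillions = {"Canada": 2.1, "France": 3.0, "UK": 3.3}

for country, gdp in gdp_trillions.items():
    print(f"${ASK:.0f}T is {ASK / gdp:.1f}x the GDP of {country}")
# $7T is 3.3x the GDP of Canada
# $7T is 2.3x the GDP of France
# $7T is 2.1x the GDP of UK
```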
Seven trillion dollars has a way of focusing the mind, even if the arrogance of floating that number at all is staggering and, as we covered this week, preposterous. Still, it does give you a real sense of what is happening here: You will either be the AI bulldozer or the AI road. Which is it? So how do people feel about those options?
Conflicted is the answer: GZERO got access to a new survey from our partners at Data Sciences, which asked people in Canada about Big Tech, the government, and the AI boom. Should AI be regulated or not? Will it lead to job losses or gains? What about privacy? The results jibe very closely with similar polls in the US.
In general, the poll found people appreciate the economic and job opportunities that AI and tech are creating … but issues of anxiety and trust break down along generational lines, with younger people more trusting of technology companies than older people. That’s to be expected. I may be bewildered by my mom’s discomfort when I try to explain to her how to dictate a voice message on her phone, but then my kids roll their eyes at my attempts to tell them about issues relating to TikTok or Insta (Insta!, IG, On the ‘Gram, whatevs …) Technology, like music, is by nature generational.
But all tech companies are not equal. Social media companies score much lower when it comes to trust. For example, most Canadians say they trust tech companies like Microsoft, Amazon, or Apple, but less than 25% say they trust TikTok, Meta, or Alibaba. Why?
First, it’s about power. Second, people believe there is a lack of transparency, accountability, and competition, so they want someone to do something about it. “A significant majority feel these companies are gaining excessive power (75% agree) and should face stronger government regulations (70% agree),” the DS survey says. “This call for government oversight is universal across the spectrum of AI usage.”
This echoes a Pew Research poll conducted in the US in November 2023, in which 67% of Americans said they fear the government will NOT go far enough in regulating AI tech like ChatGPT.
So, while there is some consensus regarding the need to regulate AI, there is a diminishing number of people who actually trust the government to regulate it. Another Pew survey last September found that trust in government is the lowest it has been in 70 years of polling. “Currently, fewer than two-in-ten Americans say they trust the government in Washington to do what is right ‘just about always’ (1%) or ‘most of the time’ (15%).”
Canada fares slightly better on this score, but still, if you don’t trust the digital cops, how do you keep the AI streets safe?
As we covered in our 2024 Top Risks, Ungoverned AI, there are multiple attempts to regulate AI right now all over the world, from the US, the UN, and the EU, but there are two major obstacles to any of this working: speed and smarts. AI technology is moving like a Formula One car, while regulation is moving like a tricycle. And since governments struggle to keep up with the actual innovative new software engineering, they need to recruit the tech industry itself to help write the regulations. The obvious risk here is regulatory capture, where industry-influenced policies become self-serving. Will new rules protect profits or the public good, or, in the best-case scenario, both? Or will any regulations, no matter who makes them, be so leaky that they are essentially meaningless?
All this is a massive downside risk, but on the upside, it’s also a massive opportunity. If governments can get this right – and help make this powerful new technology more beneficial than harmful, more equitable than elitist, more job-creating than job-killing – they might regain the thing they need most to function productively: public trust.
Governments sniff around Microsoft’s OpenAI deal
Microsoft, the PC giant, says its $13 billion hasn’t bought it functional control over the ChatGPT parent company because OpenAI is ultimately controlled by a nonprofit board. Instead of receiving equity in the company, Microsoft gets about half of OpenAI’s revenue until its investment is repaid.
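To see why that structure matters, here is a hypothetical payback calculation; the 50% share follows the article's "about half," and the flat annual revenue figure is purely an assumption for illustration, not reported data.

```python
# Hypothetical payback arithmetic for the revenue-share structure described
# above: Microsoft takes roughly half of OpenAI's revenue until its $13B
# investment is repaid. The revenue figure is an assumption, not reported data.
INVESTMENT_USD = 13_000_000_000
REVENUE_SHARE = 0.5  # "about half," per the article


def years_to_repay(annual_revenue_usd: float) -> float:
    """Years until the revenue share returns the investment, at flat revenue."""
    return INVESTMENT_USD / (REVENUE_SHARE * annual_revenue_usd)


# If OpenAI earned a hypothetical $2 billion a year:
print(f"{years_to_repay(2_000_000_000):.1f} years")  # 13.0 years
```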
But the power of the board has been in the spotlight in recent weeks. OpenAI’s nonprofit board fired Sam Altman, CEO of the for-profit arm of the business, but that decision was reversed after a pressure campaign from Altman, employees, and Microsoft. After days of public turmoil, Altman was reinstated, board members resigned, and Microsoft — which never had a seat on the board — gained a non-voting, observer seat. If anything, Microsoft gained more power out of the ordeal.
Some experts say the FTC has authority here, even though Microsoft is simply invested in OpenAI and didn’t buy it outright. “The Clayton Antitrust Act and the FTC Act – the laws that the FTC can enforce – aren’t limited to scrutinizing outright mergers,” says Mitch Stoltz, the antitrust and competition lead for the Electronic Frontier Foundation. “They also cover acquisitions of any amount of capital in a competitor if the effect is ‘substantially to lessen competition, or tend to create a monopoly.’”
Microsoft’s partial ownership could “soften” competition between the two firms, according to Diana Moss, vice president and director of competition policy at the Progressive Policy Institute. “That includes influencing decision-making through voting rights on either board or by sharing sensitive information between the two firms,” she says. “I think the bigger picture here is that AI is viewed as an important technology and, therefore, competitive markets for AI are vital.”
Britain’s antitrust regulator has power here too. “If US companies do business in another country, then the antitrust and competition laws of that country apply,” says Moss, formerly the president of the American Antitrust Institute.
Antitrust is having a moment as a tool for wrangling Big Tech. In recent years, regulators have strived to block and undo anti-competitive mergers – they succeeded in the case of Meta’s purchase of Giphy and failed in the case of Microsoft’s purchase of Activision. But they’ve also sued tech firms over alleged abuses of monopoly power: Federal prosecutors are litigating ongoing antitrust cases against Amazon, Google, and Meta after years of treating these companies with kid gloves.
These antitrust probes are still preliminary, but they represent the first real AI-related legal challenges at a time when there’s a global appetite to stop big tech companies from getting unfair advantages in emerging markets. Next steps, should regulators decide to move forward, would be formal investigations.
EU lawmakers make AI history
It took two years — long enough to earn a Master's degree — but Europe’s landmark AI Act is finally nearing completion. Debates raged last week, but EU lawmakers on Friday reached a provisional agreement on the scope of Europe’s effort to rein in artificial intelligence.
The new rules will follow a two-tiered approach. They will require transparency from general-purpose AI models and impose more stringent safety measures on riskier ones. Generative AI models like OpenAI’s GPT-4 would fall into the former camp and be required to disclose basic information about how the models are trained. But folks in Brussels have also seen "The Terminator," so models deemed a higher risk will have to submit to regular safety tests, disclose any risks, take stringent cybersecurity precautions, and report their energy consumption.
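To make the two-tiered logic concrete, here is a small illustrative sketch; the tier structure and obligation lists paraphrase this article's summary, not the statute's final text.

```python
# Illustrative sketch of the two-tiered obligations described above. The
# obligation lists paraphrase this article's summary, not the statutory text.
BASE_OBLIGATIONS = [
    "disclose basic information about how the model is trained",
]
HIGHER_RISK_OBLIGATIONS = BASE_OBLIGATIONS + [
    "submit to regular safety tests",
    "disclose any identified risks",
    "take stringent cybersecurity precautions",
    "report energy consumption",
]


def obligations(higher_risk: bool) -> list[str]:
    """Obligations for a general-purpose AI model under the two tiers."""
    return HIGHER_RISK_OBLIGATIONS if higher_risk else BASE_OBLIGATIONS


# Per the article, a GPT-4-style general-purpose model lands in the first tier:
print(obligations(higher_risk=False))
```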
Thierry Breton, the EU’s industrial affairs chief, said Europe had just set itself up as "a pioneer" and "global standard-setter," noting that the act will be a launchpad for EU startups and researchers and grant the bloc a “first-mover advantage” in shaping global AI policy.
Mia Hoffmann, a research fellow at Georgetown University’s Center for Security and Emerging Technology, believes the AI Act will “become something of a global regulatory benchmark” similar to GDPR.
Recent sticking points have been over the regulation of large language models, but EU member governments plan to finalize the language in the coming months. Hoffmann says that while she expects it to be adopted soon, “with the speed of innovation, the AI Act's formal adoption in the spring of 2024 can seem ages away.”
Canada averts a Google news block, US bills in the works
The Online News Act, which is modeled on Australian legislation, led Google to threaten to de-index news from its search engine before the company reached a deal with Ottawa. In protest of the law, Meta, the parent company of Facebook and Instagram, blocked links to Canadian news in the country on both platforms. It’s currently holding out on a deal as Heritage Minister Pascale St-Onge tries to get the company back to the bargaining table.
The Online News Act kerfuffle is a symptom of a bigger issue: the power of governments to regulate large tech firms – a fight that is playing out in Canada, the US, and around the world. California is considering a law similar to Australia’s and Canada’s; the bill passed the Assembly but is now on hold in the state Senate until 2024. In March, a bipartisan group of lawmakers, led by Sens. Mike Lee and Amy Klobuchar, introduced a similar bill in the US Senate, casting it as an antitrust, pro-competition measure. Meta has made similar threats to pull news in response to the US push to mirror the Australian and Canadian laws.
Tech giants are resisting attempts to extract funds from them to support news media, a tactic that is part of a broader strategy to oppose regulation. But the Australian and Canadian successes may encourage California, the US Congress, and other states to move forward with similar efforts. The coming months will be a test of whether governments are able – and willing – to regulate these powerful companies. All eyes should be on the progress, or not, of the California and Congressional bills along with Canada’s negotiations with Meta since these cases will help decide the future of tech regulation itself.