A general view of the U.S. Federal Trade Commission (FTC) building, in Washington, D.C., on Wednesday, October 20, 2021
Antitrust is coming for AI
The US government's two antitrust regulators struck a deal to divvy up major investigations into anti-competitive behavior in the AI industry. The Justice Department will look into Nvidia’s dominance over the chip market, while the Federal Trade Commission will investigate OpenAI and its lead investor, Microsoft.
In December, the FTC opened a preliminary inquiry into Microsoft's $13 billion stake in OpenAI, which makes ChatGPT. It’s a non-traditional deal, in which Microsoft receives half of OpenAI’s revenue until the investment is repaid, rather than traditional equity. But Microsoft also flexed its muscles after the sudden ouster of OpenAI CEO Sam Altman last year, offering to hire him and any defecting OpenAI employees, effectively pressuring the company to rehire him — which it did soon after. The UK’s Competition and Markets Authority also began probing the relationship between the two firms in December.
Meanwhile, Nvidia has become the undisputed leader of the AI chip industry, with its powerful graphics processors powering the training and operation of generative AI models. The company recently disclosed in a filing with the US Securities and Exchange Commission that its pole position and market dominance have attracted regulatory scrutiny from the United Kingdom, though it didn’t specify the nature of the inquiry.
Noah Daponte-Smith, a United States analyst for Eurasia Group, sees this announcement “largely as a messaging exercise intended to show that DOJ [and] FTC will be just as dogged on antitrust issues in the AI space as in the rest of the Big Tech arena.” He sees the decision as more of a continuation of Biden’s aggressive antitrust regime than a policy position on the regulation of AI.
“My sense is that AI regulation will have to occur more through Congress and through executive actions not focused on competition,” he added.
British Prime Minister Rishi Sunak speaks during a news conference at the AI Safety Summit in Milton Keynes, near London, last November.
The UK is plotting to regulate AI
Policy officials in the Department for Science, Innovation and Technology have begun drafting legislation to rein in the most potent dangers from AI, sources told Bloomberg News this week. While Europe has set the standard by passing its comprehensive AI Act, Sunak has pledged to take a more hands-off approach to the technology. It’s unclear how far the forthcoming bill, which is still in its early stages, will go in setting up safeguards. Separately, the Department for Culture, Media and Sport has also proposed amending the country’s copyright law to allow companies to “opt out” of having their content scraped by generative AI firms.
Exclusive Poll: AI rules wanted, but can you trust the digital cops?
A new poll on AI raises one of the most critical questions of 2024: Do people want to regulate AI, and if so, who should do it?
For all the wars, elections, and crises going on, the most profound long-term transition happening right now is the light-speed development of AI and its voracious new capabilities. Nothing says a new technology has arrived more than when OpenAI CEO Sam Altman claimed he needs to fabricate more semiconductor chips so urgently that … he requires $7 trillion.
Seven. Trillion. Dollars. A moment of perspective, please.
$7 trillion is more than three times the entire GDP of Canada and more than twice the GDP of France or the UK. So … it may be pocket change to the Silicon Valley technocrat class, but it’s a pretty big number to the rest of us.
Seven trillion dollars has a way of focusing the mind, even if the arrogance of even floating that number is staggering and, as we covered this week, preposterous. Still, it does give you a real sense of what is happening here: You will either be the AI bulldozer or the AI road. Which is it? So how do people feel about those options?
Conflicted is the answer: GZERO got access to a new survey from our partners at Data Sciences, which asked people in Canada about Big Tech, the government, and the AI boom. Should AI be regulated or not? Will it lead to job losses or gains? What about privacy? The results jibe very closely with similar polls in the US.
In general, the poll found people appreciate the economic and job opportunities that AI and tech are creating … but issues of anxiety and trust break down along generational lines, with younger people more trusting of technology companies than older people. That’s to be expected. I may be bewildered by my mom’s discomfort when I try to explain to her how to dictate a voice message on her phone, but then my kids roll their eyes at my attempts to tell them about issues relating to TikTok or Insta (Insta!, IG, On the ‘Gram, whatevs …). Technology, like music, is by nature generational.
But all tech companies are not equal. Social media companies score much lower when it comes to trust. For example, most Canadians say they trust tech companies like Microsoft, Amazon, or Apple, but less than 25% say they trust TikTok, Meta, or Alibaba. Why?
First, it’s about power. 75% of people agree that tech companies are “gaining excessive power,” according to the survey. Second, people believe there is a lack of transparency, accountability, and competition, so they want someone to do something about it. “A significant majority feel these companies are gaining excessive power (75% agree) and should face stronger government regulations (70% agree),” the DS survey says. “This call for government oversight is universal across the spectrum of AI usage.”
This echoes a Pew Research poll done in the US in November 2023 in which 67% of Americans said they fear the government will NOT go far enough in regulating AI tech like ChatGPT.
So, while there is some consensus regarding the need to regulate AI, there is a diminishing number of people who actually trust the government to regulate it. Another Pew survey last September found that trust in government is the lowest it has been in 70 years of polling. “Currently, fewer than two-in-ten Americans say they trust the government in Washington to do what is right ‘just about always’ (1%) or ‘most of the time’ (15%).”
Canada fares slightly better on this score, but still, if you don’t trust the digital cops, how do you keep the AI streets safe?
As we covered in our 2024 Top Risks, Ungoverned AI, there are multiple attempts to regulate AI right now all over the world, from the US, the UN, and the EU, but there are two major obstacles to any of this working: speed and smarts. AI technology is moving like a Formula One car, while regulation is moving like a tricycle. And since governments struggle to keep up with the actual innovative new software engineering, they need to recruit the tech industry itself to help write the regulations. The obvious risk here is regulatory capture, where industry-influenced policies become self-serving. Will new rules protect profits or the public good, or, in the best-case scenario, both? Or will any regulations, no matter who makes them, be so leaky that they are essentially meaningless?
All this is a massive downside risk, but on the upside, it’s also a massive opportunity. If governments can get this right – and help make this powerful new technology more beneficial than harmful, more equitable than elitist, more job-creating than job-killing – they might regain the thing they need most to function productively: public trust.
FILE PHOTO: Taylor Swift attends a premiere for Taylor Swift: The Eras Tour in Los Angeles, California, U.S., October 11, 2023.
Taylor Swift controversy sparks new porn bill
After nonconsensual deepfake porn of pop singer Taylor Swift bounced around the internet in recent weeks, US lawmakers have proposed a fix.
The Disrupt Explicit Forged Images and Non-Consensual Edits Act, introduced by Democratic Sen. Dick Durbin with Republican cosponsors, would give victims of this digital abuse the right to sue for damages from anyone who “knowingly produced or possessed the digital forgery with intent to disclose it.”
Swift has reportedly considered taking legal action in light of the new images. Microsoft, meanwhile, has taken steps in response to the incident to close loopholes in its software that allowed users to make such images. The bill has bipartisan support in the Senate, but with a legislative agenda crowded with bills on government funding, border security, and Ukraine aid, there’s no clear path to a swift passage.
FILE PHOTO: U.S. President Joe Biden walks across the stage to sign an Executive Order about Artificial Intelligence in the East Room at the White House in Washington, U.S., October 30, 2023.
Biden plays big brother for AI
President Joe Biden is preparing to issue new rules to compel technology companies to inform the government when they begin building powerful artificial intelligence models.
The rules are the result of a monthslong process that began with Biden’s executive order on AI in October. Under the rules, companies will have to disclose the computing power of their models (if they exceed a certain number of FLOPs, a unit for measuring compute), who owns the training data the models are being fed, and how the developer is conducting safety testing.
Biden is using his authority under the Defense Production Act, a sweeping set of powers for the president that, he believes, gives him the authority to rein in the most powerful AI models that could pose a threat to safety or national security if not monitored closely.
The U.S. Supreme Court building in Washington, D.C. is seen from snow-covered U.S. Capitol grounds on January 17, 2024.
Will the Supreme Court throw government regulation overboard?
The Supreme Court heard arguments in a case that could strip government agencies of their power to regulate industries.
The case was brought by fishermen in New Jersey and Rhode Island who contested a federal regulation that attempted to stop overfishing by requiring commercial fishermen to pay roughly $700 per day for federal monitors of their vessels.
The regulation has since been suspended and its costs refunded, but two conservative legal groups have brought the case to the Supreme Court because it is perfect for attacking the Chevron doctrine.
What is that?: The doctrine, which takes its name from the 1984 Chevron v. Natural Resources Defense Council decision, requires courts to defer to agencies’ reasonable interpretations of laws. This is the legal basis of most government regulation. It is also one of the most cited cases in American law, but this court has proven it isn’t afraid to overturn precedent.
Taking sides?: Conservative justices showed skepticism about Chevron, arguing that it allows agencies to change tack on regulation with each new administration. But liberal justices worry that overturning it could mean that the courts and Congress – as opposed to experts and career professionals in federal agencies – will become the regulators of everything from AI to pharmaceuticals.
Why it matters: If Chevron is negated, it would give more power to the courts while ushering in a period of deregulation and a deluge of litigation from companies filing for their regulations to be overturned. Decades of regulation involving the environment, the stock market, on-the-job safety, health care, consumer safety, and guns are all expected to be affected if Chevron is thrown overboard.
A Microsoft sign at the tech giant's offices in Issy-les-Moulineaux, near Paris.
Governments sniff around Microsoft’s OpenAI deal
The PC giant says that $13 billion hasn’t bought it functional control over the ChatGPT parent company because OpenAI is technically run as a nonprofit. Instead of receiving equity in the company, Microsoft gets about half of OpenAI’s revenue until its investment is repaid.
But the power of the board has been in the spotlight in recent weeks. OpenAI’s nonprofit board fired Sam Altman, CEO of the for-profit arm of the business, but that decision was reversed after a pressure campaign from Altman, employees, and Microsoft. After days of public turmoil, Altman was reinstated, board members resigned, and Microsoft — which never had a seat on the board — gained a non-voting, observer seat. If anything, Microsoft gained more power out of the ordeal.
Some experts say the FTC has authority here, even though Microsoft is simply invested in OpenAI and didn’t buy it outright. “The Clayton Antitrust Act and the FTC Act – the laws that the FTC can enforce – aren’t limited to scrutinizing outright mergers,” says Mitch Stoltz, the antitrust and competition lead for the Electronic Frontier Foundation. “They also cover acquisitions of any amount of capital in a competitor if the effect is ‘substantially to lessen competition, or tend to create a monopoly.’”
Microsoft’s partial ownership could “soften” competition between the two firms, according to Diana Moss, vice president and director of competition policy at the Progressive Policy Institute. “That includes influencing decision-making through voting rights on either board or by sharing sensitive information between the two firms,” she says. “I think the bigger picture here is that AI is viewed as an important technology and, therefore, competitive markets for AI are vital.”
Britain’s antitrust regulator has power here too. “If US companies do business in another country, then the antitrust and competition laws of that country apply,” says Moss, formerly the president of the American Antitrust Institute.
Antitrust is having a moment as a tool for wrangling Big Tech. In recent years, regulators have strived to block and undo anti-competitive mergers – they succeeded in the case of Meta’s purchase of Giphy and failed in the case of Microsoft’s purchase of Activision. But they’ve also sued tech firms over alleged abuses of monopoly power: Federal prosecutors are litigating ongoing antitrust cases against Amazon, Google, and Meta after years of treating these companies with kid gloves.
These antitrust probes are still preliminary, but they represent the first real AI-related legal challenges at a time when there’s a global appetite to stop big tech companies from getting unfair advantages in emerging markets. Next steps, should regulators decide to move forward, would be formal investigations.
India's Prime Minister Narendra Modi arrives at the Bharat Mandapam to inaugurate the Indian Mobile Congress 2023, in New Delhi, India on Oct. 27, 2023.
Should India roll the dice on AI regulation?
The United Kingdom and Japan recently hosted influential AI summits, spurring global conversations about regulatory solutions. Now, India wants in on the act, and it is set to host a major conference next month aimed at boosting trust in and adoption of artificial intelligence. But as concerns over the safety of AI grow, New Delhi faces a choice between taking a leading role in the growing international consensus on AI regulation and striking out on its own to nurture innovation with light regulatory touches.
India’s government has flip-flopped on the issue. In April, it said it would not regulate AI at all, giving entrepreneurs the leeway they need to build up a world-leading innovation environment. But just two months later, the Ministry of Electronics and Information Technology said India would roll out broad rules after all through the Digital India Act, a major overhaul of decades-old laws governing the tech sector that is still being drafted.
You can see the temptation for India to give the market free rein: AI is expected to add nearly a trillion dollars to India’s GDP by 2035. Its $23 billion market for the semiconductor chips that power AI is expected to nearly quadruple by 2028. The country also has a thriving tech startup culture – 80,000 firms bloomed between 2012 and 2020 – and world-class engineering schools, including the famously competitive and rigorous Indian Institute of Technology system. Major domestic players like Tata and Reliance have attracted investment and partnerships with Nvidia, the world’s foremost semiconductor designer, eager to build up new markets to replace Chinese business lost to US export controls.
So why shouldn’t India press its advantage? Well, it’s not as if New Delhi is immune to the disruptions and dangers AI potentially poses. The same concerns about malicious actors using the technology to spread disinformation or conduct cyberattacks apply to India, and being the odd country out when even China is joining efforts to set global rules of the road may not be the best look. We’ll get a better sense of India’s preferred policy direction when it hosts the annual summit of the OECD’s Global Partnership on Artificial Intelligence on Dec. 12-14.