The UK is plotting to regulate AI
Policy officials in the Department for Science, Innovation and Technology have begun drafting legislation to rein in the most potent dangers from AI, sources told Bloomberg News this week. While Europe has set the standard by passing its comprehensive AI Act, Sunak has pledged to take a more hands-off approach to the technology. It’s unclear how far the forthcoming bill, which is still in its early stages, will go in setting up safeguards. Separately, the Department for Culture, Media and Sport has also proposed amending the country’s copyright law to allow companies to “opt out” of having their content scraped by generative AI firms.
Exclusive Poll: AI rules wanted, but can you trust the digital cops?
A new poll on AI raises one of the most critical questions of 2024: Do people want to regulate AI, and if so, who should do it?
For all the wars, elections, and crises going on, the most profound long-term transition happening right now is the light-speed development of AI and its voracious new capabilities. Nothing says a new technology has arrived more than when OpenAI CEO Sam Altman claimed he needs to fabricate more semiconductor chips so urgently that … he requires $7 trillion.
Seven. Trillion. Dollars. A moment of perspective, please.
$7 trillion is more than three times the entire GDP of Canada and more than twice the GDP of France or the UK. So … it may be pocket change to the Silicon Valley technocrat class, but it’s a pretty big number to the rest of us.
Seven trillion dollars has a way of focusing the mind, even if the arrogance of floating that number at all is staggering and, as we covered this week, preposterous. Still, it does give you a real sense of what is happening here: You will either be the AI bulldozer or the AI road. Which is it? So how do people feel about those options?
Conflicted is the answer: GZERO got access to a new survey from our partners at Data Sciences, which asked people in Canada about Big Tech, the government, and the AI boom. Should AI be regulated or not? Will it lead to job losses or gains? What about privacy? The results jibe very closely with similar polls in the US.
In general, the poll found people appreciate the economic and job opportunities that AI and tech are creating … but issues of anxiety and trust break down along generational lines, with younger people more trusting of technology companies than older people. That’s to be expected. I may be bewildered by my mom’s discomfort when I try to explain to her how to dictate a voice message on her phone, but then my kids roll their eyes at my attempts to tell them about issues relating to TikTok or Insta (Insta!, IG, On the ‘Gram, whatevs …) Technology, like music, is by nature generational.
But all tech companies are not equal. Social media companies score much lower when it comes to trust. For example, most Canadians say they trust tech companies like Microsoft, Amazon, or Apple, but less than 25% say they trust TikTok, Meta, or Alibaba. Why?
First, it’s about power. 75% of people agree that tech companies are “gaining excessive power,” according to the survey. Second, people believe there is a lack of transparency, accountability, and competition, so they want someone to do something about it. “A significant majority feel these companies are gaining excessive power (75% agree) and should face stronger government regulations (70% agree),” the DS survey says. “This call for government oversight is universal across the spectrum of AI usage.”
This echoes a Pew Research poll done in the US in November of 2023 in which 67% of Americans said they fear the government will NOT go far enough in regulating AI tech like ChatGPT.
So, while there is some consensus regarding the need to regulate AI, there is a diminishing number of people who actually trust the government to regulate it. Another Pew survey last September found that trust in government is the lowest it has been in 70 years of polling. “Currently, fewer than two-in-ten Americans say they trust the government in Washington to do what is right ‘just about always’ (1%) or ‘most of the time’ (15%).”
Canada fares slightly better on this score, but still, if you don’t trust the digital cops, how do you keep the AI streets safe?
As we covered in our 2024 Top Risks, Ungoverned AI, there are multiple attempts to regulate AI right now all over the world, from the US, the UN, and the EU, but there are two major obstacles to any of this working: speed and smarts. AI technology is moving like a Formula One car, while regulation is moving like a tricycle. And since governments struggle to keep up with the actual innovative new software engineering, they need to recruit the tech industry itself to help write the regulations. The obvious risk here is regulatory capture, where industry-influenced policies become self-serving. Will new rules protect profits or the public good, or, in the best-case scenario, both? Or will any regulations, no matter who makes them, be so leaky that they are essentially meaningless?
All this is a massive downside risk, but on the upside, it’s also a massive opportunity. If governments can get this right – and help make this powerful new technology more beneficial than harmful, more equitable than elitist, more job-creating than job-killing – they might regain the thing they need most to function productively: public trust.
Taylor Swift controversy sparks new porn bill
After nonconsensual deepfake porn of pop singer Taylor Swift bounced around the internet in recent weeks, US lawmakers have proposed a fix.
The Disrupt Explicit Forged Images and Non-Consensual Edits Act, introduced by Democratic Sen. Dick Durbin with Republican cosponsors, would give victims of this digital abuse the right to sue for damages from anyone who “knowingly produced or possessed the digital forgery with intent to disclose it.”
Swift has reportedly considered taking legal action in light of the new images. Microsoft, meanwhile, has taken steps in response to the incident to close loopholes in its software that allowed users to make such images. The bill has bipartisan support in the Senate, but with the legislative agenda crowded by bills on government funding, border security, and Ukraine aid, there’s no clear path to a swift passage.
Biden plays big brother for AI
President Joe Biden is preparing to issue new rules to compel technology companies to inform the government when they begin building powerful artificial intelligence models.
The rules are the result of a monthslong process that began with Biden’s executive order on AI in October. Under the rules, companies will have to disclose the computing power of their models (if they exceed a certain number of FLOPs, a unit for measuring compute), who owns the training data the models are fed, and how the developer is conducting safety testing.
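To make the FLOPs trigger concrete: Biden’s October executive order sets its reporting threshold at models trained with more than 10^26 operations, and a widely used rule of thumb estimates training compute as roughly 6 × parameters × tokens. The sketch below applies that heuristic; the function names and example figures are illustrative, not from the order itself.

```python
# Reporting threshold from the October 2023 executive order (Sec. 4.2):
# models trained using more than 10^26 integer or floating-point operations.
REPORTING_THRESHOLD_FLOPS = 1e26

def estimate_training_flops(num_parameters: float, num_tokens: float) -> float:
    """Rough training-compute estimate via the common 6*N*D heuristic
    (6 FLOPs per parameter per training token)."""
    return 6 * num_parameters * num_tokens

def exceeds_reporting_threshold(flops: float) -> bool:
    """Would a model of this training compute trip the reporting rule?"""
    return flops > REPORTING_THRESHOLD_FLOPS

# Illustrative: a GPT-3-scale run (175B parameters, 300B tokens)
flops = estimate_training_flops(175e9, 300e9)
print(f"{flops:.2e} FLOPs, report: {exceeds_reporting_threshold(flops)}")
```

By this estimate, a GPT-3-scale run comes in around 3 × 10^23 FLOPs, orders of magnitude below the trigger, which is why the rule is understood to target only the very largest frontier models.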
Biden is using his authority under the Defense Production Act, a sweeping set of powers for the president that, he believes, gives him the authority to rein in the most powerful AI models that could pose a threat to safety or national security if not monitored closely.
Will the Supreme Court throw government regulation overboard?
The Supreme Court heard arguments in a case that could strip government agencies of their power to regulate industries.
The case was brought by fishermen in New Jersey and Rhode Island who contested a federal regulation that attempted to stop overfishing by requiring commercial fishermen to pay roughly $700 per day for federal monitors of their vessels.
The regulation has since been suspended and its costs refunded, but two conservative legal groups have brought the case to the Supreme Court because it is perfect for attacking the Chevron doctrine.
What is that?: The doctrine, which takes its name from the 1984 Chevron v. Natural Resources Defense Council decision, requires courts to defer to agencies’ reasonable interpretations of laws. This is the legal basis of most government regulation. It is also one of the most cited cases in American law, but this court has proven it isn’t afraid to overturn precedent.
Taking sides?: Conservative justices showed skepticism about Chevron, arguing that it allows agencies to change tack on regulation with each new administration. But liberal justices worry that overturning it could mean that the courts and Congress – as opposed to experts and career professionals in federal agencies – will become the regulators of everything from AI to pharmaceuticals.
Why it matters: If Chevron is negated, it would give more power to the courts while ushering in a period of deregulation and a deluge of litigation from companies filing for their regulations to be overturned. Decades of regulation involving the environment, the stock market, on-the-job safety, health care, consumer safety, and guns are all expected to be affected if Chevron is thrown overboard.
Governments sniff around Microsoft’s OpenAI deal
The PC giant says that $13 billion hasn’t bought it functional control over the ChatGPT parent company because OpenAI is technically run as a nonprofit. Instead of receiving equity in the company, Microsoft gets about half of OpenAI’s revenue until its investment is repaid.
But the power of the board has been in the spotlight in recent weeks. OpenAI’s nonprofit board fired Sam Altman, CEO of the for-profit arm of the business, but that decision was reversed after a pressure campaign from Altman, employees, and Microsoft. After days of public turmoil, Altman was reinstated, board members resigned, and Microsoft — which never had a seat on the board — gained a non-voting, observer seat. If anything, Microsoft gained more power out of the ordeal.
Some experts say the FTC has authority here, even though Microsoft is simply invested in OpenAI and didn’t buy it outright. “The Clayton Antitrust Act and the FTC Act – the laws that the FTC can enforce – aren’t limited to scrutinizing outright mergers,” says Mitch Stoltz, the antitrust and competition lead for the Electronic Frontier Foundation. “They also cover acquisitions of any amount of capital in a competitor if the effect is ‘substantially to lessen competition, or tend to create a monopoly.’”
Microsoft’s partial ownership could “soften” competition between the two firms, according to Diana Moss, vice president and director of competition policy at the Progressive Policy Institute. “That includes influencing decision-making through voting rights on either board or by sharing sensitive information between the two firms,” she says. “I think the bigger picture here is that AI is viewed as an important technology and, therefore, competitive markets for AI are vital.”
Britain’s antitrust regulator has power here too. “If US companies do business in another country, then the antitrust and competition laws of that country apply,” says Moss, formerly the president of the American Antitrust Institute.
Antitrust is having a moment as a tool for wrangling Big Tech. In recent years, regulators have strived to block and undo anti-competitive mergers – they succeeded in the case of Meta’s purchase of Giphy and failed in the case of Microsoft’s purchase of Activision. But they’ve also sued tech firms over alleged abuses of monopoly power: Federal prosecutors are litigating ongoing antitrust cases against Amazon, Google, and Meta after years of treating these companies with kid gloves.
These antitrust probes are still preliminary, but they represent the first real AI-related legal challenges at a time when there’s a global appetite to stop big tech companies from getting unfair advantages in emerging markets. Next steps, should regulators decide to move forward, would be formal investigations.
Should India roll the dice on AI regulation?
The United Kingdom and Japan recently hosted influential AI summits, spurring global conversations about regulatory solutions. Now, India wants in on the act, and it is set to host a major conference next month aimed at boosting trust in and adoption of artificial intelligence. But as concerns over the safety of AI grow, New Delhi faces a choice between taking a leading role in the growing international consensus on AI regulation and striking out on its own to nurture innovation with light regulatory touches.
India’s government has flip-flopped on the issue. In April, it said it would not regulate AI at all, giving entrepreneurs the leeway they need to build up a world-leading innovation environment. But just two months later, the Ministry of Electronics and Information Technology said India would roll out broad rules after all through the Digital India Act, a major overhaul of decades-old laws governing the tech sector that is still being drafted.
You can see the temptation for India to give the market free rein: AI is expected to add nearly a trillion dollars to India’s GDP by 2035. Its $23 billion market for the semiconductor chips that power AI is expected to nearly quadruple by 2028. The country also has a thriving tech startup culture – 80,000 firms bloomed between 2012 and 2020 – and world-class engineering schools, including the famously competitive and rigorous Indian Institute of Technology system. Major domestic players like Tata and Reliance have attracted investment and partnerships with NVIDIA, the world’s foremost semiconductor designer, eager to build up new markets to replace Chinese business lost to US export controls.
So why shouldn’t India press its advantage? Well, it’s not as if New Delhi is immune to the disruptions and dangers AI potentially poses. The same concerns about malicious actors using the technology to spread disinformation or conduct cyberattacks apply to India, and being the odd country out when even China is joining efforts to set global rules of the road may not be the best look. We’ll get a better sense of India’s preferred policy direction when it hosts the annual summit of the OECD’s Global Partnership on Artificial Intelligence on Dec. 12-14.

Did the US steal the UK’s AI thunder?
By many accounts, Sunak’s summit was a success. He brought together an impressive group of world leaders — US Vice President Kamala Harris, UN Secretary-General António Guterres, European Commission President Ursula von der Leyen, and even China’s Vice Minister of Science and Technology Wu Zhaohui. Industry leaders such as Tesla chief Elon Musk, OpenAI CEO Sam Altman, and Microsoft President Brad Smith also attended.
The summit’s big achievement? The Bletchley Declaration is a 28-country commitment to develop and deploy AI in a way that’s “human-centric, trustworthy, and responsible.” In other words, it’s a promise to use the technology for good and not evil. Experts say Sunak’s ability to get China on board was particularly laudable, but the agreement itself is more of a statement of intent than anything with teeth.
Sunak may have earned plaudits from his star-studded summit, but one of the key no-shows stole some of his thunder from abroad. On Oct. 30, President Joe Biden signed a sweeping executive order on AI – or as sweeping as an executive order can be. Biden cannot unilaterally make new law — that’s the job of Congress — but he can direct many of the government’s departments and agencies to act under existing statutes.
What’s in the US plan? Biden’s order is filled with requests for new studies, reports, and recommendations. It involves six departments, including Justice and Homeland Security, charged with tackling AI-related issues related to civil rights and critical infrastructure, respectively. It also impacts agencies like the National Institute of Standards and Technology, which it tasks with developing watermarking standards for generative AI.
Invoking the Defense Production Act, Biden ordered AI companies working on advanced models to notify the federal government and clue it in to their ongoing safety testing. Many top AI companies already agreed to this earlier this summer, but the executive order codified these previously voluntary transparency requirements.
Dev Saxena, director of Eurasia Group’s geo‑technology practice, called the order “extremely comprehensive” and “ambitious,” noting that it could influence regulators around the world. “Given US leadership in this emerging technology, and the first-mover role it could play in global governance, principles, and tactics used in the executive order could spill over globally,” Saxena said.
Adam Conner, vice president for technology policy at the Center for American Progress, a liberal think tank, called the order an “important first step” and was pleased it included “real accountability for federal government use of AI.” However, he lamented that it “stops short of prohibiting … really harmful uses of AI in things like federal law enforcement” and that it failed to go beyond setting minimum safety standards.
But some think it went too far. Brent Skorup, a senior research fellow at George Mason University’s Mercatus Center, a libertarian think tank, thinks the order contains some “very troubling assertions of government oversight” concerning AI. “Many modest-sized and open-source companies are going to have government scrutiny they likely never anticipated,” he said. However, like Conner, Skorup was heartened that it included “a prominent call to agencies to protect citizens’ privacy and civil liberties” when the government uses AI, which he said is “an issue that has gotten very little focus to date.”
While Sunak’s summit grabbed headlines last week, Biden’s order gave people something more to pore over. That could be a problem for Sunak, who is fighting for his political career and was counting on the AI summit to help position him as a global leader in AI.
“The prime minister’s conservative government is staring down some truly dire polling numbers and an election that has to be called by January 2025,” said Conner. “Sunak needed this AI summit to project success at home to help his political fortunes.”
“Downing Street gets due credit for pulling off a great event,” Saxena said, “but separately, the White House has laid out the most detailed set of policy prescriptions on AI, at the executive level, in US history.”
Art: Courtesy of Midjourney