Canada’s threatened tax on tech giants risks trade war
Canadian Finance Minister Chrystia Freeland plans to unveil the federal budget on April 16, a release that will be keenly watched north and south of the border. Big Tech companies, in particular, will be looking for clues about when Canada will implement its long-promised digital services tax.
Justin Trudeau’s cash-strapped Liberal government hopes to raise up to $2.5 billion over five years by imposing a 3% tax on companies like Alphabet, Meta, Uber, Amazon, and Airbnb. First promised in the 2021 budget, the tax was slated by the Trudeau government to take effect on Jan. 1, 2024, retroactive to 2022.
Aside from raising much-needed funds, targeting tech giants has the additional benefit for Trudeau of being popular politically. His government has already whacked Alphabet and Meta with its Online News Act, forcing them to share revenues with Canadian news publishers (Meta responded by removing news links from Facebook in Canada), and its Online Harms bill, which compels social media platforms to regulate harmful content or face punitive fines.
Freeland says the digital tax is a “matter of fairness,” given that tech giants have been booking their profits in low-tax jurisdictions.
A move by OECD countries to implement a global minimum corporate tax rate of 15% has gained traction, but US opposition persuaded a majority to vote for a year-long delay last summer. Freeland said she preferred a multilateral approach but that Canada is prepared to move forward alone.
US trade representative Katherine Tai has warned that the Biden administration considers the tax discriminatory and will retaliate with tariffs.
A letter from Senate Finance Committee chair Ron Wyden (D-Ore.) and Ranking Member Mike Crapo (R-Idaho) in October said that any retaliatory steps would have bipartisan support.
Those threats seem to have registered with Freeland. In her fall economic statement, she removed the Jan. 1 deadline, while introducing legislation that would allow the federal government to implement the tax later. The budget may indicate whether Canada still plans to go it alone and risk Washington’s wrath, or wait for a new multilateral effort.
Gemini AI controversy highlights AI racial bias challenge
Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode, she questions whether big tech companies can be trusted to tackle racial bias in AI, especially in the wake of Google's Gemini software controversy. Importantly, should these companies be the ones designing and deciding what that representation looks like?
This was a week full of AI-related stories. Again, the one that stood out to me was Google's effort to correct for bias and discrimination in its generative AI model, and utterly failing. We saw Gemini, the name of the model, coming up with synthetically generated images of very ethnically diverse Nazis. And of all political ideologies, this white supremacist group, of course, historically had few, if any, people of color in it. And that's the same, unfortunately, as the movement continues to exist, albeit in smaller form today.
And so, lots of questions, embarrassing rollbacks by Google of its new model, and big questions, I think, about what we can expect in terms of corrections here. Because the problem of bias and discrimination has been well researched by people like Joy Buolamwini, whose new book “Unmasking AI” is out and whose earlier research, featured in “Coded Bias,” well established how models from the largest and most popular companies are still so flawed, with harmful and illegal consequences.
So, it begs the question: How much grip do the engineers developing these models really have on what the outcomes can be, and how could this have gone so wrong while this product was put onto the market? There are even those who say it is impossible to be fully representative in a fair way. And it is a big question whether companies should be the ones designing and deciding what that representation looks like. And indeed, with so much power over these models and so many questions about how controllable they are, we should really ask ourselves, you know, when are these products ready to go to market, and what should be the consequences when people are discriminated against? Not just because there is a revelation of an embarrassing flaw in the model, but, you know, this could have real-world consequences: misleading notions of history, treatment of people that violates protections against discrimination.
So, even if there was a lot of outcry and sometimes even sort of entertainment about how poor this model performed, I think there are bigger lessons about AI governance to be learned from the examples we saw from Google's Gemini this past week.
Exclusive Poll: AI rules wanted, but can you trust the digital cops?
A new poll on AI raises one of the most critical questions of 2024: Do people want to regulate AI, and if so, who should do it?
For all the wars, elections, and crises going on, the most profound long-term transition underway right now is the light-speed development of AI and its voracious new capabilities. Nothing says a new technology has arrived more than when OpenAI CEO Sam Altman claimed he needs to fabricate more semiconductor chips so urgently that … he requires $7 trillion.
Seven. Trillion. Dollars. A moment of perspective, please.
$7 trillion is more than three times the entire GDP of Canada and more than twice the GDP of France or the UK. So … it may be pocket change to the Silicon Valley technocrat class, but it’s a pretty big number to the rest of us.
Seven trillion dollars has a way of focusing the mind, even if the arrogance of floating that number is staggering and, as we covered this week, preposterous. Still, it does give you a real sense of what is happening here: You will either be the AI bulldozer or the AI road. Which is it? So how do people feel about those options?
Conflicted is the answer: GZERO got access to a new survey from our partners at Data Sciences, which asked people in Canada about Big Tech, the government, and the AI boom. Should AI be regulated or not? Will it lead to job losses or gains? What about privacy? The results jibe very closely with similar polls in the US.
In general, the poll found people appreciate the economic and job opportunities that AI and tech are creating … but issues of anxiety and trust break down along generational lines, with younger people more trusting of technology companies than older people. That’s to be expected. I may be bewildered by my mom’s discomfort when I try to explain to her how to dictate a voice message on her phone, but then my kids roll their eyes at my attempts to tell them about issues relating to TikTok or Insta (Insta!, IG, On the ‘Gram, whatevs …) Technology, like music, is by nature generational.
But not all tech companies are equal. Social media companies score much lower when it comes to trust. For example, most Canadians say they trust tech companies like Microsoft, Amazon, or Apple, but fewer than 25% say they trust TikTok, Meta, or Alibaba. Why?
First, it’s about power. 75% of people agree that tech companies are “gaining excessive power,” according to the survey. Second, people believe there is a lack of transparency, accountability, and competition, so they want someone to do something about it. “A significant majority feel these companies are gaining excessive power (75% agree) and should face stronger government regulations (70% agree),” the DS survey says. “This call for government oversight is universal across the spectrum of AI usage.”
This echoes a Pew Research poll done in the US in November 2023, in which 67% of Americans said they fear the government will NOT go far enough in regulating AI tech like ChatGPT.
So, while there is some consensus regarding the need to regulate AI, there is a diminishing number of people who actually trust the government to regulate it. Another Pew survey last September found that trust in government is the lowest it has been in 70 years of polling. “Currently, fewer than two-in-ten Americans say they trust the government in Washington to do what is right ‘just about always’ (1%) or ‘most of the time’ (15%).”
Canada fares slightly better on this score, but still, if you don’t trust the digital cops, how do you keep the AI streets safe?
As we covered in our 2024 Top Risks, Ungoverned AI, there are multiple attempts to regulate AI right now all over the world, from the US, the UN, and the EU, but there are two major obstacles to any of this working: speed and smarts. AI technology is moving like a Formula One car, while regulation is moving like a tricycle. And since governments struggle to keep up with genuinely innovative new software engineering, they need to recruit the tech industry itself to help write the regulations. The obvious risk here is regulatory capture, where industry-influenced policies become self-serving. Will new rules protect profits or the public good, or, in the best-case scenario, both? Or will any regulations, no matter who makes them, be so leaky that they are essentially meaningless?
All this is a massive downside risk, but on the upside, it’s also a massive opportunity. If governments can get this right – and help make this powerful new technology more beneficial than harmful, more equitable than elitist, more job-creating than job-killing – they might regain the thing they need most to function productively: public trust.
Governments sniff around Microsoft’s OpenAI deal
The PC giant says that $13 billion hasn’t bought it functional control over the ChatGPT parent company because OpenAI is technically run as a nonprofit. Instead of receiving equity in the company, Microsoft gets about half of OpenAI’s revenue until its investment is repaid.
But the power of the board has been in the spotlight in recent weeks. OpenAI’s nonprofit board fired Sam Altman, CEO of the for-profit arm of the business, but that decision was reversed after a pressure campaign from Altman, employees, and Microsoft. After days of public turmoil, Altman was reinstated, board members resigned, and Microsoft — which never had a seat on the board — gained a non-voting, observer seat. If anything, Microsoft gained more power out of the ordeal.
Some experts say the FTC has authority here, even though Microsoft is simply invested in OpenAI and didn’t buy it outright. “The Clayton Antitrust Act and the FTC Act – the laws that the FTC can enforce – aren’t limited to scrutinizing outright mergers,” says Mitch Stoltz, the antitrust and competition lead for the Electronic Frontier Foundation. “They also cover acquisitions of any amount of capital in a competitor if the effect is ‘substantially to lessen competition, or tend to create a monopoly.’”
Microsoft’s partial ownership could “soften” competition between the two firms, according to Diana Moss, vice president and director of competition policy at the Progressive Policy Institute. “That includes influencing decision-making through voting rights on either board or by sharing sensitive information between the two firms,” she says. “I think the bigger picture here is that AI is viewed as an important technology and, therefore, competitive markets for AI are vital.”
Britain’s antitrust regulator has power here too. “If US companies do business in another country, then the antitrust and competition laws of that country apply,” says Moss, formerly the president of the American Antitrust Institute.
Antitrust is having a moment as a tool for wrangling Big Tech. In recent years, regulators have strived to block and undo anti-competitive mergers – they succeeded in the case of Meta’s purchase of Giphy and failed in the case of Microsoft’s purchase of Activision. But they’ve also sued tech firms over alleged abuses of monopoly power: Federal prosecutors are litigating ongoing antitrust cases against Amazon, Google, and Meta after years of treating these companies with kid gloves.
These antitrust probes are still preliminary, but they represent the first real AI-related legal challenges at a time when there’s a global appetite to stop big tech companies from getting unfair advantages in emerging markets. Next steps, should regulators decide to move forward, would be formal investigations.
EU lawmakers make AI history
It took two years — long enough to earn a Master's degree — but Europe’s landmark AI Act is finally nearing completion. Debates raged last week, but EU lawmakers on Friday reached a provisional agreement on the scope of Europe’s effort to rein in artificial intelligence.
The new rules will follow a two-tiered approach. They will require transparency from general-purpose AI models and impose more stringent safety measures on riskier ones. Generative AI models like OpenAI’s GPT-4 would fall into the former camp and be required to disclose basic information about how the models are trained. But folks in Brussels have also seen "The Terminator," so models deemed a higher risk will have to submit to regular safety tests, disclose any risks, take stringent cybersecurity precautions, and report their energy consumption.
Thierry Breton, the EU’s industrial affairs chief, said Europe has just set itself up as "a pioneer" and "global standard-setter," noting that the act will be a launchpad for EU startups and researchers and will grant the bloc a “first-mover advantage” in shaping global AI policy.
Mia Hoffmann, a research fellow at Georgetown University’s Center for Security and Emerging Technology, believes the AI Act will “become something of a global regulatory benchmark” similar to GDPR.
Recent sticking points have been over the regulation of large language models, but EU member governments plan to finalize the language in the coming months. Hoffmann says that while she expects it to be adopted soon, “with the speed of innovation, the AI Act's formal adoption in the spring of 2024 can seem ages away.”
Canada averts a Google news block, US bills in the works
Canada’s Online News Act, which is modeled on Australian legislation, led Google to threaten to de-index news from its search engine. In protest of the law, Meta, the parent company of Facebook and Instagram, blocked links to Canadian news in the country on both platforms. It is currently holding out on a deal as Heritage Minister Pascale St-Onge tries to get the company back to the bargaining table.
The Online News Act kerfuffle is a symptom of a bigger issue: the power of governments to regulate large tech firms – a fight that is playing out in Canada, the US, and around the world. California is considering a law similar to Australia's and Canada’s. The bill passed the Assembly but is now on hold in the state senate until 2024. In March, a bipartisan group of lawmakers, led by Sens. Mike Lee and Amy Klobuchar, introduced a similar bill in the Senate, casting it as an antitrust, pro-competition measure. Meta has made similar threats to pull news in response to the US push to mirror the Australian and Canadian laws.
Tech giants are resisting attempts to extract funds from them to support news media, a tactic that is part of a broader strategy to oppose regulation. But the Australian and Canadian successes may encourage California, the US Congress, and other states to move forward with similar efforts. The coming months will be a test of whether governments are able – and willing – to regulate these powerful companies. All eyes should be on the progress, or not, of the California and Congressional bills along with Canada’s negotiations with Meta since these cases will help decide the future of tech regulation itself.
Google throws Trudeau a lifeline
Canada’s Online News Act, introduced last summer to force revenue-sharing on tech giants, backfired badly when Meta decided to block Canadian news outlets from its platforms rather than pay up.
Bill C-18 and the tech giants’ response to it spelled trouble for a media industry already in crisis – traffic and revenue plummeted. It was bad news for PM Justin Trudeau, whose revenue-sharing law was intended to improve things for media outlets, not make things worse, and it opened him to criticism that he was incompetently wrecking an industry he was trying to help.
But this week brought a turn in fortune. Canada reached a deal with Google that will see the tech giant compensate Canadian news outlets for linking to their stories. The deal, which requires Alphabet to pay between $100 million and $172 million a year, is a huge relief to Trudeau after months of withering criticism.
Facebook and Instagram are still blocking news links from Canadian publishers, and there is no indication that Meta wants a deal under any circumstances.
Google and Meta were undoubtedly nervous about an open-ended requirement to pay what could set a pricey precedent for them in other jurisdictions. However, according to the CBC, Ottawa will introduce regulations allowing Google to negotiate with a group representing all media organizations, thereby limiting its arbitration risk. Similar laws are being considered in Washington state and California.
The money in question – well below $200 million – is not huge considering that Google took in about half of the $14 billion in digital advertising revenue in Canada in 2022.
Reaction in Canada is mixed. The deal comes as a relief, but it will not save the industry. In both Canada and the United States, the rise of digital advertising has bled revenue from traditional media, leading to job losses and growing news deserts, or areas without a local newspaper. The Trudeau government responded with this bill and with government subsidies: The recent fall economic statement included $129 million, for example, for news organizations through a tax credit for up to $29,750 per journalist.
But such government backing may come at a price. Increasingly, Conservatives are warning that news organizations will be motivated to support the Liberal government in order to keep the money flowing.
Ask ChatGPT: What will Sam Altman achieve for Microsoft?
On Friday, the tech world was abuzz with the news that Sam Altman, the 38-year-old co-founder of OpenAI, had been pink-slipped by the firm’s board of directors after a hastily called Google Meet. OpenAI’s other co-founder, Greg Brockman, also decided to leave the company after the board demoted him in the same meeting. By late Sunday, they both had new jobs.
According to insiders, Altman had been moving “too fast” in the development of new AI technology. Board members were reportedly concerned about OpenAI’s recent developer conference and the announcement of a means for anyone to create their own versions of ChatGPT. Ilya Sutskever, a key researcher and board member who was also one of the co-founders of OpenAI, was reportedly concerned about the dangers posed by OpenAI’s technology and believed Altman was downplaying that risk. The board was also apparently uncomfortable with Altman’s attempt to raise $100 billion from investors in the Middle East and SoftBank founder Masayoshi Son to establish a new microchip development company.
Altman’s firing was only possible because of the unique corporate structure of OpenAI. Despite being a co-founder, Altman had no equity in the company. The company’s board controls OpenAI’s 501(c)(3) charity, OpenAI Inc., which was established via a charter to “ensure that safe artificial general intelligence is developed and benefits all of humanity.” That charter takes “precedence over any obligation to generate a profit.”
Altman did not take it lying down. On Saturday night, he tweeted “i love the openai team so much.” Hundreds of employees, including interim CEO Mira Murati and COO Brad Lightcap, liked or reposted the tweet within the hour. Over the weekend, investors also rallied behind Altman, including Thrive Capital, Tiger Global, Khosla Ventures, and Sequoia Capital. A plan to sell as much as $1 billion in employee stock now hangs in the balance; Thrive Capital was set to lead that tender offer, which would value OpenAI at $86 billion.
Despite the pressure, OpenAI’s board chose not to reinstate Altman – it refused to meet his demand for a new board and governance structure – and announced Sunday evening that Emmett Shear, former chief executive of Twitch, will replace him as CEO. Shear faces a tough job, given that so many OpenAI staffers had threatened to quit unless Altman returned.
But some of them may have a landing pad: Microsoft CEO Satya Nadella posted late Sunday on X that Altman, Brockman, and their team will be joining Microsoft to lead a “new advanced AI research team.”