Apple signs Joe Biden’s pledge
Apple signed on to the Biden administration’s voluntary pledge for artificial intelligence companies on July 26.
President Joe Biden and Vice President Kamala Harris first announced that they had secured commitments from seven major AI developers — Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI — a year ago, in what the administration says laid the groundwork for its executive order on AI adopted in October. The voluntary commitments included safety testing, information sharing on safety risks (with government, academia, and civil society groups), cybersecurity investments, watermarking systems for AI-generated content, and a general agreement to “develop and deploy advanced AI systems to help address society’s greatest challenges.”
Until now, Apple wasn’t on the list. But as it prepares to release new AI-enabled iPhones (powered by OpenAI’s systems as well as its own), the Cupertino-based tech giant is playing nice with the Biden administration, signaling that it will be a responsible actor even without formal legislation on the books.
Norway's school phone ban aims to reclaim "stolen focus", says PM Jonas Støre
Sometimes the best ideas are the ones that seem obvious in retrospect. In recent weeks, Norway's government has made a concerted push to ban smartphones and tablets from classrooms nationwide. Norwegian Prime Minister Jonas Støre explains his administration's radical move, which Education Minister Kari Nessa Nordtun has spearheaded, to Ian Bremmer in a wide-ranging conversation on the sidelines of the Munich Security Conference.
Their interview is featured in the latest episode of GZERO World, airing on US public television stations nationwide (check local listings). Bremmer and Støre's discussion focuses primarily on Norway's energy transition and NATO, but toward the end of the conversation, they turn to schools and screen time, and the remarkable benefits of the ban so far.
"We see students have started to play in the breaks [recess]. The girls say, 'We can finally take a shower after the gym. We are not afraid anymore to be photographed.' And there's a completely different level of social interaction."
This move, Støre explains, reflects a broader effort in Norway to prioritize community well-being and address the effects of the digital age on children's development, including declining reading abilities. It's not just children who benefit from less screen time, he adds, but adults as well. And it's a decision, he argues, that other governments across Europe and around the world would do well to consider.
Watch the full interview on GZERO World with Ian Bremmer on public television beginning this Friday, March 1. Check local listings.
EU lawmakers make AI history
It took two years — long enough to earn a master's degree — but Europe’s landmark AI Act is finally nearing completion. Debates raged last week, but on Friday EU lawmakers reached a provisional agreement on the scope of Europe’s effort to rein in artificial intelligence.
The new rules will follow a two-tiered approach. They will require transparency from general-purpose AI models and impose more stringent safety measures on riskier ones. Generative AI models like OpenAI’s GPT-4 would fall into the former camp and be required to disclose basic information about how the models are trained. But folks in Brussels have also seen "The Terminator," so models deemed a higher risk will have to submit to regular safety tests, disclose any risks, take stringent cybersecurity precautions, and report their energy consumption.
Thierry Breton, the EU’s industrial affairs chief, said Europe had just set itself up as “a pioneer” and “global standard-setter,” noting that the act will be a launchpad for EU startups and researchers and will grant the bloc a “first-mover advantage” in shaping global AI policy.
Mia Hoffmann, a research fellow at Georgetown University’s Center for Security and Emerging Technology, believes the AI Act will “become something of a global regulatory benchmark” similar to GDPR.
Recent sticking points have been over the regulation of large language models, but EU member governments plan to finalize the language in the coming months. Hoffmann says that while she expects it to be adopted soon, “with the speed of innovation, the AI Act's formal adoption in the spring of 2024 can seem ages away.”
Canada averts a Google news block, US bills in the works
Canada's Online News Act, which is modeled on Australian legislation, led Google to threaten to de-index news from its search engine. In protest of the law, Meta, the parent company of Facebook and Instagram, blocked links to Canadian news in the country on both platforms. It’s currently holding out on a deal as Heritage Minister Pascale St-Onge tries to get the company back to the bargaining table.
The Online News Act kerfuffle is a symptom of a bigger issue: the power of governments to regulate large tech firms – a fight that is playing out in Canada, the US, and around the world. California is considering a law similar to Australia's and Canada's. The bill passed the Assembly but is now on hold in the state Senate until 2024. In March, a bipartisan group of lawmakers, led by Sens. Mike Lee and Amy Klobuchar, introduced a similar bill in the US Senate, casting it as an antitrust, pro-competition measure. Meta has made similar threats to pull news in response to the US push to mirror the Australian and Canadian laws.
Tech giants are resisting attempts to extract funds from them to support news media, a tactic that is part of a broader strategy of opposing regulation. But the Australian and Canadian successes may encourage California, the US Congress, and other states to move forward with similar efforts. The coming months will be a test of whether governments are able – and willing – to regulate these powerful companies. All eyes should be on the progress (or lack thereof) of the California and congressional bills, along with Canada’s negotiations with Meta, since these cases will help decide the future of tech regulation itself.
Deepfake it till you make it
Deepfake technology is typically associated with deceit – tricking voters and disrupting democracy – though some uses of it are more innocuous ways of influencing global politics. But the Indian government hasn’t embraced the technology: It recently considered drastic action to compel WhatsApp parent company Meta to break the app’s encryption and identify the creators of deepfake videos of politicians. Deepfakes could, in other words, have a tangible impact on the world’s largest democracy.
In the United States, it’s not just politicians who have clashed with AI over this brand of imitation, but celebrities too. According to a report in “Variety,” actress Scarlett Johansson has taken “legal action” against the app Lisa AI, which used a deepfake version of her image and voice in a 22-second ad posted on X. The law may be on Johansson’s side: California has a right of publicity law prohibiting someone’s name, image, or likeness from being used in an advertisement without permission. In their ongoing strike, Hollywood actors have also been bargaining over how the studios can, if at all, use their image rights with regard to AI.
Deepfake technology is only improving, making it ever more difficult to determine when a politician or celebrity is appearing before your eyes – and when it’s just a dupe. In his recent executive order on AI, President Joe Biden called for new standards for watermarking AI-generated media so people know what’s real and what’s computer generated.
That approach – akin to US consumer protections for advertising – has obvious appeal, but it might not be technically foolproof, experts say. What’s more likely is that the US court system will try to apply existing statutes to new technology, only to reveal (possibly glaring) gaps in the laws.
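To see why experts hedge on watermarking, consider how fragile a naive scheme can be. Below is a toy Python sketch – emphatically not any real vendor's method, nor the standard Biden's order envisions – that hides watermark bits in an image's least significant bits and then shows how ordinary lossy re-encoding wipes the signal out:

```python
# Toy illustration of watermark fragility: hide bits in an image's
# least significant bits (LSBs), then erase them with coarse
# re-quantization, as might happen when re-saving or screenshotting.
import numpy as np

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in "AI image"
mark = rng.integers(0, 2, size=(64, 64), dtype=np.uint8)     # hypothetical watermark bits

def embed(img, bits):
    """Overwrite each pixel's least significant bit with a watermark bit."""
    return (img & 0xFE) | bits

def detect(img, bits):
    """Fraction of pixels whose LSB still matches the watermark."""
    return float(np.mean((img & 1) == bits))

marked = embed(image, mark)
print(f"after embedding:   {detect(marked, mark):.2f}")  # 1.00 -> fully detectable

# Coarse re-quantization zeroes out the low bits entirely:
laundered = ((marked.astype(np.int32) // 4) * 4).astype(np.uint8)
print(f"after re-encoding: {detect(laundered, mark):.2f}")  # ~0.50 -> chance level
```

Production watermarks are far more robust than this sketch, but the underlying cat-and-mouse dynamic is the same: a mark has to survive not just routine processing like this, but deliberate attempts at removal.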
Generative AI and deepfakes have already crept into the 2024 election, including a Republican National Committee ad depicting a dystopian second Biden term. But look closely at the top-left corner of the ad toward the end, and you’ll notice the following disclosure: “Built entirely with AI imagery.” Surely, this won’t be the last we see of AI in this election – we’ll be keeping an eye out for all the ways it rears its head on the campaign trail.
AI governance: Cultivating responsibility
Mustafa Suleyman, a prominent voice in the AI landscape and CEO and co-founder of Inflection AI, contends that effective regulation transcends legal frameworks: it requires a culture of self-regulation and informed regulatory comprehension. Today's AI leaders exhibit a rare blend of optimism and caution, recognizing both the transformative potential and the pitfalls of AI technologies. Suleyman underscores how different this moment is from the era of social media dominance.
This time, AI leaders have been proactive in raising concerns and questions about the technology's impact. The goal is to balance the pace of innovation with prudent safeguards, in the belief that collective effort can make AI's benefits far outweigh its drawbacks. Suleyman highlights that advanced AI models are increasingly controllable and capable of producing desired, safe outputs. He encourages external oversight and welcomes regulation as a proactive and thoughtful measure. The message is clear: the path to harnessing AI's power lies in fostering a culture of responsible development and collaborative regulatory action.
Watch the full conversation: Governing AI Before It’s Too Late
Watch GZERO World with Ian Bremmer every week at gzeromedia.com/gzeroworld or on US public television. Check local listings.
Should the US government be involved with content moderation?
In a decision that sets up a monumental legal battle over the limits of the US government’s power to influence online speech, Louisiana-based District Court Judge Terry Doughty on Tuesday ruled that the Biden administration cannot contact social media platforms for the purpose of moderating content that is otherwise protected by the First Amendment.
What’s the background? The ruling came in a lawsuit filed by Missouri and Louisiana last year, which alleged that the Biden administration had coerced platforms like Twitter, Facebook, and YouTube into suppressing certain views about public health measures during the pandemic, the 2020 election results, and the economy. The government says it merely made suggestions to blacklist content that it believed would cause public health harm or undermine trust in US elections, and that it didn’t force anyone to do anything.
The philosophical question: Who gets to decide? On the one hand, anyone with eyes can see that social media enables lies and disinformation to proliferate at unprecedented speeds. Enlightenment-era notions of free speech designed for a world of hand-printed pamphlets seem potentially out of date today – especially when algorithms that tailor content to partisan tastes have turned the “marketplace of ideas” into a warren of self-contained online kiosks.
But the question is whether the government should be allowed to police content that might otherwise be protected by the First Amendment. Supporters of government intervention say that yes, it’s important to quickly stop lies that could, say, harm public health, or undermine the credibility of elections.
Skeptics – at least the good-faith ones – see it differently. In a world where facts may be black and white (no, the 2020 election was not “stolen”) but viewpoints are grayer (experts still disagree about the efficacy of masking and lockdowns during the pandemic), it’s a fatal mistake, they say, for a democracy to allow the government to police online speech like this. After all, one administration’s “fake news” might soon be another’s “fair question.”
The partisan dimension: Philosophical matters aside, the case has a partisan coloring. It was brought by GOP states, and the presiding judge — a Trump appointee — noted in his opinion that the viewpoints targeted for suppression were mostly ones shared by “conservatives.” What's more, it comes amid a broader campaign by the GOP-controlled House to show that various government institutions have been “weaponized” against them.
Still, ordinary Americans’ views on social media regulation don’t follow party lines as much as you might think. A huge study by the Knight Foundation in 2022 found that a majority of Americans think social media companies contribute to societal divisions, and 90% say these platforms spread disinformation. In other words, people don't feel they can trust social media – a big problem when traditional media are also suffering a long-running crisis of credibility.
But when it comes to solving these problems, things get muddier. Nearly four in five Americans say social media companies can’t be trusted to solve these problems themselves, yet 55% say they prefer to keep government out of those decisions entirely.
While there is a hard-core wing of Democrats who fully support government regulation of online content, and a similar, if smaller, wing of Republicans who oppose any controls whatsoever, the Knight study found that roughly half of Americans’ views on these questions don’t correlate neatly with party affiliation — younger and more politically active internet users of all party affiliations, for example, tended to think social media companies should regulate themselves.
What comes next? The Biden administration will appeal the ruling, and Eurasia Group US expert Jon Lieber says it will likely go all the way to the Supreme Court. If so, the case could land on the docket right as the country enters the homestretch of the 2024 election campaigns. In the meantime, the ruling will limit the administration’s ability to police what it sees as disinformation in the run-up to the vote. Depending on who you are, you’ll think that’s either a bad thing or a good thing.
Speaking of which, let us know what you think. Should the government be allowed to pressure social media companies to suppress content? If not, is there another way to deal with the problem of lies or disinformation online? Email us here, and please include your name and location if you’d like us to consider publishing your response in an upcoming edition of the Daily. Thanks!
Regulate AI, but how? The US isn’t sure
Calls to regulate AI are coming fast and furious now — including from industry pioneers themselves — but so far the world’s largest economy isn’t sure how to do it.
At a meeting this week with EU tech regulators, the Biden administration was reportedly “divided” over how strictly to police the emerging technology, as concerns swirl about AI’s potential to supercharge disinformation, learn and amplify human biases, make decisions that harm people in the real world, or use copyrighted materials to develop its own “thoughts.”
Europe, whose strict rules on digital privacy and competition already make it the baddest cop on the global tech beat, is moving toward strict new AI rules. These would force firms to disclose their underlying code and sources and clearly inform users when content has been generated by AI as opposed to human beings.
But Washington – more cautious on tech regulation overall – is still feeling things out. Unlike Europe, the US is home to leading AI firms whose competitiveness is important not only for the economy but, some say, for national security as well. After all, AI isn’t just a field of technology, it’s a battleground of geopolitics pitting Silicon Valley against (mainly) Chinese firms. Washington wants to find a balance that prevents harm without stifling innovation. And both sides will need to find common ground in order to avoid an even higher-stakes rerun of their ongoing spats over digital privacy.
How much should the US clamp down on AI firms? Please tell us what you think here.