AI policy formation must include voices from the global South
Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode, she explains the need to incorporate diverse and inclusive perspectives in formulating policies and regulations for artificial intelligence. Narrowing the focus primarily to the three major policy blocs—China, the US, and Europe—would overlook crucial opportunities to address risks and concerns unique to the global South.
This is GZERO AI from Stanford's campus, where we just hosted a two-day conference on AI policy around the world. And when I say around the world, I mean truly around the world, including many voices from the Global South, from multilateral organizations like the OECD and the UN, and from the big leading AI policy blocs like the EU, the UK, the US and Japan that all have AI offices for oversight.
But what I really want to focus on is the role of people in the Global South, and how they're underrepresented in discussions about both what AI means in their local context and how they participate in debates around policy, if they do at all. Because right now, our focus is way too much on the three big policy blocs, China, the US and Europe.
Also because, of course, a lot of industry is here around the corner in Silicon Valley. But I've learned so much from listening to people who focus on the African continent, where there are no fewer than 2,000 languages. There are many questions about what AI will mean for those languages and for people's access to the technology, beyond just the exploitative and extractive model under which large language models are trained with cheap labor from people in these developing countries, but also about how the harms can be so different.
For example, disinformation there tends to spread through WhatsApp rather than open social media platforms, and voice, through generative AI, so synthetic voice, is one of the most effective ways to spread it. That's something that's not as prominently recognized here, where there's so much focus on text content and deepfake videos, but not so much on audio. And then, of course, we talked about elections, because a record number of people are voting this year and disinformation around elections tends to pick up.
And AI is really a wild card in that. So my takeaway is that we need to have many more conversations, not so much about AI in the Global South and tech policy there, but with the people who are living in those communities, researching the impact of AI in the Global South, or pushing for fair treatment when their governments are using the latest technologies for repression, for example.
So, lots of fruitful discussion. And I was very grateful that people made it all the way over here to share their perspectives with us.
Israel's Lavender: What could go wrong when AI is used in military operations?
So last week, six Israeli intelligence officials spoke to an investigative reporter for a magazine called +972 about what might be the most dangerous weapon in the war in Gaza right now, an AI system called Lavender.
As I discussed in an earlier video, the Israeli Army has been using AI in their military operations for some time now. This isn't the first time the IDF has used AI to identify targets, but historically, these targets had to be vetted by human intelligence officers. But according to the sources in this story, after the Hamas attack of October 7th, the guardrails were taken off, and the Army gave its officers sweeping approval to bomb targets identified by the AI system.
I should say that the IDF denies this. In a statement to the Guardian, it said that Lavender "is simply a database whose purpose is to cross-reference intelligence sources." If the sources in this story are right, however, it means we've crossed a dangerous Rubicon in the way these systems are being used in warfare. Let me just frame these comments with the recognition that these debates are ultimately about systems that take people's lives. That makes the debate about whether we use them, how we use them, and how we regulate and oversee them both immensely difficult and urgent.
In a sense, these systems and the promises they're based on are not new. Companies like Palantir have long promised clairvoyance from more and more data. At their core, these systems all work in the same way: users upload raw data into them. In this case, the Israeli army loaded in data on known Hamas operatives, location data, social media profiles, cell phone information, and those data are then used to build profiles of other potential militants.
But of course, these systems are only as good as the training data they are based on. One source who worked with the team that trained Lavender said that some of the data used came from employees of the Hamas-run Internal Security Ministry, people who aren't themselves considered militants. The source said that even if you believe these people are legitimate targets, using their profiles to train the AI system makes it more likely to target civilians. And this does appear to be what's happening. The sources say Lavender is 90% accurate, but that raises profound questions about how accurate we expect and demand these systems to be. Like any other AI system, Lavender is clearly imperfect, but context matters. If ChatGPT hallucinates 10% of the time, maybe we're okay with that. But if an AI system is targeting innocent civilians for assassination 10% of the time, most people would likely consider that an unacceptable level of harm.
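To make that error rate concrete, here is a minimal back-of-the-envelope sketch. The number of flagged targets below is a hypothetical placeholder chosen purely for illustration, not a figure from the +972 reporting; only the 90% accuracy claim comes from the sources quoted above.

```python
# Back-of-the-envelope illustration of what a reported 90% accuracy rate
# means at scale. The target count is a hypothetical placeholder, not a
# number taken from the reporting on Lavender.

reported_accuracy = 0.90   # "90% accurate," per the sources quoted above
flagged_targets = 10_000   # hypothetical count, for illustration only

expected_misidentified = flagged_targets * (1 - reported_accuracy)
print(f"Out of {flagged_targets:,} flagged targets, roughly "
      f"{expected_misidentified:,.0f} people could be wrongly identified.")
# -> Out of 10,000 flagged targets, roughly 1,000 people could be wrongly identified.
```

The point of the arithmetic is simply that a failure rate that sounds small in percentage terms becomes a large absolute number of people once a system is applied at scale.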
With the rise of AI systems in the workplace, it seems like an inevitability that militaries around the world will begin to adopt technologies like Lavender. Countries around the world, including the US, have set aside billions for AI-related military spending, which means we need to update our international laws for the AI age as urgently as possible. We need to know how accurate these systems are, what data they're being trained on, how their algorithms are identifying targets, and we need to oversee the use of these systems. It's not hyperbolic to say that new laws in this space will literally be the difference between life and death.
I'm Taylor Owen, and thanks for watching.
OpenAI is risk-testing Voice Engine, but the risks are clear
About a year ago, I was part of a small meeting where I was asked to read a paragraph, sort of random text to me, it seemed. But before I knew it, I heard my own voice very convincingly, saying things through the speakers of the conference room that I had never said and would never say.
And it was really, you know, a sort of goosebump moment, because I realized that generative AI used for voice was already very convincing. That was a prototype of Voice Engine, which the New York Times now reports is a new OpenAI product that the company is choosing to release only to a limited set of users while it is still testing the risky uses.
And I don't think this testing with a limited set of users is needed to understand the risks. We've already heard of fraudulent robocalls impersonating President Biden. We've heard of criminals trying to deceive parents with voice messages that sound like their children, claiming to be in trouble and asking the parent to send money, which then, of course, benefits the criminal group, not their children.
So the risks of voice impersonation are clear. Of course, companies will also point to opportunities, such as helping people who may have lost their voice through illness or disability, which I think is an important opportunity to explore. But we cannot be naive about the risks. And so, in response to the political robocalls, the Federal Communications Commission at least drew a line and said that AI-generated voices cannot be used in these calls. So there are some restrictions. But all in all, we need to see more independent assessment of these new technologies and a level playing field for all companies, not just those that choose to pace the release of their new models but also those that want to race ahead. Because sooner or later, one company or another will, and we will all potentially be confronted with widely accessible, voice-generating artificial intelligence.
So it is a tricky moment, when the race to market and the rapid development of these technologies, which also bring a lot of risk and harm, are an ongoing dynamic in the AI space. And so I hope that, as discussions around regulation and guardrails happen around the world, the full spectrum of use cases that we know and can anticipate will be on the table, with the aim of keeping people free from crime and our democracy safe, while making sure that if there is a benefit for people in minority and disabled communities, they can benefit from this technology as well.
Social media's AI wave: Are we in for a “deepfakification” of the entire internet?
In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, looks into the phenomenon he terms the "deepfakification" of social media. He traces the evolution of our social feeds, which began as platforms primarily for sharing updates with friends and are now inundated with content generated by artificial intelligence.
So 2024 might just end up being the year of the deepfake. Not just some fake Joe Biden video or deepfake pornography of Taylor Swift. Those are definitely problems, and definitely going to be a big thing this year. But what I see as a bigger problem is what might be called the “deepfakification” of the entire internet, and definitely of our social feeds.
Cory Doctorow has called this, more broadly, the “enshittification” of the internet. And I think the way AI is playing out in our social media is a very good example of this. What we've seen in our social media feeds is an evolution. It began with information that our friends shared. It then merged with content that an algorithm thought we might want to see. It then became clickbait and content designed to target our emotions via these same algorithmic systems. But now, when many people open their Facebook or Instagram or TikTok feeds, what they're seeing is content that's been created by AI. AI content is flooding Facebook and Instagram.
So what's going on here? Well, in part, these companies are doing what they've always been designed to do, to give us content optimized to keep our attention.
If this content happens to be created by an AI, it might even do that better. It might be designed by the AI in a way that keeps our attention, and AI is proving a very useful tool for doing this. But this has had some crazy consequences. It's led to the rise, for example, of AI influencers: rather than real people selling us ideas or products, these are AIs. Companies like Prada and Calvin Klein have hired an AI influencer named Lil Miquela, who has over 2.5 million followers on TikTok. A model agency in Barcelona created an AI model after having trouble dealing with the schedules and demands of prima donna human models. They say they didn't want to deal with people with egos, so they created an AI model instead.
And that AI model brings in as much as €10,000 a month for the agency. But I think this gets at a far bigger issue, which is that it's increasingly difficult to tell whether the things we're seeing are real or fake. If you scroll through the comments on one of these AI influencers' pages, like Lil Miquela’s, it's clear that a good chunk of her followers don't know she's an AI.
Now, platforms are starting to deal with this a bit. TikTok requires users themselves to label AI content, and Meta says it will flag AI-generated content. But for this to work, they need a way of signaling it effectively and reliably to us as users, and they just haven't done that. But here's the thing: we can make them do it. The Canadian government's new Online Harms Act, for example, demands that platforms clearly identify AI- or bot-generated content. We can do this, but we have to make the platforms do it. And I don't think that can come a moment too soon.
Should we regulate generative AI with open or closed models?
Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. Fresh from a workshop hosted by Princeton's Institute for Advanced Study, where the discussion centered on whether generative AI is better governed through open models or through closed models controlled by a select few companies, she shares in this episode her insights into the potential workings, effectiveness, and drawbacks of each approach.
We just finished a half-week workshop that dealt with the billion-dollar question of how best to regulate generative AI. And often this discussion tends to get quite tribal, between those who say, “Well, open models are the best route to safety because they foster transparency and learning for a larger community, which also means scrutiny for things that might go wrong,” and those who say, “No, actually, closed and proprietary models, which can only be scrutinized by the handful of companies able to produce them, are safer, because then malign actors may not get their hands on the most advanced technology.”
And one of the key takeaways that I have from this workshop, which was kindly hosted by Princeton's Institute for Advanced Study, is actually that the question of open versus closed models, but also the question of whether or not to regulate, is much more of a gradient. So, there is a big spectrum of considerations, from models that are all the way open and what that means for safety and security, to models that are all the way closed and what that means for opportunities for oversight, as well as the whole discussion about whether or not to regulate and what good regulation looks like. One discussion that we had, for example, was how we can assess the most advanced or frontier models in a research phase with independent, government-mandated oversight, and then decide more deliberately when these new models are safe enough to be put out into the market, or the wild.
That way, there is actually much less of the cutthroat market dynamic that leads companies to just push out their latest models out of concern that a competitor might be faster, and there is oversight built in that really considers, first and foremost, what is important for society and for the most vulnerable, for anything from national security to election integrity to, for example, nondiscrimination principles, which are already under enormous pressure thanks to AI.
So, a lot of great takeaways to continue working on. We will hopefully publish something that I can share soon, but these were my takeaways from an intense two and a half days of AI discussions.
AI and Canada's proposed Online Harms Act
In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, takes a look at the Canadian government’s Online Harms Act, which seeks to hold social media companies responsible for harmful content – often generated by artificial intelligence.
So last week, the Canadian government tabled its long-awaited Online Harms legislation. Similar to the Digital Services Act in the EU, this is a big, sweeping piece of legislation, so I won't get into all the details. But essentially what it does is put the onus on social media companies to minimize the risk of their products. And in so doing, this bill actually provides a window into how we might start to regulate AI.
It does this in two ways. First, the bill requires platforms to minimize the risk of exposure to seven types of harmful content, including self-harm content directed at kids or posts that incite hatred or violence. The key here is that the obligation is on social media platforms, like Facebook or Instagram or TikTok, to minimize the risk of their products, not to take down every piece of bad content. The concern is not with each individual piece of content, but with the way that social media products, and particularly their algorithms, might amplify or help target its distribution. And these products are very often driven by AI.
Second, one area where the proposed law does mandate a takedown of content is when it comes to intimate image abuse, and that includes deepfakes or content that's created by AI. If an intimate image is flagged as non-consensual, even if it's created by AI, it needs to be taken down by the platform within 24 hours. Even in a vacuum, AI-generated deepfake pornography or revenge porn is deeply problematic. But what's really worrying is when these things are shared and amplified online. And to get at that element of this problem, we don't actually need to regulate the creation of these deepfakes, we need to regulate the social media that distributes them.
So countries around the world are struggling with how to regulate something as opaque and unknown as the existential risk of AI, but maybe that's the wrong approach. Instead of trying to govern this largely undefined risk, maybe we should be looking to countries like Canada that are starting with the harms we already know about.
Instead of broad, sweeping legislation for AI, we might want to start by regulating the older technologies, like social media platforms, that facilitate many of the harms that AI creates.
I'm Taylor Owen and thanks for watching.
Voters beware: Elections and the looming threat of deepfakes
With AI tools already being used to manipulate voters across the globe via deepfakes, more needs to be done to help people comprehend what this technology is capable of, says Microsoft vice chair and president Brad Smith.
Smith highlighted a recent example of AI being used to deceive voters in New Hampshire.
“The voters in New Hampshire, before the New Hampshire primary, got phone calls. When they answered the phone, there was the voice of Joe Biden — AI-created — telling people not to vote. He did not authorize that; he did not believe in it. That was a deepfake designed to deceive people,” Smith said during a Global Stage panel on AI and elections on the sidelines of the Munich Security Conference last month.
“What we fundamentally need to start with is help people understand the state of what technology can do and then start to define what's appropriate, what is inappropriate, and how do we manage that difference?” Smith went on to say.
Watch the full conversation here: How to protect elections in the age of AI
Deepfakes and dissent: How AI makes the opposition more dangerous
Former US National Security Council advisor Fiona Hill has plenty of experience dealing with dangerous dictators – but 2024 is even throwing her some curveballs.
After Imran Khan upset the Pakistani establishment in February’s elections by using AI to rally his voters from behind bars, Hill thinks authoritarians must reconsider their strategies for suppressing dissent.
Speaking at a Global Stage panel on AI and elections hosted by GZERO and Microsoft on the sidelines of the Munich Security Conference, she said that in this new world, someone like Alexei Navalny “would've been able to use AI in some extraordinary creative way to shake up what in the case of the Russian election is something of a foregone conclusion.”
The conversation was part of the Global Stage series, produced by GZERO in partnership with Microsoft. These discussions convene heads of state, business leaders, and technology experts from around the world for critical debate about the geopolitical and technology trends shaping our world.