Israel's Lavender: What could go wrong when AI is used in military operations?
So last week, six Israeli intelligence officials spoke to an investigative reporter for a magazine called +972 about what might be the most dangerous weapon in the war in Gaza right now, an AI system called Lavender.
As I discussed in an earlier video, the Israeli Army has been using AI in their military operations for some time now. This isn't the first time the IDF has used AI to identify targets, but historically, these targets had to be vetted by human intelligence officers. But according to the sources in this story, after the Hamas attack of October 7th, the guardrails were taken off, and the Army gave its officers sweeping approval to bomb targets identified by the AI system.
I should say that the IDF denies this. In a statement to the Guardian, they said that "Lavender is simply a database whose purpose is to cross-reference intelligence sources." If the sources' account is true, however, it means we've crossed a dangerous Rubicon in the way these systems are being used in warfare. Let me frame these comments with the recognition that these debates are ultimately about systems that take people's lives. That makes the debate about whether we use them, how we use them, and how we regulate and oversee them immensely difficult, but also urgent.
In a sense, these systems and the promises they're based on are not new. Companies like Palantir have long promised clairvoyance from more and more data. At their core, these systems all work the same way: users upload raw data into them. In this case, the Israeli army loaded in data on known Hamas operatives, including location data, social media profiles, and cell phone information, and these data are then used to create profiles of other potential militants.
But of course, these systems are only as good as the training data they're based on. One source who worked with the team that trained Lavender said that some of the data they used came from people affiliated with the Hamas-run Internal Security Ministry, who aren't considered militants. The source said that even if you believe these people are legitimate targets, using their profiles to train the AI system makes the system more likely to target civilians. And this does appear to be what's happening. The sources say that Lavender is 90% accurate, but this raises profound questions about how accurate we expect and demand these systems to be. Like any other AI system, Lavender is clearly imperfect, but context matters. If ChatGPT hallucinates 10% of the time, maybe we're okay with that. But if an AI system is targeting innocent civilians for assassination 10% of the time, most people would likely consider that an unacceptable level of harm.
With the rise of AI systems in the workplace, it seems like an inevitability that militaries around the world will begin to adopt technologies like Lavender. Countries around the world, including the US, have set aside billions for AI-related military spending, which means we need to update our international laws for the AI age as urgently as possible. We need to know how accurate these systems are, what data they're being trained on, how their algorithms are identifying targets, and we need to oversee the use of these systems. It's not hyperbolic to say that new laws in this space will literally be the difference between life and death.
I'm Taylor Owen, and thanks for watching.
Hard Numbers: Amazon’s AI ambitions, what to use ChatGPT for, energy crisis, Enter Stargate
2.75 billion: Amazon invested an additional $2.75 billion in the AI startup Anthropic, which makes the popular chatbot Claude, bringing its total investment to around $4 billion; Google also has a $2 billion stake in the company. Big tech giants like Amazon, Google, and Microsoft (which has a $13 billion deal with OpenAI) have chosen investments and strategic partnerships over buying startups outright. Amazon also announced it'll spend $150 billion on data centers over the next 15 years to support its AI ambitions.
2: 20% of US adults say they’ve used ChatGPT for work, up from 12% just six months ago, according to a new survey by Pew Research Center. But only 2% of Americans surveyed said they’ve used the chatbot to gather information about the country’s upcoming elections—a good sign for people worrying about the immediate impact of AI tools that have a tendency to make stuff up.
4: The electricity used by data centers, cryptocurrency, and artificial intelligence represented nearly 2% of global energy use in 2022, according to the International Energy Agency. That number could double to 4% by 2026 if current trends continue.
100 billion: Microsoft and OpenAI are reportedly teaming up to build data centers along with a supercomputer, nicknamed “Stargate,” to power their artificial intelligence systems. The project, which has yet to be greenlit, could cost a staggering $100 billion.
Social media's AI wave: Are we in for a “deepfakification” of the entire internet?
In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, looks into the phenomenon he terms the "deepfakification" of social media. He points out the evolution of our social feeds, which began as platforms primarily for sharing updates with friends, and are now inundated with content generated by artificial intelligence.
So 2024 might just end up being the year of the deepfake. Not some fake Joe Biden video or deepfake pornography of Taylor Swift. Those are definitely problems, and definitely going to be a big thing this year. But what I see as a bigger problem is what might be called the “deepfakification” of the entire internet, and certainly of our social feeds.
Cory Doctorow has called this more broadly the “enshittification” of the internet. And I think the way AI is playing out in our social media is a very good example of this. What we see in our social media feeds has been an evolution. It began with information our friends shared. It then mixed in content that an algorithm thought we might want to see. It then became clickbait and content designed to target our emotions via those same algorithmic systems. But now, when many people open their Facebook, Instagram, or TikTok feeds, what they're seeing is content that's been created by AI. AI content is flooding Facebook and Instagram.
So what's going on here? Well, in part, these companies are doing what they've always been designed to do, to give us content optimized to keep our attention.
If this content happens to be created by an AI, it might even do that better. It might be designed by the AI specifically to keep our attention, and AI is proving a very useful tool for doing this. But this has had some crazy consequences. It's led, for example, to the rise of AI influencers: rather than real people selling us ideas or products, these are AIs. Companies like Prada and Calvin Klein have hired an AI influencer named Lil Miquela, who has over 2.5 million followers on TikTok. A model agency in Barcelona created an AI model after having trouble dealing with the schedules and demands of prima donna human models. They say they didn't want to deal with people with egos, so they built an AI model to do the work instead.
And that AI model brings in as much as €10,000 a month for the agency. But I think this gets at a far bigger issue: it's increasingly difficult to tell whether the things we're seeing are real or fake. If you scroll through the comments on one of these AI influencers' pages, like Lil Miquela's, it's clear that a good chunk of her followers don't know she's an AI.
Now, platforms are starting to deal with this a bit. TikTok requires users themselves to label AI content, and Meta says it will flag AI-generated content. But for this to work, they need a way of signaling this effectively and reliably to us as users, and they just haven't done that. But here's the thing: we can make them do it. The Canadian government's new Online Harms Act, for example, demands that platforms clearly identify AI- or bot-generated content. We can do this, but we have to make the platforms do it. And I don't think that can come a moment too soon.
Voters beware: Elections and the looming threat of deepfakes
With AI tools already being used to manipulate voters across the globe via deepfakes, more needs to be done to help people comprehend what this technology is capable of, says Microsoft vice chair and president Brad Smith.
Smith highlighted a recent example of AI being used to deceive voters in New Hampshire.
“The voters in New Hampshire, before the New Hampshire primary, got phone calls. When they answered the phone, there was the voice of Joe Biden — AI-created — telling people not to vote. He did not authorize that; he did not believe in it. That was a deepfake designed to deceive people,” Smith said during a Global Stage panel on AI and elections on the sidelines of the Munich Security Conference last month.
“What we fundamentally need to start with is help people understand the state of what technology can do and then start to define what's appropriate, what is inappropriate, and how do we manage that difference?” Smith went on to say.
Watch the full conversation here: How to protect elections in the age of AI
Deepfakes and dissent: How AI makes the opposition more dangerous
Former US National Security Council advisor Fiona Hill has plenty of experience dealing with dangerous dictators – but 2024 is even throwing her some curveballs.
After Imran Khan upset the Pakistani establishment in February’s elections by using AI to rally his voters behind bars, she thinks authoritarians must reconsider their strategies around suppressing dissent.
Speaking at a Global Stage panel on AI and elections hosted by GZERO and Microsoft on the sidelines of the Munich Security Conference, she said that in this new world, someone like Alexei Navalny “would've been able to use AI in some extraordinary creative way to shake up what in the case of the Russian election is something of a foregone conclusion.”
The conversation was part of the Global Stage series, produced by GZERO in partnership with Microsoft. These discussions convene heads of state, business leaders, and technology experts from around the world for critical debate about the geopolitical and technology trends shaping our world.
Watch the full conversation here: How to protect elections in the age of AI
Gemini AI controversy highlights AI racial bias challenge
Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode, she questions whether big tech companies can be trusted to tackle racial bias in AI, especially in the wake of Google's Gemini software controversy. Importantly, should these companies be the ones designing and deciding what that representation looks like?
This was a week full of AI-related stories. The one that stood out to me was Google's effort to correct for bias and discrimination in its generative AI model, and its utter failure to do so. We saw Gemini, the model in question, producing synthetically generated images of very ethnically diverse Nazis. Of all political ideologies, this white supremacist movement, of course, historically had few, if any, people of color in it. And unfortunately, the same holds as the movement continues to exist, albeit in smaller form, today.
And so, lots of questions, embarrassing rollbacks by Google of their new model, and big questions, I think, about what we can expect in terms of corrections here. Because the problem of bias and discrimination has been well researched by people like Joy Buolamwini, whose new book “Unmasking AI” and previous research “Coded Bias” established how models from the largest and most popular companies remain deeply flawed, with harmful and even illegal consequences.
So, it raises the question: how much grip do the engineers developing these models really have on what the outcomes can be, and how could this have gone so wrong when the product has already been put onto the market? There are even those who say it is impossible to be fully representative in a fair way. And it is a big question whether companies should be the ones designing and deciding what that representation looks like. Indeed, with so much power concentrated in these models and so many questions about how controllable they are, we should really ask ourselves when these products are ready to go to market, and what the consequences should be when people are discriminated against. This is not just about the revelation of an embarrassing flaw in a model; it can have real-world consequences, from misleading notions of history to treating people in ways that violate protections against discrimination.
So, even if there was a lot of outcry, and sometimes even a sort of entertainment, about how poorly this model performed, I think there are bigger lessons about AI governance to be learned from what we saw from Google's Gemini this past week.
Tech accord on AI & elections will help manage the ‘new reality,’ says Microsoft’s Brad Smith
At the Munich Security Conference, leading tech companies unveiled a new accord that committed them to combating AI-generated content that could disrupt elections.
During a Global Stage panel on the sidelines of this year’s conference, Microsoft Vice Chair and President Brad Smith said the accord would not completely solve the problem of deceptive AI content but would help “manage this new reality in a way that will make a difference and really serve all of the elections… between now and the end of the year.”
As Smith explains, the accord is designed to bring the tech industry together to preserve the “authenticity of content,” including via the creation of content credentials. The industry will also work to detect deepfakes and provide candidates with a mechanism to report them, says Smith, while also taking steps to “promote transparency and education.”
Watch the full conversation here: How to protect elections in the age of AI
Deepfakes are ‘fraud,’ says Microsoft's Brad Smith
The rapid rise of AI has presented a wide array of challenges, particularly in terms of finding a balance between protecting the right to free expression and safeguarding democracy from the corrosive effects of misinformation.
But Microsoft Vice Chair and President Brad Smith says freedom of expression does not apply to deepfakes — fake images or videos created via AI, which can involve using someone else’s face and/or voice without their permission. During a Global Stage panel on AI and elections at the Munich Security Conference, Smith unequivocally decried deepfakes as a form of “fraud.”
“The right to free expression gives me the right to stand up and say what is on my mind,” says Smith, adding, “I do not have the right to steal and use your voice. Your voice belongs to you and you alone… Let's give people the right to say what they think. Let's not steal their voice and put words in their mouth.”
Watch the full conversation here: How to protect elections in the age of AI