Taylor Owen
So last week, six Israeli intelligence officials spoke to an investigative reporter for a magazine called +972 about what might be the most dangerous weapon in the war in Gaza right now, an AI system called Lavender.
As I discussed in an earlier video, the Israeli Army has been using AI in their military operations for some time now. This isn't the first time the IDF has used AI to identify targets, but historically, these targets had to be vetted by human intelligence officers. But according to the sources in this story, after the Hamas attack of October 7th, the guardrails were taken off, and the Army gave its officers sweeping approval to bomb targets identified by the AI system.
I should say that the IDF denies this. In a statement to the Guardian, they said that Lavender is "simply a database whose purpose is to cross-reference intelligence sources." If the sources' account is accurate, however, it means we've crossed a dangerous Rubicon in the way these systems are being used in warfare. Let me just frame these comments with the recognition that these debates are ultimately about systems that take people's lives. That makes the debate about whether we use them, how we use them, and how we regulate and oversee them both immensely difficult and urgent.
In a sense, these systems and the promises they're based on are not new. Companies like Palantir have long promised clairvoyance from more and more data. At their core, these systems all work in the same way: users upload raw data into them. In this case, the Israeli army loaded in data on known Hamas operatives, location data, social media profiles, and cell phone information, and these data were then used to create profiles of other potential militants.
But of course, these systems are only as good as the training data they are based on. One source who worked with the team that trained Lavender said that some of the data came from employees of the Hamas-run Internal Security Ministry, who aren't considered militants. The source said that even if you believe these people are legitimate targets, using their profiles to train the AI system means the system is more likely to target civilians. And this does appear to be what's happening. The sources say that Lavender is 90% accurate, but this raises profound questions about how accurate we expect and demand these systems to be. Like any other AI system, Lavender is clearly imperfect, but context matters. If ChatGPT hallucinates 10% of the time, maybe we're okay with that. But if an AI system is targeting innocent civilians for assassination 10% of the time, most people would likely consider that an unacceptable level of harm.
With the rise of AI systems in the workplace, it seems inevitable that militaries around the world will begin to adopt technologies like Lavender. Countries around the world, including the US, have set aside billions for AI-related military spending, which means we need to update our international laws for the AI age as urgently as possible. We need to know how accurate these systems are, what data they're being trained on, and how their algorithms are identifying targets, and we need to oversee the use of these systems. It's not hyperbolic to say that new laws in this space will literally be the difference between life and death.
I'm Taylor Owen, and thanks for watching.
In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, looks into the phenomenon he terms the "deepfakification" of social media. He traces the evolution of our social feeds, which began as platforms primarily for sharing updates with friends and are now inundated with content generated by artificial intelligence.
So 2024 might just end up being the year of the deepfake. Not just some fake Joe Biden video or deepfake pornography of Taylor Swift; those are definitely problems, and definitely going to be a big thing this year. But what I see as a bigger problem is what might be called the “deepfakification” of the entire internet, and certainly of our social feeds.
Cory Doctorow has called this broader trend the “enshittification” of the internet, and I think the way AI is playing out in our social media is a very good example of it. What we've seen in our social media feeds has been an evolution. It began with information that our friends shared. It then merged with content that an algorithm thought we might want to see. It then became clickbait and content designed to target our emotions via those same algorithmic systems. But now, when many people open their Facebook or Instagram or TikTok feeds, what they're seeing is content that's been created by AI. AI content is flooding Facebook and Instagram.
So what's going on here? Well, in part, these companies are doing what they've always been designed to do: give us content optimized to keep our attention.
If this content happens to be created by an AI, it might even do that better; it might be designed by the AI specifically to keep our attention. And AI is proving a very useful tool for doing this. But this has had some crazy consequences. It's led, for example, to the rise of AI influencers: rather than real people selling us ideas or products, these are AIs. Companies like Prada and Calvin Klein have hired an AI influencer named Lil Miquela, who has over 2.5 million followers on TikTok. A model agency in Barcelona created an AI model after having trouble dealing with the schedules and demands of prima donna human models. They say they didn't want to deal with people with egos, so they had their AI model do the work instead.
And that AI model brings in as much as €10,000 a month for the agency. But I think this gets at a far bigger issue: it's increasingly difficult to tell whether the things we're seeing are real or fake. If you scroll through the comments on one of these AI influencers' pages, like Lil Miquela's, it's clear that a good chunk of her followers don't know she's an AI.
Now, platforms are starting to deal with this a bit. TikTok requires users themselves to label AI content, and Meta says it will flag AI-generated content. But for this to work, they need a way of signaling it effectively and reliably to us as users, and they just haven't done that. But here's the thing: we can make them do it. The Canadian government's new Online Harms Act, for example, demands that platforms clearly identify AI- or bot-generated content. We can do this, but we have to make the platforms do it. And I don't think that can come a moment too soon.
In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, takes a look at the Canadian government’s Online Harms Act, which seeks to hold social media companies responsible for harmful content – often generated by artificial intelligence.
So last week, the Canadian government tabled its long-awaited Online Harms legislation. Similar to the Digital Services Act in the EU, this is a big, sweeping piece of legislation, so I won't get into all the details. But essentially what it does is put the onus on social media companies to minimize the risk of their products. And in so doing, this bill actually provides a window into how we might start regulating AI.
It does this in two ways. First, the bill requires platforms to minimize the risk of exposure to seven types of harmful content, including self-harm content directed at kids or posts that incite hatred or violence. The key here is that the obligation is on social media platforms, like Facebook or Instagram or TikTok, to minimize the risk of their products, not to take down every piece of bad content. The concern is not with each individual piece of content, but with the way that social media products, and particularly their algorithms, might amplify or help target its distribution. And these products are very often driven by AI.
Second, one area where the proposed law does mandate a takedown of content is intimate image abuse, and that includes deepfakes or content created by AI. If an intimate image is flagged as non-consensual, even if it was created by AI, the platform needs to take it down within 24 hours. Even in a vacuum, AI-generated deepfake pornography or revenge porn is deeply problematic. But what's really worrying is when these things are shared and amplified online. And to get at that element of the problem, we don't actually need to regulate the creation of these deepfakes; we need to regulate the social media platforms that distribute them.
So countries around the world are struggling with how to regulate something as opaque and unknown as the existential risk of AI, but maybe that's the wrong approach. Instead of trying to govern this largely undefined risk, maybe we should be watching countries like Canada, which are starting with the harms we already know about.
Instead of broad, sweeping legislation for AI, we might want to start by regulating the older technologies, like social media platforms, that facilitate many of the harms AI creates.
I'm Taylor Owen and thanks for watching.
In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, explores the issues of responsibility and trust with the widespread deployment of AI. Who bears responsibility when AI makes errors? Additionally, can we rely on AI, and should we trust it?
So last week, a Canadian airline made headlines when a customer sued its chatbot. Not only is this story totally weird, but I think it might give us a hint at who will ultimately be responsible when AI messes up. So, this all started when Jake Moffatt's grandmother passed away and he went to the Air Canada website to see if they had a bereavement policy. He asked the chatbot this question, which told him to book the flight and that he had 90 days to request a refund. It turns out though, that you can't request bereavement refunds retroactively, a policy stated elsewhere on the Air Canada website. But here's where it gets interesting. Moffatt took Air Canada and their AI chatbot to British Columbia's Civil Resolution Tribunal, a sort of small claims court. Air Canada argued that the chatbot is a separate legal entity that is responsible for its own actions.
In other words, the AI is responsible. They lost, though, and were forced to honor a policy that a chatbot made up. They've since deleted their chatbot. This case is so interesting because I think it strikes at two questions at the very core of our AI conversation: responsibility and trust.
First, who's responsible when AI gets things wrong? Is Tesla responsible when their full self-driving car kills somebody? Is a newspaper liable when its AI makes things up and defames somebody? Is a government responsible for false arrests using facial recognition AI? I think the answer is likely to be yes for all of these, and this has huge implications.
Second, and maybe more profound, is the question of whether we can and should trust AI. Anyone who watched the Super Bowl ads this year will know that AI companies are worried about this. AI has officially kicked off its PR campaign, and at the core of that campaign is the question of trust.
According to a recent Pew study, 52% of Americans are more concerned than excited about the growth of AI. So, for the people selling AI tools, this could be a real problem. A lot of these ads, then, seek to build public trust in the tools themselves. The ad for Microsoft Copilot, for example, shows people using an AI assistant to help them write a business plan and draw storyboards for a film, to make their jobs better, not take them away. The message is clear: "We're going to help you do your job better, trust us." Stepping back, though, the risk of being negligent and moving fast and breaking things is that trust is really hard to earn back once you've lost it. Just ask Facebook.
In Jake Moffatt's Air Canada case, all that was at stake was a $650 refund, but with AI starting to permeate every facet of our lives, it's only a matter of time before the stakes are much, much higher.
I'm Taylor Owen, and thanks for watching.
In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, examines how the AI deepfake porn attack on Taylor Swift sheds light on the complexities of the information ecosystem in the biggest election year ever, which includes the US elections.
Okay, so full disclosure, I don't love the NFL and my ten-year-old son is more into Ed Sheeran than Taylor Swift, so she hasn't yet flooded our household. However, when one of the most famous people in the world is caught in a deepfake porn attack driven by a right-wing conspiracy theory, forcing one of the largest platforms in the world to shut down all Taylor Swift-related content, well, now you have my attention. But what are we to make of all this?
The first thing I think it shows is how crazy this US election cycle is going to be. The combination of new AI capabilities, unregulated platforms, a flood of opaque super PAC money, and a candidate who's perfectly willing to fuel conspiracy theories means the information ecosystem this year is going to be a mess.
Second, however, I think we're starting to see some of the policy levers that could be pulled to address this problem. The Defiance Act, tabled in the Senate last week, gives victims of deepfakes the right to sue the people who created them. The Preventing Deepfakes of Intimate Images Act, stuck in the House currently, goes a step further and puts criminal liability on the people who create deepfakes.
Third, though, I think this shows how we need to regulate platforms, not just the AI that creates the deepfakes, because the main problem with this content is not the ability to create it; we've had that for a long time. It's the ability to disseminate it broadly to a large number of people. That's where the real harm lies. For example, one of these Taylor Swift videos was viewed 45 million times and stayed up for 17 hours before it was removed by Twitter. And the hashtag #TaylorSwiftAI was boosted as a trending topic by Twitter, meaning it was algorithmically amplified, not just posted and disseminated by users. So what I think we might start seeing here is a slightly more nuanced conversation about the liability protection that we give to platforms. This might mean that they are now liable for content that is algorithmically amplified, or potentially for content that is created by AI.
All that said, I would not hold my breath for the US to do anything here. For the content regulations we may need, we're probably going to have to look to Europe, to the UK, to Australia, and, this year, to Canada.
So what should we actually be watching for? Well, one thing I would look for is how the platforms themselves respond to what is now an unavoidable problem, and one that has certainly gotten the attention of advertisers. When Elon Musk took over Twitter, he decimated their content moderation team. But Twitter's now announced that they're going to start rehiring one, and you had better believe they're doing this not because of the threat of the US Senate but because of the threat from their biggest advertisers. Advertisers do not want their content placed beside politically motivated deepfake pornography of incredibly popular people. So that's what I'd be watching for here: how the platforms themselves respond to what is a very clear problem, one that exists in part because of how they've designed their platforms and their companies.
I'm Taylor Owen, and thanks for watching.
In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, discusses how the emergence of ChatGPT and other generative AI tools has thrown a new dynamic into his teaching practice, and shares his insights into how colleges have attempted to handle the new phenomenon.
What does education look like in a world with generative AI?
The bottom line here is that we, students, universities, faculty, are simply in uncharted waters. I'm starting to teach my digital policy class for the first time since the emergence of generative AI, and I'm really unsure about how I should be handling this. But here are a few observations.
First, universities are all over the place on what to do. Policies range from outright bans, to updated citation requirements, to broad and largely unhelpful directives, to simply no policies at all. It's fair to say that a consensus has yet to emerge.
The second challenge is that AI detection software, like the plagiarism software we've used before it, is massively problematic. While there are some tools out there, they all suffer from several, in my view, disqualifying flaws. These tools have a tendency to generate false positives, and that really matters when we're talking about academic integrity and, ultimately, plagiarism. What's more, research shows that the use of these tools leads to an arms race between faculty trying to catch students and students trying to evade detection. The other problem, though, ironically, is that these tools may be infringing on students' copyright. When student essays are uploaded into this detection software, their writing is stored and used for future detection. We've seen this same story with earlier-generation plagiarism tools, and I personally want nothing to do with it.
Third, I think banning is not only impossible, but pedagogically irresponsible. The reality is that students, like all of us, have access to these tools and are going to use them. So, we need to move away from this idea that students are the problem and start focusing on how educators can improve their teaching instead.
However, I do worry that a key cognitive skill set we develop at universities, reading and processing information and new ideas and developing our own on top of them, is being lost. We need to ensure that our teaching preserves it.
Ultimately, this is going to be about developing new norms in old institutions, and we know that that is hard. We need new norms around trust in academic work, new methods of evaluating our own work and that of our students, teaching new skill sets and abandoning some old ones, and we need new norms for referencing and for acknowledging work. And yes, this means new norms around plagiarism. Plagiarism has been in the news a lot lately, but the status quo in an age of generative AI is simply untenable.
Perhaps I'm a Luddite on this, but I cannot let go of the idea, entrenched in me, that regardless of how a tool was used for research and developing ideas, the final scholarly product should ultimately be written by people. So, this term, I'm going to try a bunch of things and see what works. I'll let you know what I learn. I'm Taylor Owen, and thanks for watching.
Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode of the series, Taylor Owen looks at the year's first election, in Taiwan, and the implications it could have for the future of technology, including AI.
Hi, I'm Taylor Owen. This is GZERO AI. So welcome to 2024, the year when over 50 democratic countries head to the polls. And we're only a few days away from the first.
On January 13, Taiwanese voters will head to the polls to elect a new president in an election that could have a profound effect on the global economy and on the future of AI. Let me explain. The front-runner in this election is Lai Ching-te, a member of the incumbent Democratic Progressive Party. Lai is generally viewed as being in favor of Taiwanese independence, but the Chinese Communist Party has called him a separatist with a confrontational mentality.
But what does this have to do with the future of AI? Well, it all revolves around a single company, the Taiwan Semiconductor Manufacturing Company, or TSMC. TSMC makes more than 90% of the world's most advanced chips, the kinds of chips that power much of artificial intelligence. And they make those chips on the western coast of Taiwan, only 110 miles from mainland China.
So let's assume that the Democratic Progressive Party wins, as many expect it will, and that the conflict with Beijing escalates. What happens then? Well, it seems to me there are at least two possibilities. One is that because China is so dependent on TSMC for its chips, as we all are, it wouldn't risk an actual attack. This is often referred to as Taiwan's Silicon Shield, a kind of new era of mutually assured destruction.
The other possibility, though, is that China does attack Taiwan. And if that happens, it's not inconceivable that Taiwan would preemptively destroy TSMC's manufacturing facilities. And even if China did take control before that happened, it's unlikely it could continue production. Chip manufacturing is just too contingent on global cooperation.
If TSMC ultimately goes down, the global technology industry could be thrown into turmoil. Virtually no country in the world would be able to build cell phones or cell phone towers. PC production would fall by at least a third, maybe half, and everything from the appliance industry to the automotive industry would take a hit. It would be a global economic crisis, and the progress on AI would be set back years.
While it remains to be seen how this story will play out, one thing is really clear: the global computing industry has a number of incredibly vulnerable choke points, companies like TSMC that an entire industry depends on. While diversifying something as complex as chip manufacturing will be difficult and will require a ton of capital and real democratic leadership, it may be essential if we want to stabilize the industry. Otherwise, the future of technology may be vulnerable to the whims of volatile players like the CCP.
I'm Taylor Owen and thanks for watching.
Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, looks at a new phenomenon in the AI industry: interactive toys powered by AI. But that interactivity comes with a host of privacy concerns, and according to Owen, it doesn't end there.
So, it's that time of year where I start thinking, admittedly far too late, about my holiday shopping. And because I have a ten-year-old child, this means that I am seeing a lot of ads for new kids’ toys. Kids have had interactive toys for decades. Remember Tickle Me Elmo?
But now these interactive toys are being powered by AI. For example, for $1,500, you can buy your kid a Moxie robot. "My name is Moxie. I am a new robot. What is your name?" Moxie is sort of like a robotic best friend. When your kid talks to it, Moxie records those conversations and then uses technology powered by OpenAI to analyze those interactions and react.
Embodied, the company that makes Moxie, says that this helps kids regulate their emotions, provides them with companionship, and boosts their self-esteem. All of which sounds great, but toys like this should also give us pause. Let me explain. A toy like this comes with a whole host of privacy concerns. Moxie records video and audio of your child and then analyzes that data to create facial expression and user image data.
Now, they say they don't store the audio and video recordings, but they do keep the metadata about your child's facial expressions and how they're interacting with the toy. Embodied says it's ultimately parents’ responsibility to ensure that their child isn't giving out personal data. But I don't know, that seems unlikely for a toy that's designed to be your child's digital best friend.
These types of privacy concerns, of course, aren't new. Home assistants like Amazon Alexa and other smart appliances also record and mine your data. And big tech companies aren't likely to move away from this kind of practice, as data collection is essential to their market power. It's pretty clear we're extending this collection practice into the lives of our children.
While privacy concerns with toys like these are well-established, there's another issue that I think requires some thought. How will toys like these affect childhood development? There's a chance these toys could become a powerful tool in helping kids learn and grow. Embodied claims that 71% of the kids that use Moxie saw improved social skills. But this also represents a pretty radical new frontier in childhood development.
What happens when kids are being socialized with robots instead of with other kids? It's often said that AI is going to transform our society, but this may not be a binary event. Sometimes the effects of AI will creep into our lives slowly. Kids' toys, slowly but surely becoming agents, may be one way this happens.
I'm Taylor Owen and thanks for watching.