How will Trump 2.0 impact AI?
In this episode of GZERO AI, Taylor Owen, host of the Machines Like Us podcast, reflects on five broad worries about the implications of the US election for artificial intelligence.
I spent the past week in the UK and Europe talking to a ton of people in the tech and democracy community. And of course, everybody just wanted to talk about the implications of the US election. It's safe to say that there are some pretty grave concerns, so I thought I could spend a few minutes, a few more than I usually do in these videos, outlining the nature and type of these concerns, particularly amongst those worried about the conflation of power between national governments and tech companies. In short, I heard five broad worries.
First, that we're going to see an unprecedented confluence of tech power and political power. In short, the influence of US tech money is going to be turbocharged. This influence, of course, always existed, but the two are now far more fully joined. This means that the interests of a small number of companies will be one and the same as the interests of the US government. Musk's interests (Tesla, Starlink, Neuralink) are sure to be front and center. But companies like Peter Thiel's Palantir and Palmer Luckey's Anduril are also likely to get massive new defense contracts. And the crypto investments of some of Silicon Valley's biggest VCs are sure to be boosted and supported.
The flip side of this concentration of power in some of Silicon Valley's more libertarian conservatives is that tech companies on the wrong side of this realignment might find trouble. Musk adding Microsoft to his OpenAI lawsuit is an early tell. It'll be interesting to see where Zuckerberg and Bezos land, given Trump's animosity toward both.
Second, for democratic countries outside of the US, we're going to see a severe erosion of digital governance sovereignty. Simply put, it's going to become tremendously hard for countries to govern digital technologies, including online platforms, AI, biotech, and crypto, in ways that aren't aligned with US interests. The main lever the Trump administration has to pull in this regard is bilateral trade agreements. These are going to be the big international sticks, likely to overwhelm both the enforcement of existing tech policy and the development of new tech policy.
In Canada, for example, our News Media Bargaining Code, our Online Streaming Act, and our Digital Services Tax are all already under fire in US trade disputes. When the USMCA is reopened, as it likely will be, expect all of these to be on the table, and expect the Canadian government, whoever is in power, to fold, putting our reliance on US trade over our digital policy agenda. The broader spillover effect of this trade pressure is that countries are unlikely to develop new digital policies over the course of the Trump term. And for those policies that aren't repealed, enforcement of existing laws is likely to be slowed or halted entirely. Europe, for example, is very unlikely to enforce Digital Services Act provisions against X.
Third, we're likely to see the silencing of US researchers and civil society groups working in the tech and democracy space. This will be done, ironically, in the name of free speech. Early attacks from Jim Jordan against disinformation researchers at US universities are only going to be ramped up. Marc Andreessen and Musk have both called for researchers working on election interference and misinformation to be prosecuted. And Trump has called for suspending the nonprofit status of universities that have housed this work.
Faced with this kind of existential threat, universities are very likely to abandon these scholars and their labs entirely. Civil society groups working on these same issues are going to be targeted, and many are sure to close under this pressure. It's simply tragic that efforts to better understand how information flows through our digital media ecosystem will be rendered impossible right when they're needed the most, at a time when the health and integrity of that ecosystem is under attack. All in the name of protecting free speech. This is Kafkaesque, to say the least.
Fourth, and in part as a result of all of the above, internationally we may see new political space opened up for conversations about national communications infrastructure. For decades, the driving force in the media policy debate has been one of globalization and the adoption of largely US-based platforms. This argument has been a real headwind for those who, as in previous generations, urged the development of national capacities and protectionist media policy. But I wonder how long the status quo is tenable in a world where the richest person in the world owns a major social media platform and dominates global low-orbit broadband.
Does a country like Canada, for example, want to hand our media infrastructure over to a single individual? One who has shown careless disregard for the one media platform he already controls and shapes? Will other countries follow America's lead if Trump sells US broadcast licenses and targets American journalism? Will killing Section 230, as Trump has said he wants to do, and the limits that would place on platforms' ability to moderate even the worst online abuse, further hasten the enforcement of national digital borders?
Fifth and finally, how things play out for AI is actually a bit of a mystery, but the outcome will likely err on the side of unregulated markets. While Musk may once have been a champion of AI regulation and had legitimate concerns about unchecked AGI, he now seems more concerned about the political bias of AI than about any sort of existential risk. As the head of a new government agency mandated to cut a third of the federal budget, Musk is more likely to see AI as a cheap replacement for human labor than as a threat that needs a new agency to regulate it.
In all of this, one thing is for certain, we really are in for a bumpy ride. For those that have been concerned about the relationship between political and tech power for well over a decade, our work has only just begun. I'm Taylor Owen and thanks for watching.
AI's existential risks: Why Yoshua Bengio is warning the world
In this episode of GZERO AI, Taylor Owen, host of the Machines Like Us podcast, reflects on the growing excitement around artificial intelligence. At a recent AI conference he attended, Owen observes that while startups and officials emphasized AI's economic potential, prominent AI researcher Yoshua Bengio voiced serious concerns about its existential risks. Bengio, who was crucial to the development of the technology, stresses the importance of cautious public policy, warning that current AI research tends to prioritize power over safety.
A couple of weeks ago, I was at this big AI conference in Montreal called All In. It was all a bit over the top. There were smoke machines, loud music, and food trucks. It's clear that AI has come a long way from the quiet labs it was developed in. I'm still skeptical of some of the hype around AI, but there's just no question we're in a moment of great enthusiasm. There were dozens of startup founders there talking about how AI was going to transform this industry or that, and government officials promising that AI was going to supercharge our economy.
And then there was Yoshua Bengio. Bengio is widely considered one of the world's most influential computer scientists. In 2018, he and two colleagues won the Turing Award, the Nobel Prize of computing, for their work on deep learning, which forms the foundation of much of today's AI. In 2022, he was the most cited computer scientist in the world. It's really safe to say that AI, as we currently know it, might not exist without Yoshua Bengio.
And I recently got the chance to talk to Bengio for my podcast, "Machines Like Us." I wanted to find out what he thinks about AI now, about the current moment we're in, and I learned three really interesting things. First, Bengio has had an epiphany of sorts, as has been widely talked about in the media. He now believes that, left unchecked, AI has the potential to pose an existential threat to humanity. And so he's asking us: even if there's only a small chance of this, why not proceed with tremendous caution?
Second, he actually thinks that the divide over this existential risk, which seems to exist in the scientific community, is being overplayed. He and Meta's Yann LeCun, for example, with whom he shared the Turing Award, differ on the timeframe of this risk and on the ability of industry to contain it. But Bengio argues they agree on the possibility of it. And in his mind, it's this possibility that should actually create clarity in our public policy. Without certainty over risk, he thinks the precautionary principle should lead, particularly when the risk is so potentially grave.
Third, and really interestingly, he's concerned about the incentives being prioritized in this moment of AI commercialization. This extends from executives like LeCun potentially downplaying risk and overstating industry's ability to contain it, right down to the academic research labs where a majority of the work is currently focused on making AI more powerful, not safer. This is a real warning that I think we need to heed. There's just no doubt that Yoshua Bengio's research contributed greatly to the current moment of AI we're in, but I sure hope his work on risk and safety shapes the next. I'm Taylor Owen and thanks for watching.
AI is turbocharging the stock market, but is it all hype?
In this episode of GZERO AI, Taylor Owen, host of the Machines Like Us podcast, explores how artificial intelligence is turbocharging the stock market and transforming our economy. With AI driving the S&P 500 to new heights and drastically boosting NVIDIA's stock, researchers predict a future where we could be 1,000 times wealthier. However, Owen raises critical questions about whether this rapid growth is sustainable or simply a bubble ready to burst.
So whatever your lingering skepticism of this current moment of AI hype might be, one thing is undeniable: AI is turbocharging the stock market and the economy more broadly.
The S&P 500 hit an all-time high this year, largely driven by AI. NVIDIA's stock jumped 700% since the launch of ChatGPT, at one point making it the most valuable company in the world. And some researchers think this is going to get even crazier. They argue that, because of AI, we could see 30% annual per capita economic growth by 2100. What this means is that after 25 years of 30% per capita growth, we would be roughly 1,000 times richer than we are now.
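To see where that figure comes from, here is a quick back-of-the-envelope compounding check (my arithmetic, not the researchers' own model):

\[
(1 + 0.30)^{25} \approx 7 \times 10^{2}, \qquad (1 + 0.30)^{27} \approx 1.2 \times 10^{3}
\]

In other words, sustained 30% annual growth compounds to roughly a thousandfold increase in income over about a quarter century.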
But what are these wild predictions based on? It really comes down to human labor being replaced by AI. These economists argue that AI could replace humans, and that machines could do all sorts of things humans can't, or do things we already do much better. Perhaps more importantly, machine labor isn't constrained by the number of humans we have in the workforce. We could scale labor in a way unconstrained by human capacity. This, they argue, fundamentally changes the core dynamics of the economy.
But these are still predictions, and they're wild speculations, often promoted by the very same people who will benefit the most from the hype around AI. There's just no good evidence at this point that these things are necessarily going to come to fruition. And even if this wealth is generated, this 1,000-times-richer wealth, there's no guarantee of how it will be distributed, who will get it, who will benefit and who won't. It's pretty clear that the wealth is likely to trickle up to those who own and control these technologies, as it has in the past. It's also clear that those who are most precarious in the workforce will be the most vulnerable and likely the most harmed. If we're talking about machines replacing humans, that is almost certainly going to be women and minorities, who are overrepresented in the service workforce.
Some argue that UBI could be a solution to this, that we should simply take this excess wealth and distribute it to all of us so that we don't have to work. But there's a real problem here: people find meaning in their work. I recently spoke to Rana Foroohar, a global economic reporter for the Financial Times, and she made this case really powerfully to me. We derive meaning from work, and if you take that away, there are going to be serious political repercussions. We've already started to see them. Because of all of this, Rana thinks we're in a bubble. She thinks the economy simply can't run this hot for this long; it would be historically unprecedented, she argues, for this to go on much longer. She also argues that the narratives about why this economic growth is going to happen are simply too tenuous to support the economic activity being built on them. And this, for her, is a clear sign that we're in a bubble. When you have a single narrative that doesn't allow for any contradictions, a narrative of certainty about a path that is supporting a huge amount of economic activity, that is the sign of a bubble.
Finally, she argues that the economic growth is simply too concentrated. Too few people are seeing the benefits of it at the moment; six or seven tech companies are responsible for the bulk of the value being generated around AI. This concentration is not broadly good for society. And if this tech bubble collapses, we are all on the hook for it. Like any bubble, we as a society, our pension funds, our investments, our retirements, the rest of the economy, are being floated by it, so we need to think really carefully about how and when it deflates.
How is AI shaping culture in the art world?
In this episode of GZERO AI, Taylor Owen, host of the Machines Like Us podcast, recounts his conversation with media theorist Douglas Rushkoff about the cultural implications of the ongoing AI revolution, which raised a couple of questions: Will AI enhance cultural production, as Auto-Tune and Photoshop did, or produce art that truly moves society? Will people even care about its role in cultural production? However, Owen notes that current AI-generated content often lacks the cultural depth that our art and culture demand.
So, I recently had a wonderful conversation with the media theorist Douglas Rushkoff about what this current moment in AI means for our culture.
For the past 30 years, Rushkoff has been chronicling how our cultural production responds to emerging technologies. And in our conversation, he referenced a really wonderful observation from Neil Postman, the great media theorist who came up with the idea of "amusing ourselves to death." When Postman was asked to describe what media is, he said that media is a medium in which culture grows, the Petri dish in which we develop culture as a society. It's a wonderful metaphor, and it left me wondering: if a medium is the thing in which culture grows, what kind of culture is growing from AI?
Will this culture be more like Auto-Tune or Photoshop, that is, cultural production augmented by AI? And what kind of art will be built with AI, made with AI? Will it be used to create the equivalent of art in a bathroom, as Rushkoff put it, or to make real art that impacts us and moves us as a society? And how will we as citizens know the role that AI played in cultural production? Will we care? Will we want something like GMO or organic labels for cultural production that leverages AI? Or will we demand AI-free spaces, as are starting to emerge, places online and in the physical world that are guaranteed to have not been touched by AI? And if we do know that art is driven by AI, created by AI in its entirety, will we even care?
And I'm very skeptical here. I worry that we won't. When I look at the culture currently being created by AI, I see a dulling. My Twitter feed is flooded with AI-generated crap, and I'm just not seeing the whimsical, delightful, powerful, and important cultural content from AI that we need as a society, that we demand of our art and culture.
I hope this changes. I really do. And I think part of how we view the evolution of AI in our society should be through the lens of what kind of culture it is building. I'm Taylor Owen, and thanks for watching.
How AI models are grabbing the world's data
In this episode of GZERO AI, Taylor Owen, host of the Machines Like Us podcast, examines the scale and implications of the historic data land grab happening in the AI sector. According to researcher Kate Crawford, AI is the largest superstructure ever built by humans, requiring immense human labor, natural resources, and staggering amounts of data. But how are tech giants like Meta and Google amassing this data?
So AI researcher Kate Crawford recently told me that she thinks AI is the largest superstructure that our species has ever built. This is because of the enormous amount of human labor that goes into building AI, the physical infrastructure that's needed for the compute of these AI systems, the natural resources, the energy and the water that go into this entire infrastructure. And of course, because of the insane amounts of data that are needed to build our frontier models. It's increasingly clear that we're in the middle of a historic land grab for these data, essentially for all of the data that has ever been created by humanity. So where is all this data coming from, and how are these companies getting access to it? Well, first, they're clearly scraping the public internet. It's safe to say that if anything you've done has been posted to the internet in a public way, it's inside the training data of at least one of these models.
But it's also probably the case that this scraping includes a large amount of copyrighted data, or data that isn't necessarily publicly available. They're probably also getting behind paywalls, as we'll find out soon enough as the New York Times lawsuit against OpenAI works its way through the system. And they're scraping each other's data. According to the New York Times, Google found out that OpenAI was scraping YouTube, but didn't reveal it to the public because they too were scraping all of YouTube themselves and didn't want this getting out. Second, all these companies are purchasing or licensing data. This includes news licensing agreements with publishers, data purchased from data brokers, and buying, or getting access to the data of, companies that hold rich data sets. Meta, for example, was considering buying the publisher Simon & Schuster just for access to their copyrighted books in order to train their LLM.
The companies that already have access to rich data sets are obviously at an advantage here, and in particular that's Meta and Google. Meta uses all the public data that's ever been put into their systems. And they've said that even if you aren't on their products, your data could be in their systems, either from data they've purchased outside of their products or, for example, because you've simply appeared in an Instagram photo, in which case your face is now being used to train their AI. Google has said that they use anything public that's on their platforms. So an unrestricted Google Doc, for example, will end up in their training data set. And they're also acquiring data in creative ways, to say the least. Meta has trained its large language model on a data set called Books3, which contains over 170,000 pirated and copyrighted books. So where does this all leave us, citizens and users of the internet?
Well, one thing's clear: we can't opt out of this data collection and data use. The opt-out tool Meta provides is hidden and complicated to use, and it requires you to provide proof that your data has been used to train Meta's AI systems before they'll consider removing it from their data sets. This is not the kind of user tool we should expect in democratic societies. So it's pretty clear that we're going to need to do three things. One, we're going to need to scale up our journalism. This is exactly what investigative journalism is for: holding powerful governments, actors, and corporations in our society to account. Journalism needs to dig deep into who's collecting what data, how these models are being trained, and how they're being built on data collected about our lives and our online experiences. Second, the lawsuits are going to need to work their way through the system, and the discovery that comes with them should be revealing. The New York Times lawsuit, to take just one of the many against OpenAI, will surely reveal whether paywalled journalism sits within the training data of these AI systems. And finally, there is absolutely no doubt that we need regulation to provide transparency and accountability for the data collection that is driving AI.
Meta recently announced, for example, that they were going to use data they'd collected on EU citizens in training their LLM. Immediately after the Irish Data Protection Commission pushed back, they announced they were going to pause this activity. This is why we need regulations. People who live in countries or jurisdictions that have strong data protection regulations and AI transparency regimes will ultimately be better protected. I'm Taylor Owen and thanks for watching.
Can AI help doctors act more human?
In this episode of GZERO AI, Taylor Owen, host of the Machines Like Us podcast, explores the rather surprising role artificial intelligence could play in the healthcare industry's efforts to reconnect with humanity. Doctors have become busier and are spending less time with their patients, but AI has been touted as a solution to enable them to foster more human connections.
So, if there's one sector of our economy and our society that could use some real transformation, it's our healthcare system. No matter where you live around the world, no matter what your healthcare financial model is, it is almost certainly letting you down in some way. And at the core of this is the relationship between a doctor and a patient.
As doctors have become busier and they've been tasked with more and more responsibilities, they are spending less time with us, their patients. In the US, the average doctor's appointment is only 7 minutes long. In South Korea, it's 2 minutes. And in the US, one of the consequences of this is that there are 12 million significant misdiagnoses each year, 800,000 of which result in death or disability.
Cardiologist, medical researcher, and author Eric Topol says, "Medicine has become inhuman." Paradoxically, though, Topol thinks AI could make it more human. In its most basic form, this means bringing AI into the patient-doctor conversation. This could mean AI transcribing our conversations and allowing a doctor to pay attention to us, rather than typing at a computer screen. It also opens up a range of tasks that doctors could be assisted with by AI: making our future appointments, following up on our treatment plans, or, perhaps more powerfully, helping with diagnosis itself. A doctor has a very limited view into our current condition, and AI might have far greater visibility. Just take radiology scans. Topol says an AI could add superhuman eyes to the doctor's capacity. When a radiology scan is ordered, the radiologist is typically told to look for one specific thing, but an AI could look for everything, and would have access to potentially rich and detailed views of our health history. Retina scans are another example. An AI can detect everything from diabetes, to kidney disease, to potentially Alzheimer's, just by looking in our eyes.
Another powerful potential here is in forecasting the future. The healthcare profession is not just about diagnosing our current conditions; it should also be about helping protect us from potential future ones. An AI can help process reams of data about our bodies, our health history, our family history, and our genetics, and potentially predict what we are most susceptible to in the future. So could we use AI 20 years before someone develops a condition such as Alzheimer's, and help build treatment, medical, and lifestyle adjustment plans accordingly? The potential really is there.
And one thing seems overwhelmingly clear: this is going to utterly transform what it means to be a doctor. Doctors won't have to memorize conditions from a textbook and repeat them by rote, as we currently train them to do. Instead, we might screen doctors for their human relationships, for their emotional intelligence, and for their empathy. As Topol says, this might ultimately mean a shift in the system from curing to healing.
I'm Taylor Owen, and thanks for watching.
How neurotech could enhance our brains using AI
So earlier this year, Elon Musk's other company, Neuralink, successfully installed a brain implant in Noland Arbaugh, a 29-year-old quadriplegic man. Last week, this story got a ton of attention when Neuralink announced that part of this implant had malfunctioned. But I think this news cycle, and all the hyperbole around it, misses the bigger picture.
Let me explain. So first, this Neuralink technology is really remarkable. It allowed Arbaugh to play chess with his mind, which he showed in his demo. But the potential beyond this really is vast. It's pretty early days for this technology, but there are signs that it might help us eliminate painful memories, repair lost bodily functions, and maybe even allow us to communicate with each other telepathically.
Second, this brain implant is part of a wider range of neurotech. A second category isn't implanted in your body; instead, it sits on or near your body and picks up your brain's electrical signals. These devices, which are being developed by Meta and Apple, among many others, are more akin to health-tracking devices, except they open up access to our thoughts.
The third point here is that this is an example of a technology adjacent to AI being turbocharged by recent advances in AI. One of the challenges of neurotech has been how to make sense of all of this data coming from our brains, and here is where AI becomes really powerful. We increasingly have the ability to give this data from our minds meaning. The result is that the technologies, and the corporations developing them, have access to the most private data we have: data about what we think. Which of course brings up the bigger point, which is that we're on the cusp of getting access to our brain data, and the potential for abuse here really is vast. And it's already happening.
Governments are already using neurotech to try to read their citizens' minds, and corporations are working on ways to advertise to potential customers in their dreams. And finally, I think this shows very clearly that we need to be thinking about regulation, and fast. Nita Farahany, who has recently written a book about the future of neurotechnology called “The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology,” thinks we have a year to figure out the governance of this tech. A year; it's moving that fast. While many in the AI debate are discussing the existential risks of AI, we might want to pay some attention to the technologies that are adjacent to AI and being empowered by it, as they likely present a far more immediate challenge.
I'm Taylor Owen, and thanks for watching.
Will AI further divide us or help build meaningful connections?
In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, takes stock of the ongoing debate over whether artificial intelligence will, like social media, further drive loneliness, only at breakneck speed, or instead help foster meaningful relationships. Owen offers insights into the latter, especially with tech companies like Replika recently demonstrating AI's potential to ease loneliness and even connect people with their lost loved ones.
So like a lot of people, I've been immersing myself in this debate about the current AI moment we're in, and I've been struck by a recurring theme: whether AI will further divide us or could actually bring us closer together.
Will it cause more loneliness? Or could it help address it? And the truth is, the more I look at this question, the more I see people I respect on both sides of this debate.
Some close observers of social media, like the Filipino journalist Maria Ressa, argue that AI suffers from the very same problems of algorithmic division and polarization that we saw in the era of social media, but on steroids. If social media took our collective attention and used it to keep us hooked in a public debate, she argues, AI will take our most intimate conversations and data and capitalize on our personal needs, our desires, and in some cases even our loneliness. And broadly, I would be predisposed to this side of the argument.
I've spent a lot of time studying the problems of social media and of previous technologies on society. But I've been particularly struck by people who argue the other side of this, that there's something inherently different about AI, that it should be seen as having a different relationship to ourselves and to our humanity. They argue that it's different not in degree from previous technologies, but in kind, that it's something fundamentally different. I initially recoiled from this suggestion because that's often what we hear about new technologies, until I spoke to Eugenia Kuyda.
Eugenia Kuyda is the CEO of a company called Replika, which lets users build AI best friends. But her work in this area began in a much more modest place. She built a chatbot based on a friend of hers named Roman who had died, and she describes how his close friends and even his family members were overwhelmed with emotion talking to it, and got real value from it, even from this crude, non-AI-driven chatbot.
I've been thinking a lot lately about what it means to lose somebody in your life. You don't just lose the person or their presence in your life; you lose so much more. You lose their wisdom, their advice, their lifetime of knowledge of you as a person and of themselves. And what if AI could begin, even if superficially at first, to offer some of that wisdom back?
Now, I know that the idea that tech, that more tech, could solve the problems caused by tech is a bit of a difficult proposition for many to stomach. But here's what I think we should be watching for as we bring these new tools into our lives. As we take AI tools online, into our workplaces, our social lives, and our families, how do they make us feel? Are we over-indexing on perceived productivity, or on the sales pitches of productivity, and undervaluing human connection? Either the human connection we're losing by using these tools, or perhaps the human connections we're gaining. And do these tools ultimately further divide us, or do they provide the means for greater and more meaningful relationships in our lives? I think these are really important questions as we barrel into this increasingly dynamic role of AI in our lives.
Last thing I want to mention here, I have a new podcast with the Globe and Mail newspaper called Machines Like Us, where I'll be discussing these issues and many more, such as the ones we've been discussing on this video series.
Thanks so much for watching. I'm Taylor Owen, and this is GZERO AI.