How neurotech could enhance our brains using AI

In this episode of GZERO AI, Taylor Owen, host of the Machines Like Us podcast, explores the immense potential of neurotechnology. On the heels of Elon Musk's brain implant company Neuralink making headlines again, he examines how this technology, now turbocharged by artificial intelligence, could transform our lives. It is not without potential pitfalls, however, which call for regulatory discussion of its use.
Will AI further divide us or help build meaningful connections?

In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, takes stock of the ongoing debate over whether artificial intelligence will, like social media, drive loneliness further and at breakneck speed, or instead help foster meaningful relationships. Owen offers insights into the latter possibility, pointing to tech companies like Replika that have recently demonstrated AI's potential to ease loneliness and even reconnect people with lost loved ones.

AI and war: Governments must widen safety dialogue to include military use

Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode, Marietje insists that governments must prioritize establishing guardrails for the deployment of artificial intelligence in military operations. Efforts are already underway to ensure that AI is safe to use, but, she argues, there is an urgent need to widen that discussion to include its use in warfare, an area where lives are at stake.
AI policy formation must include voices from the global South

Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode, she explains the need to incorporate diverse and inclusive perspectives in formulating policies and regulations for artificial intelligence. Narrowing the focus primarily to the three major policy blocs—China, the US, and Europe—would overlook crucial opportunities to address risks and concerns unique to the global South.

This is GZERO AI from Stanford's campus, where we just hosted a two-day conference on AI policy around the world. And when I say around the world, I mean truly around the world, including many voices from the Global South, from multilateral organizations like the OECD and the UN, and from the big leading AI policy blocs like the EU, the UK, the US and Japan that all have AI offices for oversight.

Israel's Lavender: What could go wrong when AI is used in military operations?

In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, examines the Israel Defense Forces' use of an AI system called Lavender to target Hamas operatives. While the system reportedly suffers from hallucination issues familiar from AI tools like ChatGPT, the cost of errors on the battlefield is incomparably severe.
OpenAI is risk-testing Voice Engine, but the risks are clear

Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode, she says that while OpenAI is testing its new Voice Engine model to identify its risks, we have already seen the clear dangers of voice impersonation technology. What we need, she argues, is more independent assessment of these new technologies, one that applies equally to companies that want to tread carefully and those that want to race ahead in developing and deploying them.
Social media's AI wave: Are we in for a “deepfakification” of the entire internet?

In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, looks into the phenomenon he terms the "deepfakification" of social media. He traces the evolution of our social feeds, which began as platforms primarily for sharing updates with friends and are now inundated with content generated by artificial intelligence.

Should we regulate generative AI with open or closed models?

Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode, fresh from a workshop hosted by the Institute for Advanced Study in Princeton, where discussion centered on whether generative AI models should be open to the public or restricted to a select few, she shares insights into the potential workings, effectiveness, and drawbacks of each approach.

