GZERO AI Video
GZERO AI is our weekly video series intended to help you keep up and make sense of the latest news on the AI revolution.
In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, discusses how the emergence of ChatGPT and other generative AI tools has thrown a new dynamic into his teaching practice, and shares his insights into how colleges have attempted to handle the new phenomenon.
What does education look like in a world with generative AI?
The bottom line here is that we, students, universities, faculty, are simply in uncharted waters. I'm starting to teach my digital policy class for the first time since the emergence of generative AI, and I'm really unsure about how I should be handling this. But here are a few observations.
First, universities are all over the place on what to do. Policies range from outright bans, to updated citation requirements, to broad and largely unhelpful directives, to simply no policies at all. It's fair to say that a consensus has yet to emerge.
The second challenge is that AI detection software, like the plagiarism software we've used before it, is massively problematic. While there are some tools out there, they all suffer from several, in my view, disqualifying flaws. These tools have a tendency to generate false positives, and this really matters when we're talking about academic integrity and ultimately plagiarism. What's more, research shows us that the use of these tools leads to an arms race between faculty trying to catch students and students trying to deceive them. The other problem, ironically, is that these tools may be infringing on students' copyright. When student essays are uploaded into this detection software, their writing is stored and used for future detection. We've seen this same story with earlier generations of plagiarism tools, and I personally want nothing to do with it.
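To see why false positives matter so much here, a quick base-rate calculation helps. The numbers below are purely illustrative assumptions, not measurements of any real detector: even a detector with a seemingly low false-positive rate will wrongly accuse a meaningful share of honest students when most essays are human-written.

```python
# Illustrative base-rate sketch (all figures are hypothetical assumptions,
# not claims about any real AI-detection product).

def flagged_breakdown(n_essays, ai_share, tpr, fpr):
    """Return (true flags, false flags) for a cohort of essays.

    ai_share: fraction of essays actually AI-written
    tpr: detector's true-positive rate (AI essays correctly flagged)
    fpr: detector's false-positive rate (human essays wrongly flagged)
    """
    ai_essays = n_essays * ai_share
    human_essays = n_essays - ai_essays
    true_flags = ai_essays * tpr      # AI-written essays correctly caught
    false_flags = human_essays * fpr  # honest students wrongly accused
    return true_flags, false_flags

# Hypothetical cohort: 1,000 essays, 5% AI-written, detector with a
# 90% detection rate and a "low" 1% false-positive rate.
true_flags, false_flags = flagged_breakdown(1000, 0.05, 0.90, 0.01)
print(f"{true_flags:.0f} true flags, {false_flags:.1f} false accusations")
share_wrong = false_flags / (true_flags + false_flags)
print(f"{share_wrong:.0%} of all flags are false accusations")
```

Under these assumed numbers, roughly one in six flagged essays belongs to a student who did nothing wrong, which is why a tool like this is so hard to justify in an academic-integrity process.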
Third, I think banning is not only impossible, but pedagogically irresponsible. The reality is that students, like all of us, have access to these tools and are going to use them. So, we need to move away from this idea that students are the problem and start focusing on how educators can improve their teaching instead.
However, I do worry that we are losing a key cognitive skill set that universities develop: reading and processing information and new ideas, and building our own on top of them. We need to ensure that our teaching preserves it.
Ultimately, this is going to be about developing new norms in old institutions, and we know that that is hard. We need new norms around trust in academic work, new methods of evaluating our own work and that of our students, teaching new skill sets and abandoning some old ones, and we need new norms for referencing and for acknowledging work. And yes, this means new norms around plagiarism. Plagiarism has been in the news a lot lately, but the status quo in an age of generative AI is simply untenable.
Perhaps I'm a Luddite on this, but I cannot let go of the idea, entrenched in me, that regardless of how a tool was used for research and developing ideas, final scholarly products should ultimately be written by people. So, this term, I'm going to try a bunch of things and see what works. I'll let you know what I learn. I'm Taylor Owen and thanks for watching.