ChatGPT on campus: How are universities handling generative AI?


In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, discusses how the emergence of ChatGPT and other generative AI tools has thrown a new dynamic into his teaching practice, and shares his insights into how universities have attempted to handle the new phenomenon.

What does education look like in a world with generative AI?

The bottom line here is that we, students, universities, faculty, are simply in uncharted waters. I'm starting to teach my digital policy class for the first time since the emergence of generative AI. I'm really unsure about how I should be handling this, but here are a few observations.

First, universities are all over the place on what to do. Policies range from outright bans, to updated citation requirements, to broad and largely unhelpful directives, to simply no policies at all. It's fair to say that a consensus has yet to emerge.

The second challenge is that AI detection software, like the plagiarism software that came before it, is massively problematic. While there are some tools out there, they all suffer from several, in my view, disqualifying flaws. These tools have a tendency to generate false positives, and that really matters when we're talking about academic integrity and ultimately plagiarism. What's more, research shows that the use of these tools leads to an arms race between faculty trying to catch students and students trying to deceive them. The other problem, ironically, is that these tools may be infringing on students' copyright. When student essays are uploaded into these detection tools, the writing is stored and used for future detection. We've seen this same story with earlier generations of plagiarism tools, and I personally want nothing to do with it.

Third, I think banning is not only impossible, but pedagogically irresponsible. The reality is that students, like all of us, have access to these tools and are going to use them. So, we need to move away from this idea that students are the problem and start focusing on how educators can improve their teaching instead.

However, I do worry that a key cognitive skillset we develop at universities, reading and processing information and new ideas and developing our own on top of them, is being lost. We need to ensure that our teaching preserves this.

Ultimately, this is going to be about developing new norms in old institutions, and we know that that is hard. We need new norms around trust in academic work, new methods of evaluating our own work and that of our students, teaching new skill sets and abandoning some old ones, and we need new norms for referencing and for acknowledging work. And yes, this means new norms around plagiarism. Plagiarism has been in the news a lot lately, but the status quo in an age of generative AI is simply untenable.

Perhaps I'm a Luddite on this, but I cannot let go of the idea, entrenched in me, that regardless of how a tool was used for research and developing ideas, the final scholarly product should ultimately be written by people. So, this term, I'm going to try a bunch of things and see what works. I'll let you know what I learn. I'm Taylor Owen, and thanks for watching.
