
ChatGPT on campus: How are universities handling generative AI?

In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, discusses how the emergence of ChatGPT and other generative AI tools has thrown a new dynamic into his teaching practice, and shares his insights into how universities have attempted to handle the new phenomenon.


What does education look like in a world with generative AI?

The bottom line here is that we, students, universities, faculty, are simply in uncharted waters. I'm about to teach my digital policy class for the first time since the emergence of generative AI, and I'm really unsure how I should handle it. But here are a few observations.

First, universities are all over the place on what to do. Policies range from outright bans, to updated citation requirements, to broad and largely unhelpful directives, to simply no policies at all. It's fair to say that a consensus has yet to emerge.

The second challenge is that AI detection software, like the plagiarism software we've used before it, is massively problematic. While there are some tools out there, they all suffer from what are, in my view, disqualifying flaws. These tools have a tendency to generate false positives, and that really matters when we're talking about academic integrity and, ultimately, plagiarism. What's more, research shows that the use of these tools leads to an arms race between faculty trying to catch students and students trying to deceive them. The other problem, ironically, is that these tools may be infringing on students' copyright. When student essays are uploaded into this detection software, the writing is stored and used for future detection. We've seen the same story with earlier generations of plagiarism tools, and I personally want nothing to do with it.

Third, I think banning is not only impossible, but pedagogically irresponsible. The reality is that students, like all of us, have access to these tools and are going to use them. So, we need to move away from this idea that students are the problem and start focusing on how educators can improve their teaching instead.

However, I do worry that a key cognitive skill set we develop at university, reading and processing information and new ideas and developing our own on top of them, is being lost. We need to ensure that our teaching preserves this.

Ultimately, this is going to be about developing new norms in old institutions, and we know that that is hard. We need new norms around trust in academic work, new methods of evaluating our own work and that of our students, new skill sets to teach and some old ones to abandon, and new norms for referencing and acknowledging work. And yes, this means new norms around plagiarism. Plagiarism has been in the news a lot lately, but the status quo in an age of generative AI is simply untenable.

Perhaps I'm a Luddite on this, but I cannot let go of the idea, entrenched in me, that regardless of how a tool is used for research and developing ideas, the final scholarly product should ultimately be written by a person. So this term, I'm going to try a bunch of things and see what works. I'll let you know what I learn. I'm Taylor Owen, and thanks for watching.
