
Implications of AI and Ethics in Academic Settings

By: Timothy Quinn

Chief Academic Officer and Dean of Faculty

Since the release of ChatGPT just under a year ago, educators everywhere have been in a state of panic, primarily because of concerns over how easy the tool makes it for students to cheat or plagiarize. However, regardless of the tools at their disposal, we know that students cheat when they are under stress, do not have the support needed to do their best work, or feel their assignments are irrelevant. Thus, while Generative Artificial Intelligence (AI) certainly makes cheating easier to do and harder to detect and prove, the same solutions still apply.

Writing can be done in class, and it is not the only thing we can assign. Students can complete projects, design experiments, engage in debates or dialogue, produce artifacts, and give speeches or performances. All of these often require the same thinking skills we are assessing when we assign an extended essay. In short, we can ask students to demonstrate evidence of their learning in ways that are directly observable.  And when we do ask students to write outside of class, we should assign them topics that matter to them and that they have some expertise in and knowledge about.  In general, we should endeavor to make assessments engaging and relevant; dare I say we should even try to make them fun.

The other thing we teachers can do is get to know our students: know how they think, how they write, and what they care about. If we do that, we will not only be better able to detect the presence of AI in their writing, but we will also have formed relationships that we can leverage to get students to produce their own work. Don't underestimate the power of letting students know that you genuinely care about what they think. That is a tremendously affirming and motivating message for an adolescent to receive from a teacher.

Some students may still cheat, but appealing to the better angels of their nature will go a lot further than assuming they’re all cheaters. I highly recommend the approach outlined by a mentor of mine, Dr. Jonathan Zimmerman, who explained his classroom advice in a recent piece in the Washington Post titled “Here’s My AI Policy for Students: I Don’t Have One”: 

You might even ace your classes. But you will never know what you really believe. You will become the kind of person who is adept at spouting memes and clichés. … If you ask an AI bot to do it instead, you are cheating yourself. You are missing out on the chance to decide what kind of life is worth living and how you are going to live it.

This AI thing is your call, and it’s your life. I can’t live it for you. Maybe, as the futurists insist, AI will eventually take over everything we do. It will drive our cars, design our buildings, cure our illnesses. It will make beautiful art and music. It will end world hunger and poverty. Yet there’s one thing it will never do: make you into a fully autonomous human being, with your own ideas, feelings and goals. I want that to be your ambition. And if that’s what you want, too, then avoid the bots.


Despite all of this concern about cheating, when I think about the potential impact of AI on our entire civilization, on the future of human existence, focusing on whether there is an uptick in plagiarism does not seem like the most urgent concern. What we should focus on instead is this: How do we teach students about, and prepare them to manage, the philosophical and ethical implications of the impact AI will have on the world? Students need to consider when to use AI, why to use it, and, perhaps more significantly, when and why not to use it. Making these determinations requires them to wrestle with questions of what it even means to use this technology, and how doing so changes what it means to be human and our place within the world. Most importantly, people will need to weigh the potential benefits, which I acknowledge are enormous, against the drawbacks, both concrete, in terms of harm that could be done, and philosophical, in terms of the relegation and devaluation of the human experience.

Humans tend to think that technological innovation is always both good and inevitable. Neither is necessarily the case. Some technologies have had an overall negative impact on the world, and as for inevitability, we still have the opportunity, both individually and collectively (at least for now), to choose which developments to pursue and how to regulate them. Thus, students need to be educated and empowered to make responsible, ethical choices, both in their individual lives and as the future leaders they will become.

Schools embraced the internet and all that it has brought, often without realizing that students were not prepared to navigate this digital world, and so now, years too late, we are beginning to teach media literacy for the internet age and to put restrictions on social media. We should embrace and use AI when it is appropriate, but let's not wait too long to help students learn about its risks as well as its potential rewards. If we do wait, we will be the ones cheating – cheating students out of the ability to control their own futures.

For those interested in exploring this topic further, check out the following two episodes of my favorite podcast, On Being:
