Feb 17th, 2023, 09:00 AM

ChatGPT: Too Good to Be True?

By Caitlin Daly
Image credit: Unsplash / Om siva Prakash
The moral dilemmas of a twenty-first-century student.

A situation every student is all too familiar with: it’s three o’clock in the morning, mere hours before your final term paper is due. No amount of bargaining with the university gods can save you now. You’re desperate for anything. In a sleep-deprived, hazy Google search, you stumble upon something that may be too good to be true: a minimalist website titled ‘ChatGPT.’ The tech jargon of its catchphrase, “optimizing language models for dialogue,” won’t stop you from typing the topic of your paper into the prompt box and watching in amazement as an AI program spits back a well-written response that could easily be submitted in place of your own work.

ChatGPT sounds great when all you want to do is submit your assignment on time, but does this new technology present a facade of perfection? 

Initially released in December 2022, ChatGPT is an artificial intelligence (AI) chatbot designed to produce human-like responses. Its uses are multifaceted: it can draft text, complete code and write sorting algorithms. ChatGPT is just one of many AI programs that use ‘deep learning’ techniques to generate human-like answers.

In high school and university settings, the presence of ChatGPT raises questions about potential infringements on academic integrity. Educators fear they will be unable to distinguish between students’ work and work generated by ChatGPT. However, it is unclear whether a student’s submission of material generated by an AI program counts as plagiarism, because the output is technically original work.

So, how do AUP students feel about adopting new technology into their academic experience?

As a computer science student at AUP, Sophia Laforest is excited about the vast potential that ChatGPT has to offer, not only as a learning-enhancement tool, but also for its ability to complete code and generate sorting algorithms.

“I think it is a wonderful tool. However, I totally see the potential for abuse in computer science or writing because it will do the work for you,” Laforest said. When asked about the potential harm of ChatGPT and other AI programs on academic integrity, Laforest said, “If a student cheats, they’re going to cheat. I don’t think this will make students cheat.”

Laforest says she uses ChatGPT as a study aid, generating study guides and practice questions for exams.

Students are excited about the possibilities that AI programs bring to the academic table, but how do professors feel? Surprisingly, they are not as apprehensive as we might assume.

Geoffrey Gilbert, professor of literature and director of the AUP Teaching and Learning Center, says that his initial reaction to ChatGPT was fascination and curiosity.

“For those interested in language, in cognition and in personhood, which is all of us pretty much, it is a fascinating thing,” said Gilbert. For him, it raises questions such as “How does this work?” and “What does this say about the way we work?”

Gilbert urges AUP students to treat programs like ChatGPT as a reminder to re-examine our perspectives on learning and education. His answer to educators’ fears that they will not be able to distinguish their students’ work from AI-generated text is to really get to know their students. Ideally, educators should listen to their students’ ideas, meet with them, and hear, understand and recognize their voices.

“It would be such a terrible loss for a student to cede their perception of the world [to a machine], their attempt to find what sort of voice they want to have in the world and debate their voice with other voices,” Gilbert said.

According to Gilbert, advancements in technology should remind educators that the goal is for students to be engaged in their learning: to see something, really think about it and want to do the work themselves, rather than have an outside source do it for them.

Gilbert also sees value in re-examining our concept of language in relation to AI programs. As he noted, “The excitement of working in language is that language itself is a social medium. I don’t get to be sure that I mean what I mean, except in relationship to actual other human beings who will understand me or not. Obviously, we do formalize that social [aspect] within universities in particular ways. We say these are the kinds of rules you should use and that sometimes denies the social relations of meaning. I like the fact that, in response to ChatGPT, we may need to get back to the questions of who we are speaking to, why we are speaking, what we’re doing with our language, how we’re growing in our language.”

Fellow Parisian university Sciences Po has recently banned the use of all AI technology by students. Although AUP has not yet taken that step, there are discussions about updating its academic policy to address artificial intelligence. Further conversations are being held among professors and graduate students to determine the future of ChatGPT at AUP.

ChatGPT has its benefits, but not in the way we expect. It should not become a tool to override the human experience because, although it can imitate much of how we express ourselves, AI will never be able to recreate the experience of human emotion.

Instead of being feared or banned outright, ChatGPT should be seen as an enhancement to the learning experience, as well as a reminder of why we want to learn and how we choose to communicate.