In November 2022, the world witnessed a monumental shift in its perception of Artificial Intelligence with OpenAI’s launch of the Chat Generative Pre-trained Transformer, commonly known as ChatGPT. This generative text chatbot unlocked a new era of AI with its capacity for user interaction, making it a lifeline for many students swamped by coursework. The dynamic platform allows users to pose follow-up questions, specify desired responses, and translate informal thoughts into words in a way unseen before.
However, the release of ChatGPT, while a seemingly sudden awakening of AI, was merely a notable development in the ongoing evolution of Artificial Intelligence. Dr. Joshua Eckroth, an Associate Professor of Computer Science at Stetson University with a focus on Artificial Intelligence, attests to AI’s subtle yet profound influence on our daily routines. As he aptly puts it, “When I hold my phone to my face and it recognizes me and unlocks my phone, that’s AI. Text actions in iMessage, GPS routing, lane assist in your car, or smart thermostats are all actually algorithms that came out of an AI.”
With the progression of Artificial Intelligence, these once astonishing innovations have seamlessly permeated our daily lives. Dr. Eckroth observes that “AI is embedded in all the things we do – it’s just interesting that now, because of this dramatic increase in performance and compatibility, everybody is talking about AI as if it hasn’t been there the whole time.” This insight prompts those working in higher education to wonder whether ChatGPT will follow a similar pattern of integration as other AI algorithms. Could it seamlessly embed itself into academia, despite its limitations and the looming concerns of academic integrity?
AI in the Classroom
ChatGPT’s introduction revealed the valuable role that Artificial Intelligence can play in classrooms. Dr. Chris Jimenez, a professor of global literature and representations of catastrophe, recognizes the evolving dynamics of academics, stating, “I’m having, as a professor, to design assignments that are cheating-proof.” To meet this challenge, Dr. Jimenez redesigned his writing assignments by shifting away from traditional means of student production. He first generates a 1500-word essay using ChatGPT, and then tasks his students to “pare it down to 1000 words, and also incorporate quotes and evidence.” This approach encourages students to transform pre-existing content into their own words and traces the differences between individual ideas and AI-generated text. Dr. Jimenez further emphasizes, “What I really want to know is how students can turn that into their own writing… and, from the student perspective, it should be about your ability to demonstrate how brilliant you are.”
Similarly, Dr. Melinda Hall, a Philosophy Professor with a specialty in bioethics, also attempts to find a balance between individual thought and AI generation. She suggests a collaborative approach, conveying to her students in a recent assignment that “their writing had to be their own, but that they should have a conversation with ChatGPT before writing to brainstorm.” Dr. Hall’s approach acknowledges the creative potential of AI in brainstorming sessions, while also encouraging students to lean more heavily on their own knowledge.
Dr. Eckroth has also shifted the grading style in his Computer Science classes, where AI-powered tools like Copilot, a generative code predictor, play an important role. He tells his students, “You can use anything you want – I even encourage using these kinds of tools.” Additionally, when asked about the role of these AI-powered tools in computer science education as a whole, Dr. Eckroth described the benefit of students familiarizing themselves with these resources. He states, “it’s going to make [students] more efficient on the job, and they’re probably going to be expected to know how to use it anyways.” He also acknowledges the level of proficiency that is required to operate these tools effectively, especially among code generators that “just see a narrow situation and dump out its best attempt at solving the problem within a narrow context.”
Because of AI’s limited contextual understanding, Dr. Eckroth suggests that the widespread use of these tools may not cause the reduction in learning that many fear, but rather accentuate a different type of learning that prioritizes information literacy. He explains the importance of filtering while using generative AI: “You don’t just take the suggestion and move forward, you have to critically read it,” meaning the user is forced to constantly respond and critique as the tools provide suggestions. This emphasis on critical reading and active engagement in the learning process may result in a shift away from the traditional production-based demonstration of learning, as we are already seeing from professors such as Dr. Jimenez and Dr. Hall. Instead of testing how well a student can reiterate information given to them, assignments that incorporate generative AI may now test how well a student can ask the right questions in their prompts, refine generated material, and think beyond the surface-level contextual capacity of AI.
The increased adoption of AI tools by some professors at Stetson reflects the changing landscape of education, where AI is now viewed as a partner in the learning process rather than a replacement. This shift recognizes the advantage of using technology in the learning process while preserving students’ critical faculties. With each innovative approach and experiment, educators like Dr. Eckroth, Dr. Hall and Dr. Jimenez are paving the way for a future where students are acquainted with modern tools, enabling them to think beyond the capacities of AI.
Concerns of AI
Although many support the adoption of Artificial Intelligence, significant concerns loom over what this could mean for the environment, academic integrity and the learning process. Dr. Hannah Markley, an assistant professor of transatlantic nineteenth-century literature, criticizes the threat that AI usage poses to the process of learning. They emphasize that “artificial intelligence learned from humans, but it did not learn to be human from humans. And more importantly, whatever learning is, it is not clear to me that’s what artificial intelligence does.” While Dr. Markley’s perspective reflects skepticism among some professors at Stetson, it also highlights the ongoing debate surrounding AI’s integration in education.
Dr. Markley’s commentary highlights a fundamental question about AI’s alignment with traditional notions of learning. They elaborate that AI operates under the assumption that ‘learning’ is something quantifiable and performative, while in reality, human learning defies such categorization. “The context window for understanding what learning is is a human life,” Dr. Markley adds, suggesting that AI, in bypassing the learning experience, falls drastically short of it.
Regarding writing assignments, Dr. Markley suggests that “Writing is not about proving you know something; writing is about figuring out what you think, and I cannot tell you what you think, and neither can AI.” Thus, while students may use AI to speed up and perfect the writing process, they are in turn cheating themselves out of the introspection that successful writing requires. Dr. Markley also expresses the importance of the imperfection of writing, stating, “I want to see the broken pieces, I want to see the places where you’re struggling to say something. And so, I think that is one of the things that gets lost in the rush to integrate these tools that supposedly make work easier and faster.”
Additionally, the environmental repercussions of AI usage mark another significant concern. Research conducted by Emma Strubell, Ananya Ganesh and Andrew McCallum at the University of Massachusetts Amherst reveals alarming figures: training just one large AI model can emit more than 626,000 pounds of carbon dioxide, equivalent to the greenhouse gas emissions of 62.6 gasoline-powered passenger vehicles driven for a full year. Dr. Hall speaks to this concern, stressing, “if we’re going to tell students to use [AI] all the time as part of their Stetson degree, we better have a sense of what that footprint is and publish it. It is important to remember that if we were to lean all the way in, we would be supporting a bunch of systems that are not in line with our values, so we should be sure to not lean in too hard.” Dr. Hall advocates for the utilization of AI while remaining aware of its limitations, acknowledging that “Our choices always carry consequences, and we must be mindful in our decision-making.”
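The vehicle equivalence above is a simple ratio, and it can be sanity-checked with a few lines of arithmetic. The per-vehicle figure below is an assumption chosen to match the article's comparison (roughly 10,000 pounds of CO2 per passenger car per year, close to the EPA's estimate of about 4.6 metric tons); it is not a number reported in the study itself.

```python
# Sanity check of the cited equivalence: ~626,000 lb of CO2 from training one
# large model, divided by an assumed annual per-vehicle emission figure.
TRAINING_EMISSIONS_LB = 626_000        # CO2 from training one large AI model
CAR_EMISSIONS_LB_PER_YEAR = 10_000     # assumed annual CO2 per passenger car

equivalent_cars = TRAINING_EMISSIONS_LB / CAR_EMISSIONS_LB_PER_YEAR
print(f"Equivalent to about {equivalent_cars:.1f} cars driven for one year")
# prints "Equivalent to about 62.6 cars driven for one year"
```

Under these assumptions the ratio comes out to exactly 62.6, matching the figure quoted in the article.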
Embracing the Unknown while Remaining Critical
In the evolving landscape of higher education, institutions stand at a crossroads, facing the rush to embrace new technologies while also preserving long-honored traditions of learning. When asked about the path forward for institutions of higher learning, Dr. Hall emphasizes an approach rooted in trust of students, stating, “I definitely do not think outlawing AI and policing our students is the way to go.” Similarly, Dr. Eckroth’s hope for Stetson and other educational institutions is a progressive approach to AI integration. He states, “I just hope Stetson can be perceived by the other schools as forward thinking, innovative, and helpful. This is what students are looking for, and this is what is going to set them up for the future they are entering into, and not just teach them things from the past.” Dr. Jimenez, guided by his position as a humanist, emphasizes the essential role of humans in the next era of AI. He states, “I believe there has to be a role for humans in this future that is to come, but one of the challenges is that AI is going to make us up our game, or else we will fall behind.” Therefore, the rise of AI does not threaten human achievement so much as raise the bar. Humans must elevate their efforts and intellectual capabilities to work in tandem with the advancements of AI, or else they risk falling behind ever more capable language models.
Higher learning institutions have long been places for exploration of the unknown and finding ways to tackle insurmountable challenges. Dr. Jimenez applies this to the conversation of AI: “AI could be used to either exacerbate our fears, or encourage us with a new kind of bravery to face things that we have never faced before.” The journey forward with artificial intelligence, while plagued with many challenges, is also brimming with opportunities for growth, creativity, knowledge and unforeseen accomplishments. As Dr. Jimenez inspires, “Institutions of higher learning are all about bravery. We are about braving the unknown. We’re about braving our own ignorance and surpassing it.”