Since ChatGPT came on the scene, generative AI has been a subject of controversy. Students across all grade levels have used it to write essays, and teachers have responded with concern and frustration: concern that students will lose writing skills, and frustration over having to check papers for AI-generated content. Still, some students and teachers say AI can play a positive role in education, helping students come up with ideas or outlines for their writing.
“I personally do not use AI in my own schoolwork,” senior and English major Emma Bare said. “I know maybe one or two students within the department who have discussed it and how they’ve toyed around with it, but I’m not sure how useful they’ve found it in their writing or other schoolwork.”
Other students say they do get some use out of generative AI.
“While I will never submit any work done by AI, things it helps me with include answering questions about how a certain type of code works, like what does an error mean,” sophomore and computer science major Aiden Crossler said. “When I am writing a professional email or something like a cover letter, I ask ChatGPT for advice on what I should include and use that as a starting block.”
Evidently, there is variety in whether and how students use generative AI. Some have sworn it off completely, while others find it useful at times. There is also variety in how students feel about it.
“I believe that AI is a great tool that needs to be learned to be used to help with learning, not replace it,” Crossler said. “I know of others who use it as well to help them with their work, and they are able to get more done and get a better understanding faster than those who don’t use it.”
So, some students view generative AI and its use positively. Others, however, are a bit more skeptical.
“I’m sure there are ways to use it that can be beneficial, but I am skeptical of how to keep its use contained to those beneficial areas without it leaking into aspects of education where it can begin to cause harm to students,” Bare said. “Once it is used to generate ideas, rather than as a tool for students to organize their own ideas, then I believe we have run into an issue of it overstepping into areas where academic integrity is called into question.”
This calls to mind the university’s policy on academic integrity, which mentions generative AI.
“Unauthorized use of generative AI tools during the preparation and/or submission of academic work is prohibited and is considered a violation of this policy,” the policy states. “Faculty should set clear guidelines in cases where the use of generative AI is encouraged or assigned and students should seek clarification if they are uncertain as to the scope of allowable use.”
Many teachers cite this policy, alongside other Linfield policies, in their syllabi. One professor took a unique approach to his classroom’s AI policy.
“In my INQS class, I asked the students to write a persuasive essay on the policies professors should adopt in the face of generative AI tools, an idea I got from Professor David Sumner in the English department,” said Environmental Studies professor Kurt Ingleman. “The twist was that I left my own policy open-ended, allowing the students to help shape our class policy.”
The semester is still in its early months, so it remains to be seen how this will play out.
“While the responses varied, no students argued for unrestricted use and many articulated wariness about the effects of AI on their learning,” Ingleman said.
Is generative AI an innovative tool for the future? Is it harmful to students’ learning and integrity? Questions like these do not always have black-and-white answers, and only time will tell. But once a new technology gains popularity, it tends to stick around, so whether you are a fan of generative AI or not, it is likely here to stay. The important thing will be keeping its use from crossing into breaches of academic integrity.