What concerns involving AI is the Office of Student Conduct seeing?
On the academic integrity side, there are a couple of issues. One, it is inherently a violation of the University honor code to use AI without permission. Two, if a student uses AI and cites it, just as they would an encyclopedia, that is not a violation; but none of the students we've seen using AI are doing that. In part, that's probably because most of them used it to give themselves a starting point, but some are using it because they want it to do the work for them.
Can you describe the kinds of problematic AI usage being reviewed?
At another institution, a student was using Grammarly, which people have long used for basic editing and proofreading, but Grammarly now also has an AI function. The student copied and pasted their original work into the Grammarly platform, and the suggested edits included AI-generated content. When the professor ran the student's paper through an AI detector, it indicated the use of AI-generated content. The student insisted they were not using the Grammarly AI function, but the detector found that some kind of AI tool had generated content, and the student didn't say what other tool might have been involved.
The issues range from using AI to write an essay to using it to solve some types of math equations. As for math, I think math instructors are probably the least worried faculty in this matter, because math teaching has been dealing with this for years. So many tools for doing math have been around for so long that math instructors have already adapted far more than those in other subject areas. To them, this is really nothing new. The availability of tools that can simply write papers for students is much more recent.
What tools are available for detecting the use of AI in student work?
We used Turnitin on campus last fall semester because it is our plagiarism checker, and it offered a free AI detection tool. This semester we deactivated it, in part because they started charging for the AI detection. Plus, there are some concerns from the Walker Center (for Teaching and Learning) and from other institutions about its reliability.
GPTZero and Copyleaks are the two we're recommending. They work very similarly to what faculty were used to with Turnitin, and both are free at a basic level. However, Turnitin was built into Canvas, so an AI review of submitted work was automatic as long as faculty had that feature activated; GPTZero and Copyleaks aren't incorporated into Canvas. Both will still work, but using them just won't be as convenient.
Are these tools perfect? No, because nothing's perfect. AI can hide what it's doing, and there can be false detections. Some tools claim fewer false detections than others. It's a still-evolving area. I've worked with the Walker Center to make sure we tell faculty these are the tools we recommend, even though they're not built into Canvas.
How do faculty adjust to the AI reality?
One of the things we're actively working on with the Walker Center is telling faculty that AI is just the way of the world now, and we must adapt. Most faculty aren't there yet; and even when people are, I think they're still going to want to use tools, just as they use plagiarism detection, to make sure that a student's work is somewhat original.
If a professor has a test bank, for example, there has to be more than one set of answer options or more than one set of questions. Rotating test questions is the most effective approach. If you have only one test prepared and you repeat it every semester, then even pre-technology, you risked it being available through some bank of tests students might have kept.
It's the same with papers, even though we've had faculty members say, 'Well, I've been an English instructor, a communications instructor for years. I know I'd be able to tell if a student's work wasn't written by the student.' The reality is, the tools are smart enough to make it seem like a person did the writing. Now, again, if you have multiple pieces of the student's work, then, unless they became an amazing or much different writer overnight, you may be able to tell there's a level of difference, but you have to be looking for that. Ultimately, students using AI to do their work for them is no different than age-old contract cheating, where a student had someone write a paper or do their research for them.
We’ve told faculty, ‘If you want to allow AI use completely in your class, that’s fine. You just have to tell your students it’s OK.’
What’s ahead for your work in this area?
AI is where our society is continuing to go. We will never get away from it. We just have to keep teaching folks how to work better with it and adapt.
One of the things we are going to do is update the student ethics and integrity course we created a few years ago. Students have to take it just like they take the Title IX course. We're one of a very few schools (a couple of others have started since we created ours) that have created an ethics and integrity course for all students. That's definitely an area that we want to continue focusing on, and one where we can be a leader.
The use of AI is one of the things we'll focus on in updating the course, because it's something our students may not have considered or talked about before, and it's probably one of the biggest things they will face, now and going forward.
–Brett Fuchs is UTC associate dean of students and director of the Office of Student Conduct and of Student Outreach & Support; chair of the CARE team and deputy Title IX coordinator.