If AI Misconduct is Suspected
Suspected misuse of generative AI is a sensitive and uncertain area. While academic integrity must be upheld, students are still learning the boundaries of appropriate AI use, and many may not yet recognize a given use as misconduct. The best response begins long before an issue arises, with proactive steps early in the semester and a thoughtful, student-centered approach if concerns emerge.
Early in the Semester: Set the Stage

- Get to know your students: their stories, their writing style, and how they approach work in your class. Familiarity with students’ voices makes it easier to notice when a submission seems out of character.
- Collect brief writing samples through low-stakes assignments, either in class or online. These early pieces help establish a baseline, and low-stakes work is also less likely to be AI-generated.
- Clearly define your AI expectations. Share your policy on AI tool use in your syllabus and assignment guidelines. Transparency helps students understand boundaries and reduces confusion.
- Retain copies of student work from the start of the course to use for comparison if concerns arise.
When You Suspect Misuse
- Don’t rely solely on AI detection scores. These tools have high false positive rates and are not reliable as standalone evidence. Some detection tools also show bias against multilingual or non-native English speakers. A score may raise a flag, but it is not proof.
- Compare with prior work. Look for differences in vocabulary, tone, sentence structure, or depth of thinking. If the new submission feels overly generic or inconsistent, note specific examples.
Meeting with the Student
- Request a conversation. Frame the meeting as a check-in rather than an accusation. Approach with curiosity and assume good intent.
- Begin with questions. Instead of saying their work is suspect, ask for clarification about “muddy points” in their submission. You might say, “I found this section intriguing—can you tell me more about what you meant here?” Or, “This is a complex idea. Can you walk me through how you arrived at it?”
- Assess understanding. If the student can explain their ideas clearly and with confidence, their work may well be genuine. Proceed with good judgment.
- If concerns persist, share your observations. If the student struggles to explain the work or seems unfamiliar with its content, raise your concerns gently. Show examples of how this writing differs from earlier work. Invite the student to reflect on why this might be the case.
- Use AI detection scores sparingly and cautiously. They may help support your case, but they should not be your first or only piece of evidence.
- If the student admits using AI improperly, you are within your rights to apply academic consequences, as appropriate under your institutional policy. However, consider approaching the situation with grace. Many students may not fully understand that their use of AI crossed a line. This can be a teachable moment: you might offer an opportunity to revise the assignment or resubmit work under supervision, while still documenting the incident as required.
- If the student denies misconduct but you have strong reason to believe it occurred, explain your concerns and your regret about the consequences that must follow. Document the evidence thoroughly in case of a grade appeal.
Notify Your Chair and Other Relevant Administrators

Notify your department chair and other appropriate administrators. Keeping others informed ensures consistency and support in handling the situation.
References and Resources
- ChatGPT. OpenAI, 22 July 2025, chat.openai.com/chat. Used to assist with revising, synthesizing sources, and editing this page.
- “AI Detectors: An Ethical Minefield.”
- Coley, Michael. “Guidance on AI Detection and Why We’re Disabling Turnitin’s AI Detector.” Vanderbilt University, 16 August 2023.
Have other suggestions? Email us at teaching@etsu.edu so we can consider including them!