This is one of those conversations that everyone hopes someone else will handle.
AI is now firmly part of the academic landscape, and our students are experimenting with it in all kinds of ways. Some of those ways are thoughtful and productive. Others… well, let’s just say they’re more “shortcut” than “scholarship.” When that happens, our job isn’t to play detective or FBI agent. It’s to guide students back toward integrity, learning, and actual growth. That is, after all, what they are here for in the long run.
When you suspect AI misuse, it’s natural to feel a little unsure about the next steps. These moments call for calm, clarity, and a steady hand, not an interrogation lamp or a lie detector test. The framework outlined below offers a thoughtful path forward that supports your students, your course, and your sanity.
Start with Humanity
When academic integrity comes into question, start from empathy. Most students aren’t out to deceive you. They’re far more likely to be tripped up by stress, confusion, poor planning, or the classic “I thought this was allowed.” When you approach them as someone who needs guidance rather than discipline, the mood of this tough conversation shifts entirely. A calm, respectful exchange can take a shaky choice and turn it into a genuine moment of learning.
Be Prepared Before You Reach Out
Before you meet, take a breath and review the work carefully. Note anything that stands out: unusual phrasing, a sudden shift in voice, or a level of polish that makes you wonder if the student quietly became a PhD candidate overnight. These details will help you anchor the conversation with the student.
A quick note from the institutional side of things: UD does not endorse using AI-detection tools as evidence of misconduct. They’re inconsistent, prone to false positives, and simply not reliable enough to support an accusation. Think of them as the academic equivalent of a long-range weather forecast: they may occasionally offer a hint, but you wouldn’t stake a conversation on one, let alone an accusation. Using them as proof puts far more than the meeting at risk. It jeopardizes the student’s academic record and can undermine the instructor’s own credibility in the process. An allegation of misconduct is a serious matter, and it deserves better evidence than any AI-detection tool can provide. Besides, your goal isn’t to build a case against your student. It’s to respect the student enough to have a fair, informed, and genuinely productive discussion.
Be Direct but Gentle
When you talk with the student, keep the tone clear, humane, and low-pressure. A few minutes is usually enough. You might start by explaining that their submission feels different from their past work and that you’d like to hear about their process. Ask questions that invite honesty instead of defensiveness.
- “How did you approach the assignment?”
- “What part was hardest?”
- “How did you gather your sources?”
- “Tell me about [insert suspiciously brilliant and flawlessly written analysis here].”
These open questions often give students space to admit uncertainty, reveal any misunderstandings, or recognize for themselves where things went sideways.
In my experience, students will usually fess up right around this point. I’ve been reminded on more than one occasion that most of our students are juggling a lot of stressors and are doing the best they can to keep their heads above water. They don’t want to get in trouble, and they rarely intend to deceive. This is where kindness can go a long way.
Be Patient and Listen Before You Judge
Give the student time to explain. You may hear a confession, a misunderstanding, or a heartfelt “I honestly didn’t know.” If the student acknowledges inappropriate AI use, thank them for their honesty and explain the next steps calmly. Maybe it leads to a reduced grade, a revision opportunity, or a formal report. At UD you have a lot of discretion in how you handle this moment.
If the student denies using AI and you don’t have concrete evidence, pressing harder isn’t helpful. But you can return to the student’s submission and grade it with full rigor. AI-generated writing may look polished, but it often falls short where it matters: depth, relevance, accuracy, and alignment with the assignment instructions. Evaluate the submission strictly according to your rubric and the expectations laid out for the assignment.
Take a moment to consider whether your assignments as written are still the best way to measure the learning you want. If it’s too easy for AI to produce something passable, it may be time to revise it. The Center for Online Learning is available to help faculty redesign assessments so they better support integrity and genuine student learning.
Be Consistent with Policy and Your Own Discretion
You should have a clearly defined AI policy already outlined in your syllabus. (If you don’t, consider this your sign to do that today.) Refer your student back to the syllabus (students love it when we do that). This will help anchor the conversation in shared expectations that have already been reviewed in class rather than personal judgment. Reinforce that your goal is not to punish but to teach: understanding responsible AI use is part of preparing for real professional contexts. If a penalty is required, apply it evenly and transparently. Students notice fairness, and they remember it.
Be clear. Be fair. Be consistent.
Be Proactive and Strengthen Prevention
The most effective way to reduce AI misuse is through solid instructional design. Clear expectations go a long way. So do assignments that emphasize process, reflection, and personal insight. For additional assistance, check out the following blogs for ideas on integrating or inhibiting AI use in your class.
Talk openly with students about which types of AI use are appropriate and which are not, and why those boundaries matter. Consider practicing ethical AI use in tough situations in class using this sample case study. (Feel free to adapt it to fit your needs and subject matter. But please make a copy of the document first.) If students understand the purpose behind a rule or class policy, they’re far more likely to follow it. And remind them: AI is a tool to support learning, not a substitute for it.
Final Thought
Suspected AI misuse is never the conversation anyone looks forward to, but it doesn’t have to turn into a standoff. With preparation, patience, and a little kindness, these moments can reinforce academic integrity, strengthen trust, and model the thoughtful, responsible engagement we hope students carry with them long after they leave our courses.
— Written by Sarah Tangeman, Instructional Designer at the University of Dayton