Generative AI and Academic Misconduct
Academic Misconduct
You may be able to prevent some cases of academic misconduct related to ChatGPT and other automated writing tools by starting with the following recommendations:
- Including a policy regarding ChatGPT and other generative AI technology on your course syllabi that reflects how you are comfortable with your students utilizing this technology when working on assignments.
- Addressing your expectations and boundaries verbally in class at the beginning of the semester and before any assignment where a student might think to utilize AI.
- Including expectations and boundaries regarding this technology in assignment instructions.
- Referencing your department’s expectations and boundaries regarding this technology (if a formal statement from your department has been developed and released).
If you suspect that a student has submitted text generated by AI, consider first having a conversation with that student. You might ask questions like:
- Question 1: "This writing seems a lot different than the writing you’ve submitted for other assessments. Could you tell me about your process for producing this work?" [It may be that they utilized our Writing Center, worked with a peer editor, etc. Or they may have engaged in academic dishonesty knowingly or unknowingly.]
- Question 2: "I’m concerned that you didn’t produce this work entirely on your own. Is that the case?" [Sometimes a direct question asked with genuine curiosity can spark a conversation that will identify a learning gap or stress point and create a learning opportunity.]
- Question 3: "I think this essay, or large parts of this essay, were produced using ChatGPT. I think that because _____ [list your reasons]. Is this the case? And if so, how can I help you prepare to re-write this work and/or prepare better for future assignments?" [Sometimes letting the student know that you’re willing to work with them on producing acceptable writing will open the doors for more honest conversation. If the student confirms use of AI technology, you can enforce the penalty outlined in your syllabus and still work with them to salvage the semester.]
While detectors of AI-generated language are not always fully reliable, they can produce useful evidence for taking a case to the Office of Student Conduct. These detectors will develop over time, and the “best” product available will change as the technology advances. See the section below for more information. If you have any questions about how to interpret the results from these tools, please contact the Office of Student Conduct.
After reviewing the student’s previous work, having a conversation with the student if you think that could be productive, and reviewing the results of an AI-generated-text detector, you may consider reporting the issue to the Office of Student Conduct. The Student Conduct Process at UTC uses the Preponderance of the Evidence standard, which requires showing that a violation more likely than not occurred. To best support an Honor Code allegation, it is important to communicate your expectations and boundaries clearly, consistently, and as often as possible.
For any suspected violation of the UTC Honor Code, including the use of generative AI, please document your concern and any conversation you have with the Office of Student Conduct by using the electronic reporting form (www.utc.edu/hcreport). All reports are kept on file with the Office of Student Conduct, whether the information results in a grade change through the Honor Code Process or is held as an informational report in case of future incidents with the same student. When submitting a report, please include:
- Syllabus statement, presentation slide, or assignment expectations regarding AI text generators.
- A copy of the suspicious work.
- A copy of other writing produced by the student (if you have it).
- A summary of the incident as well as any conversation you have had with the student regarding the incident.
- The results from a detection tool for generative AI writing.
Generative AI Detectors
As the capabilities of Generative AI evolve, so does reliance on AI content detectors to identify AI-generated content. It is important to acknowledge the uneven reliability of these tools. Despite their advanced technology, AI content detectors often produce false positives or fail to detect actual AI-generated content. For instance, a false positive rate of just 4%, applied across all of the human-written work submitted on our campus, could translate into hundreds or even thousands of submissions wrongly flagged as AI-generated.
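To make the scale of that 4% figure concrete, here is a minimal back-of-the-envelope sketch in Python. The semester-long submission count is a hypothetical placeholder, not an actual UTC statistic, and the calculation simply multiplies that count by the false positive rate.

```python
# Rough illustration of how a small false positive rate scales campus-wide.
# The submission count below is a hypothetical assumption, not UTC data.

def expected_false_positives(human_submissions: int, false_positive_rate: float) -> float:
    """Expected number of human-written submissions wrongly flagged as AI-generated."""
    return human_submissions * false_positive_rate

if __name__ == "__main__":
    submissions = 50_000   # assumed human-written submissions per semester (placeholder)
    fp_rate = 0.04         # the 4% false positive rate cited above
    flagged = expected_false_positives(submissions, fp_rate)
    print(f"Expected wrongly flagged submissions: {flagged:.0f}")  # prints 2000
```

Adjusting the submission count to reflect a single course or college shows how quickly even a small error rate accumulates into a meaningful number of false accusations.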
A further complication is the variable reliability of AI content detectors, which can change dramatically from day to day. The type of content being assessed, updates to detectors’ algorithms, and advancements in the algorithms of Generative AI models can all impact their effectiveness. Additionally, even when a tool is reported as highly reliable, the methodologies used to assess its reliability can differ greatly, leading to varying outcomes and effectiveness.
We believe that these problems—stemming from inaccuracies and fluctuating reliability—could lead to undue stress and confusion for both students and faculty. When determining the authenticity of submitted work, we advocate for a more holistic approach that includes critical human evaluation and fostering open conversations with students about academic integrity.
Below are the free detectors we find to be most reliable at this time. Please note that you must take steps to protect student information and privacy if you opt to utilize these tools; no identifiable information should ever be entered into them.