Detecting Artificial Intelligence (AI) Plagiarism

Can Turnitin detect AI-produced writing?

You may have heard that Turnitin released a preview of their AI-detection tool in April 2023. Due to concerns about some of its features, the University is reviewing it in more detail prior to deciding whether to release it.

Please be aware that the existing version of Turnitin cannot detect content from ChatGPT or other AI tools. Even if you run your writing assignment prompt through an AI tool (e.g., Bard, ChatGPT, or Claude) several times and upload the results to Turnitin, there is still likely to be enough variation between outputs to prevent Turnitin's Similarity Report from matching any given student's AI-generated content.

Additional information and resources from Turnitin on identifying and dealing with AI-based writing are also available.

What is an AI detector?

These are applications that predict the likelihood that a piece of writing was created by an AI or a human. Typically, they look at characteristics of the writing, particularly how random it is. Humans tend to use a greater variety of words, show more randomness in their spelling, grammar, and syntax, and generally write with more complexity than an AI. Some detectors give a verbal or graphical indication of how strongly they rate the text as human- or AI-written. Others return results in terms of perplexity (a measure of randomness within a sentence) and burstiness (a measure of randomness between sentences) as scores, graphs, or color coding. Lower perplexity and burstiness scores point toward an AI, with higher ones pointing toward human authorship.
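
For the curious, here is a minimal, illustrative sketch of what those two scores measure. It uses a toy unigram word-frequency model rather than the large language models real detectors rely on, and the function names and reference-corpus approach are our own simplifications, not any vendor's method. The idea it demonstrates: lower, flatter scores suggest more predictable (more AI-like) text.

    import math
    import re
    from collections import Counter

    def sentence_perplexity(sentence, counts, total, vocab):
        """Perplexity of one sentence under a toy unigram model with add-one smoothing."""
        words = re.findall(r"[a-z']+", sentence.lower())
        if not words:
            return 0.0
        log_prob = sum(math.log((counts[w] + 1) / (total + vocab)) for w in words)
        return math.exp(-log_prob / len(words))

    def perplexity_and_burstiness(text, reference_text):
        """Mean per-sentence perplexity of `text`, plus burstiness
        (the standard deviation of perplexity across sentences)."""
        # Build the toy word-frequency model from any large sample of ordinary prose.
        ref_words = re.findall(r"[a-z']+", reference_text.lower())
        counts, total, vocab = Counter(ref_words), len(ref_words), len(set(ref_words))

        sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
        scores = [sentence_perplexity(s, counts, total, vocab) for s in sentences]
        mean = sum(scores) / len(scores)
        burstiness = math.sqrt(sum((x - mean) ** 2 for x in scores) / len(scores))
        return mean, burstiness

Real detectors use far more sophisticated models, but the shape of the calculation is the same: score how predictable each sentence is, then look at how much that predictability varies across the whole piece.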

We strongly discourage using ChatGPT or similar AIs (e.g., Bing AI Chat, Bard, Claude) to determine whether a paper was written by an AI or a human. They produce false results at a very high rate regardless of who or what wrote the paper, and they will produce plausible rationalizations to defend their answers if asked. This is the worst way to check for AI plagiarism.

Are AI detectors reliable? 

No, at best they are indicative. Published reliability claims vary greatly, from about 26% to 80% for free detectors. In other words, expect them to be wrong between a fifth and three-quarters of the time. Those figures apply to the free detectors already available in early 2023. Turnitin claims a false positive rate of 1% (i.e., 1% of submissions flagged as containing AI content will actually have no AI-generated content) and sets a high threshold for flagging text as AI-generated in order to avoid additional false positives. False positive rates may be many times higher for non-native writers of English.

It is possible they will improve, but this should be viewed as an arms race rather than a stable situation. For instance, recent advances with ChatGPT (particularly using GPT-4) show that it is possible to coach it to write with more complexity and fluency, making it harder to detect. This is particularly true of students using well-engineered prompts. By prompt engineering, we mean the creation of fully developed questions and instructions (sometimes including data) to elicit the desired kind of results from the AI. 

If you are going to use these tools, we recommend that you check your sample with more than one.

Are there privacy or other issues? 

Free AI detectors have not been vetted by the University. For now, this is strictly a case of use at your own risk. You should never feed them any content that allows identification of the student. We also do not know whether or how they store or reuse submitted content. The same applies to feeding text from student papers back into an AI for evaluation.

Many faculty will enter parts of student papers into Google or other search engines to try to find matches, so this may not seem so different to you. You may wish to consider the differences between that and pasting or uploading all (or large parts of) a paper into an application. Beyond privacy, there may be questions of student copyright to consider. 

Because we have a license and agreements with Turnitin that cover FERPA and meet the University's interpretation of student copyright, these considerations do not apply to Turnitin tools.

What should I look for when reading a paper? 

There are also telltale signs to look for, though, again, these may change as the technology evolves. The list below is based on ChatGPT.

  • Look at the complexity and repetitiveness of the writing. AIs are more likely to write less complex sentences and to repeat words and phrases more often.
  • If the assignment has a bibliography, look for made-up or mixed-up entries. Some will be obviously odd, such as an author publishing an article after their death, or a German author writing in English but publishing in a German-language journal. One of the simplest checks: if the bibliography provides URLs (particularly DOI references), try them (see the sketch after this list). In one test we ran, of eight DOI references, two pointed to other articles and six were nonexistent. Note that it is now possible to force GPT (such as Bing AI or ChatGPT with some browser plugins) to search Google Scholar or PubMed when generating citations; in many cases, this will produce real, plausible articles.
  • Look for egregious factual errors. At least on some subjects, ChatGPT will insert information that is flatly impossible. While students might make such mistakes, in combination with other factors, the errors are often ones a human is unlikely to make. Remember, AIs do not understand what they are writing. The phrase "stochastic parrots" is often used to describe them, because they work out which words are most likely to follow other words and string them together.
  • Look for grammar, syntax, and spelling errors. These are more likely to be mistakes a human author would make.
  • If you have a writing sample from the student that you know is authentic, compare the style, usage, etc. to see if they match up or vary considerably.  
  • Does the paper refer directly to or quote the textbook or instructor? The AI is unlikely to have textbook access (yet) or know what is said in class (unless fed that in a prompt). 
  • Does the paper have self-references that refer back to the AI by name or kind? In some cases, students have left ChatGPT's references to itself in the text.
  • Consider giving ChatGPT your writing prompt and seeing how its output compares to student submissions.
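
To follow up on the DOI suggestion above, here is a rough sketch of how those links could be checked in bulk. It assumes the Python requests library and the public doi.org resolver; the function name and example DOIs are ours. A DOI that resolves still needs to be checked by hand to confirm it matches the cited title and authors (recall that some fabricated references point at real but unrelated articles), and some publisher sites block automated requests, so treat a failure as a cue to check manually.

    import requests

    def doi_resolves(doi, timeout=10):
        """Return True if the DOI resolves through the public doi.org resolver."""
        url = "https://doi.org/" + doi.strip()
        try:
            # doi.org redirects registered DOIs to the publisher and returns 404 otherwise.
            response = requests.head(url, allow_redirects=True, timeout=timeout)
            return response.status_code < 400
        except requests.RequestException:
            return False

    # Example: DOIs copied from a suspect bibliography.
    for doi in ["10.1000/182", "10.9999/not.a.real.doi"]:
        print(doi, "resolves" if doi_resolves(doi) else "does not resolve")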

What are some ways I can structure writing assignments to discourage bad or prohibited uses of AI?

  • Consider requiring students to quote from specific works, such as the textbook or class notes. AI is unlikely to have access to either, though students might include quotes in the writing prompt.
  • Add reflective features to the assignment. These could be written or non-written, such as having students discuss live, or record themselves (e.g., in VoiceThread or Panopto), talking about what they found in their research and reflecting on their writing.
  • Use ChatGPT or other tools as part of the writing process, for instance for brainstorming, but also have students critique the output, consider the ethics of using it, etc.
  • Creating a writing assignment with scaffolding (including outlines, rough drafts, annotated bibliographies, incorporating feedback from peer reviews and the instructor, etc.) could help with some aspects of AI misuse. It might not be very effective for the outline or first draft, as those could still be generated by the AI. Asking for an annotated bibliography, however, is not something the software, with its current limitations, can do with much success. Peer review and instructor feedback need to be detailed and substantive; otherwise, students could give those as further prompts to the AI, generating new versions of the paper in an iterative fashion.
  • Contact your campus teaching-learning center, writing program, or Missouri Online for ideas and help working out assignments that promote use of AI in positive ways or mitigate possible harms.
