Telltale Signs That Something Has Been Written by ChatGPT
Recently, I came across a LinkedIn post discussing the telltale signs that a piece of writing has been generated by ChatGPT. As the conversation unfolded, with others adding their own observations, I decided to compile and share the insights in this article.
While I cannot find the original post to properly credit the author, several points stood out and prompted me to dig deeper:
- Use of the Oxford comma
- Dashes without spaces
- Capitalized hashtags
- The rocket ship emoji 🚀
Human Contributions to the List
The LinkedIn community had plenty to say on the matter. The following additions came from those who commented on my post:
- Frequent use of contractions
- Flowery, overly dramatic words like:
  - Delve
  - Unleash
  - Realm
- Titles where “All First Letters Are Capitalized”
- From a non-U.S. perspective: American spelling such as behavior versus behaviour, realize versus realise, labor versus labour, and color versus colour
- Writing that feels “vanilla” or robotic—lacking personality, too polished, or impersonal
- Use of symbols like ## or **
- Emojis beyond the rocket ship
ChatGPT’s Perspective
For fun, I asked ChatGPT for its input on how people spot its work. Here is what it offered:
- An overly balanced and neutral tone
- Excessive politeness
- Structured, formulaic formatting
- Repetition of key points
- Formal sentence construction
- Consistent use of inclusive language
- Predictable, standardized phrases and idioms
- Frequent summarization
- Reliance on AI-related terminology
The list is quite comprehensive, and it is amusing that even AI can identify its own style, if not perfectly.
How to Prove Something Was Not Written by ChatGPT
Ironically, while writing this article, I realized that I had originally misspelled “telltale” as “telltail” in my LinkedIn post. A simple mistake like that might be one of the clearest indicators that a human is behind the keyboard, not ChatGPT!
In Conclusion
Recognizing whether content is generated by AI is a growing challenge, but spotting small quirks and stylistic tendencies, both human and machine, can offer clues. From contractions and creative word choices to subtle errors, we can often tell whether a human or AI has taken the lead. And of course, human error, like a misspelled word, can be a dead giveaway.
Link to the Original Post
You can find the original LinkedIn post here. If you have something to add, please do, and I will update this article in the future.
Kirsty Nunez is the President and Chief Research Strategist at Q2 Insights, a research and innovation consulting firm with international reach and offices in San Diego. Q2 Insights specializes in many areas of research and predictive analytics, and actively uses AI products to enhance the speed and quality of insights delivery while still leveraging human researcher expertise and experience.