Are You Chatting With a GPT Bot?

This article aims to teach you several popular tricks and techniques to detect when you're chatting with a social media bot, dating site bot, or help desk bot, especially when there's no warning and the bot is trying to impersonate a human. I'm specifically referring to chatbots like ChatGPT, not just any bot.
Artificial intelligence solves many of our problems and often lets us accomplish tasks faster, more productively, and better than before. However, AI also poses a serious threat in many ways, and I'm not just talking about jobs. It makes a lot of mistakes, often provides false information, wastes our time, undermines our common sense, and limits our ability to research and draw our own conclusions.
AI also threatens privacy and security, both individually and collectively, because it can be used to create fake news, photos, and videos like never before. Finally, and this is the main topic of this article, AI can convincingly simulate real people, impersonating them in chats, social media, dating sites, support services, and much more. While services usually disclose when they use AI, providers aren't always honest, sometimes passing off AI-generated personas as human. This is common on social media (fake followers, likes, etc.), but also in chats and on dating sites. We therefore want to alert you to this danger and provide you with effective tools to help you determine whether you're chatting with an AI or a real person.
I don't claim to offer a perfect, always-working solution, but I'm confident that after reading this article, you'll be much better at determining whether you're being deceived or treated honestly.
How do chatbots work?
Despite the image at the beginning of this article — a futuristic robot holding a cell phone — chatbots aren't so impressive in reality. They're simply computer programs, usually written in Python or other languages, that generate messages and conversations in a typically human way. They do this by statistically analyzing billions of texts and conversations. Furthermore, using complex mathematical models, they mimic the functioning of human neurons to generate thoughts and ideas.
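To make the idea of "statistically analyzing texts" concrete, here is a deliberately tiny sketch: a bigram model that counts which word tends to follow which, then generates text by sampling those counts. Real chatbots use neural networks trained on billions of documents, not word counts, and the corpus and function names here are invented for illustration; only the underlying statistical intuition carries over.

```python
import random
from collections import defaultdict

# Toy corpus standing in for "billions of texts".
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which words follow which: the "statistical analysis" step.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=5, seed=0):
    """Generate text by repeatedly sampling a statistically likely next word."""
    rng = random.Random(seed)  # seeded for reproducibility
    words = [start]
    for _ in range(length - 1):
        candidates = follows.get(words[-1])
        if not candidates:
            break  # no observed continuation for this word
        words.append(rng.choice(candidates))
    return " ".join(words)

print(generate("the"))
```

Even this trivial model produces locally plausible word sequences, which hints at why large-scale versions of the same idea can sound so human.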
However, they're not perfect copies of us. Despite the imitation, they remain computer programs, usually trained to respond to specific topics. Chatbots clearly don't always need the generative power of ChatGPT, which is a general-purpose model and therefore requires significantly more processing power and data. There are also smaller chatbots, and while any GPT-style program can be detected by its response patterns, these simpler chatbots are even easier to spot because they're trained exclusively on a single subject. For instance, the products and services offered by an influencer or marketing specialist could easily be presented by an AI trained on a personal computer, since such a narrow area of expertise requires very little context.
Here are some typical signs that reveal you're talking to an AI pretending to be human:
- Limited Conversational Scope: Try to steer the conversation to a more complex or different topic. An AI will quickly indicate it doesn't understand, or, if it's a more advanced AI, it simply won't respond at all, for no obvious reason.
- Unresponsive to Provocative Tones: If you try a more provocative or direct tone, an AI will likely stick to a polite and politically correct response. While a trained customer service rep might also do this, they'll usually try to change the subject or transfer you. An AI will remain on its pre-programmed track.
- Inflexible Service Offers: If it's offering a service, try a counter-offer. For example, if it's pitching social media management, ask about business partnerships. If it keeps pushing the same services and seems unable to deviate, it's likely an AI.
- Topic Drift: In a general chat, if the AI shifts the conversation to a different topic without showing it understands your previous point, it's probably trapped within its training parameters.
Beyond these conversational probes, the bot's behavior itself offers clues:
- Instant, Lengthy Responses: It responds instantly with large amounts of text, as if pasting prepared replies or typing far faster than any human could (average human typing speed is around 40 words per minute).
- Literal Interpretation: It takes everything literally, often to an exaggerated, almost grotesque degree (though this can be a trait of some autistic individuals).
- Vague, Generic Answers: It provides answers vaguely related to your question, sometimes repeating your words to simulate attention—an old trick.
- Unwavering Availability: It's always available, no matter what time you contact it, and consistently gives the same answers to repeated questions with minimal variation.
- Date Query: This trick is well known and widely discussed on social media. Ask it for today's date. An overly formal response like "Today is Thursday, October 26, 2023," delivered without any surprise or hesitation, is a strong indicator. A human would likely respond less rigidly, even when trying to be helpful.
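The "instant, lengthy responses" sign can even be checked mechanically: divide a reply's word count by the minutes it took to arrive and compare the result with plausible human typing speed. This is a rough sketch, not a reliable detector; the 80 wpm threshold is an assumption chosen to allow for fast typists, and all function names are invented for illustration.

```python
# Assumed ceiling for sustained human typing; average is closer to 40 wpm.
HUMAN_MAX_WPM = 80

def implied_wpm(reply_text: str, delay_seconds: float) -> float:
    """Words per minute implied by the reply's length and arrival delay."""
    words = len(reply_text.split())
    return words / (delay_seconds / 60)

def looks_pasted(reply_text: str, delay_seconds: float) -> bool:
    """Flag replies typed implausibly fast, as if generated or pasted."""
    return implied_wpm(reply_text, delay_seconds) > HUMAN_MAX_WPM

# 120 words arriving after only 10 seconds implies 720 wpm.
reply = "word " * 120
print(looks_pasted(reply, 10.0))  # True
```

A human agent pasting canned snippets will also trip this check, so treat it as one signal among many rather than proof on its own.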
Finally, here are some more general signs:
- Consistent Response Time: More advanced models might use a consistent delay between receiving a message and responding; the naturally variable, irregular delays of a human are less common among bots.
- Uniform Response Complexity: AI responses tend to maintain a consistent level of complexity and structure, whereas human responses vary in style, rhythm, and complexity.
- Errors and Repetition: AI responses are usually free of spelling mistakes, yet may still contain subtle syntactic or semantic errors. Paradoxically, perfect spelling can itself be a sign of artificiality.
- Lack of Cultural/Emotional Nuance: AI often misses cultural, emotional, or contextual subtleties.
- Being Very Direct: Asking direct questions about personal opinions or very specific experiences can reveal who you're interacting with. If you say something absurd and get a reply treating it as perfectly normal, you're likely talking to an AI.
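The "consistent response time" sign can likewise be quantified: a human's delays are noisy, while a bot's are often nearly uniform. A minimal sketch using the coefficient of variation (standard deviation relative to the mean); the 0.2 cutoff and the sample delays are illustrative assumptions, not empirically validated values.

```python
import statistics

def coefficient_of_variation(values):
    """Standard deviation relative to the mean; low means suspiciously uniform."""
    mean = statistics.mean(values)
    return statistics.stdev(values) / mean if mean else 0.0

def looks_automated(delays_seconds, cutoff=0.2):
    """Flag a conversation whose response delays barely vary at all."""
    return coefficient_of_variation(delays_seconds) < cutoff

# Near-identical bot delays vs. a human's noisy, distracted timing.
bot_delays = [2.0, 2.1, 1.9, 2.0, 2.05]
human_delays = [4.0, 31.0, 2.5, 12.0, 65.0]
print(looks_automated(bot_delays))    # True
print(looks_automated(human_delays))  # False
```

The same measure applied to sentence lengths instead of delays would capture the "uniform response complexity" sign from the list above.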
Clearly, interacting with an AI isn't always bad. Sometimes, a response from an AI specifically trained to solve a problem in a particular context is much better and faster than waiting for a human agent. But it all depends on the context, and generally, you as the user should always be informed and never allow anyone to deceive you by pretending to be someone they're not.
More or Less AI?
What we've discussed up to now brings up an obvious question: Is AI ultimately positive or negative? Well, it's inevitable that human progress would eventually reach this point. Throughout history, whenever a new technology has emerged, some jobs have been lost, and new ones have been created. Hopefully, this will also happen with AI. Without buying into Terminator-style myths about AI enslaving us, we must accept progress for what it is because we can't fight reality.
However, it's crucial to enact strict laws and regulations to prevent the inevitable abuses of AI. For example, we need laws requiring users to be informed if they're interacting with a machine, prohibiting the creation of fake news, and mandating the clear labeling of AI-generated content.
Above all, language model generators like ChatGPT should be required to incorporate patterns that allow both humans and other machines to unequivocally identify human-generated and AI-generated content. Such laws would help limit abuses and highlight the advantages AI offers. Only then can this recent and highly controversial innovation be considered truly beneficial for progress and productivity.