
Are You Chatting With a GPT Bot?

This article aims to teach you several popular tricks and techniques to detect when you're chatting with a social media bot, dating site bot, or help desk bot, especially when there's no warning and the bot is trying to impersonate a human. I'm specifically referring to chatbots like ChatGPT, not just any bot.

Artificial intelligence solves many of our problems and often lets us accomplish tasks faster, more productively, and better than before. However, AI also poses a serious threat in many ways, and I'm not just talking about jobs. It makes a lot of mistakes, often provides false information, wastes our time, undermines our common sense, and limits our ability to research and draw our own conclusions.

AI also threatens privacy and security, both individually and collectively, because it can be used to create fake news, photos, and videos like never before. Finally, and this is the main topic of this article, AI can convincingly simulate real people, impersonating them in chats, on social media, on dating sites, in support services, and much more. While services usually disclose when they use AI, providers aren't always honest, sometimes passing off AI-generated personas as human. This is common on social media (fake followers, likes, etc.), but also in chats and on dating sites. We therefore want to alert you to this danger and give you effective tools to help you determine whether you're chatting with an AI or a real person.

We don't claim to offer a perfect, foolproof solution, but we're confident that after reading this article, you'll be much better at determining whether you're being deceived or treated honestly.

How do chatbots work?

Despite the image at the beginning of this article, a futuristic robot holding a cell phone, chatbots aren't so impressive in reality. They're simply computer programs, usually written in Python or other languages, that generate messages and conversations in a typically human way. They do this by statistically analyzing billions of texts and conversations. Furthermore, using complex mathematical models that loosely mimic the functioning of human neurons, they generate responses by predicting which words are most likely to come next.
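To make that idea concrete, here's a toy sketch in Python (the kind of language mentioned above) of the statistical principle at work: count which word tends to follow which in a small corpus, then generate a reply by sampling from those counts. Real chatbots replace this simple counting with huge neural networks trained on billions of texts, but the underlying task of predicting the next word from what came before is the same. The corpus and the sampling strategy here are invented purely for illustration.

```python
import random
from collections import defaultdict

# Toy corpus: a few short chat-like sentences.
corpus = (
    "hi how are you today "
    "i am fine thank you how are you "
    "i am happy to chat with you today"
).split()

# "Training": count which word follows which in the corpus.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start="hi", length=8):
    """Generate a reply by repeatedly sampling a likely next word."""
    word, output = start, [start]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

print(generate())  # e.g. "hi how are you today i am fine thank"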

However, they're not perfect copies of us. Despite the imitation, they remain computer programs, usually trained to respond to specific topics. Chatbots clearly don't always need the generative power of ChatGPT, which is a more general model and therefore requires significantly more processing power and data. There are also smaller chatbots, and while any GPT-style program can be detected by its response patterns, simpler chatbots are even easier to spot because they're trained exclusively on a single subject. For instance, the products and services offered by an influencer or marketing specialist could easily be presented by an AI trained on a personal computer, because its area of expertise is so narrow.
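As a rough illustration of what "detecting a GPT-style bot by its response patterns" can look like, here's a hypothetical Python sketch that scores a conversation on a few bot-like traits: stock phrases, suspiciously fast and uniform reply times, and unnaturally tidy messages of very similar length. The phrase list, thresholds, and scoring weights are all assumptions chosen for the example, not a proven detector.

```python
import re
import statistics

# Hypothetical heuristic: score a conversation for a few bot-like patterns.
# `messages` is a list of (reply_text, seconds_to_reply) pairs.

STOCK_PHRASES = [
    "as an ai language model",
    "i'm sorry for any inconvenience",
    "is there anything else i can help you with",
]

def bot_likelihood(messages):
    texts = [t.lower() for t, _ in messages]
    delays = [d for _, d in messages]
    score = 0

    # 1. Stock phrases that large language models tend to overuse.
    if any(p in t for t in texts for p in STOCK_PHRASES):
        score += 2

    # 2. Replies that arrive almost instantly and with very uniform timing.
    if delays and max(delays) < 3 and statistics.pstdev(delays) < 0.5:
        score += 2

    # 3. Perfectly punctuated replies of very similar length (humans are messier).
    lengths = [len(t.split()) for t in texts]
    if lengths and statistics.pstdev(lengths) < 3 and all(
        re.search(r"[.!?]$", t) for t in texts
    ):
        score += 1

    return score  # higher means more bot-like; 0-1 looks human

example = [
    ("I'm sorry for any inconvenience. Is there anything else I can help you with?", 1.2),
    ("As an AI language model, I do not have personal opinions.", 1.0),
]
print(bot_likelihood(example))  # prints 5 (very bot-like) for this exchange
```

In practice you would tune these checks against conversations you know to be human or automated; the point is simply that many of the telltale signs covered in this article can be checked automatically as well as by eye.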

Here are some typical signs that reveal you're talking to an AI pretending to be human:


Finally, here are some more general signs:

[Image: an example chat with an AI, in English]

Clearly, interacting with an AI isn't always bad. Sometimes, a response from an AI specifically trained to solve a problem in a particular context is much better and faster than waiting for a human agent. But it all depends on the context, and generally, you as the user should always be informed and never allow anyone to deceive you by pretending to be someone they're not.

More or Less AI?

What we've discussed up to now raises an obvious question: Is AI ultimately positive or negative? Well, it was inevitable that human progress would eventually reach this point. Throughout history, whenever a new technology has emerged, some jobs have been lost and new ones have been created. Hopefully, this will also happen with AI. Without buying into Terminator-style myths about AI enslaving us, we must accept progress for what it is, because we can't fight reality.

However, it's crucial to enact strict laws and regulations to prevent the inevitable abuses of AI. For example, we need laws requiring users to be informed if they're interacting with a machine, prohibiting the creation of fake news, and mandating the clear labeling of AI-generated content.

Above all, language model generators like ChatGPT should be required to incorporate patterns that allow both humans and other machines to unequivocally identify human-generated and AI-generated content. Such laws would help limit abuses and highlight the advantages AI offers. Only then can this recent and highly controversial innovation be considered truly beneficial for progress and productivity.
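One way such identification patterns can work is statistical watermarking: the generator subtly favors a pseudorandom "green" subset of words at each step, and a detector later checks whether a text contains far more of those green words than chance would allow. The Python sketch below shows only the detection arithmetic at toy scale; the hashing scheme, the 50/50 green split, and the sample sentence are assumptions made for illustration, not the method any real provider uses.

```python
import hashlib
import math

def _is_green(prev_word, word, green_ratio=0.5):
    """Deterministic pseudo-random test: is `word` in the 'green' subset
    selected by hashing the previous word? (Assumed scheme, for illustration.)"""
    digest = hashlib.sha256(f"{prev_word}:{word}".encode()).hexdigest()
    return (int(digest, 16) % 1000) < green_ratio * 1000

def green_fraction(words, green_ratio=0.5):
    """Fraction of words that land in the green subset seeded by their predecessor."""
    pairs = list(zip(words, words[1:]))
    if not pairs:
        return 0.0
    hits = sum(_is_green(p, w, green_ratio) for p, w in pairs)
    return hits / len(pairs)

def z_score(fraction, n, green_ratio=0.5):
    """How many standard deviations the observed green fraction is above chance."""
    return (fraction - green_ratio) * math.sqrt(n / (green_ratio * (1 - green_ratio)))

# Ordinary human text should land near the expected green fraction (0.5 here).
text = "the cat sat on the mat and looked out of the window".split()
frac = green_fraction(text)
print(round(frac, 2), round(z_score(frac, len(text) - 1), 2))
```

Ordinary human text should produce a low score, while text from a generator that deliberately favors green words would stand out with a much higher one, which is exactly the kind of machine-checkable signal such a law could require.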

About the author


Danilo Renzi

is a web developer, webmaster, content creator, and tourism expert. He has worked as a freelance partner with the Canadian company Ionenet S.A. since 2003. He was also a partner at La Coronación S.A., an incoming travel agency in Cuba, from 2003 to 2010, and he continues to work as a freelancer today.
"When a translation is made by the original author," he says, "it is not really a translation, just another version of the same writing. These are the only translations that do not betray!" For any inquiry, you can contact him by filling out this contact form or through any of the social media channels mentioned on this site. You can also visit our about us page to learn more.