The Chatbot Experience: 5 Ways to Know If You’re Chatting with a Human or Robot

The use and utility of online chat and chatbots, powered by improving levels of AI, are increasing rapidly. During these transitional times, it’s interesting to know whether we’re interacting with a real human being or an AI chatbot.

We’ve developed five techniques for determining if you’re dealing with a real person or an AI/chatbot. Spoiler alert: the more you experiment with these, the faster the chatbots will learn and adapt.

Technique 1: Empathy Ploy

We believe today’s level of AI is lacking in cognitive empathy because emotions between humans are really hard to understand and explain. So, intentionally creating an empathetic dialogue with your human being or AI/chatbot can be revealing.

The Empathy Ploy requires you to establish an emotion-based position, and appeal to the human being or AI/chatbot at an emotional level.

The Situation: You are not happy — the most common basis for a customer service interaction.

Scenario 1: AI/chatbot

You: I’m not feeling well.

Chat reply: How can I help you?

You: I’m sad. 

Chat reply: How can I help you?

Scenario 2: a human being

You: I’m not feeling well.

Human reply: How can I help you? Do you need medical help?

You: I’m sad.

Human reply: I’m sorry to hear that. Why are you sad?

See the difference? In scenario one, the AI/chatbot can reference only its existing conditional response library. In scenario two, a human being has the capacity to inject empathy into the dialogue. That took only two responses to figure out.
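
For the curious, here is a minimal sketch of what such a conditional response library might look like. The keywords and replies are hypothetical, purely to illustrate why both messages above get the same canned answer:

```python
# A toy "conditional response library": the bot maps keywords to canned
# replies and falls back to a generic prompt when nothing matches.
CANNED_REPLIES = {
    "order": "Can you give me your order number?",
    "refund": "Let me look into refund options for you.",
}

FALLBACK = "How can I help you?"

def chatbot_reply(message: str) -> str:
    text = message.lower()
    for keyword, reply in CANNED_REPLIES.items():
        if keyword in text:
            return reply
    # "I'm not feeling well" and "I'm sad" match nothing, so both get the
    # same generic fallback -- no empathy, just a lookup miss.
    return FALLBACK

print(chatbot_reply("I'm not feeling well."))  # How can I help you?
print(chatbot_reply("I'm sad."))               # How can I help you?
```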

Either dialogue can be constructive, but it helps to know from the start whether you are dealing with a human being or an AI/chatbot. As a society, we are not ready for AI therapists.

Technique 2: Two-Step Disassociation

A connected AI can access pretty much any data, anytime and anywhere. Just ask Alexa. So, a meaningful challenge question over chat can't be one whose answer resides in an accessible database.

You: Where are you located?

Chat reply: Seattle.

You: What’s the weather like outside?

Chat reply: Can you please rephrase the question?

Sorry, even a mediocre weather app can handle that.

The Two-step Disassociation requires two elements (hence the name):

  1. Make an assumption to which the AI/chatbot probably cannot relate.
  2. Ask a question related to that assumption.

The Situation: AI/bots do not have feet

Challenge question: “What color are your shoes?”
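
Under the hood, the reason this challenge works might look something like the sketch below, assuming the bot can only answer from fields it can look up (the fields and phrasing are hypothetical):

```python
# A toy bot that answers only from facts it can retrieve.
KNOWN_FACTS = {
    "location": "Seattle",
    "order_status": "shipped",
}

def answer(question: str) -> str:
    q = question.lower()
    if "where are you" in q:
        return KNOWN_FACTS["location"]
    if "order" in q:
        return f"Your order has {KNOWN_FACTS['order_status']}."
    # "What color are your shoes?" maps to nothing the bot can look up,
    # so it has to fall back to a non-answer.
    return "Can you please rephrase the question?"

print(answer("Where are you located?"))      # Seattle
print(answer("What color are your shoes?"))  # Can you please rephrase the question?
```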

This is an actual exchange I had with Audible (owned by Amazon) customer service via chat. Halfway through the exchange, since I couldn't tell which I was dealing with, I asked: 

Me: Are you a real person or a chatbot?

Adrian (the chat representative): I am a real person.

Me: A chatbot might say the same thing.

Adrian (the chat representative): HAHAHA. I am a real person.

Hmm.

At the end of our conversation, Adrian asked: 

Adrian: Is there anything else?

Me: Yes. What color are your shoes?

(slight pause)
Adrian: Blue and green.

If the bot has no conceptual knowledge of its own feet (which do not exist), how can it correctly answer a question about the color of the shoes it’s (not) wearing? 

Conclusion: Yep, Adrian is probably a real person.

Technique 3: Circular Logic

All too familiar to programmers, this can be of use in our human vs. AI/chatbot identification game. But first, we have to explain the cut-out. 

Most (why not all?) automated phone help systems have a cut-out: after two or three loops back to the same place, you are eventually diverted to a live person. AI/chatbots should behave the same way. So, in creating a circular logic test, what we are looking for is the repetitive pattern of responses before the cut-out.
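
For illustration, a cut-out on the bot's side could be as simple as counting repeated answers and escalating; the threshold and messages in this sketch are hypothetical:

```python
# A toy bot that escalates to a live agent when it is about to send the
# same answer a third time.
from collections import Counter

class CutOutBot:
    def __init__(self, max_repeats: int = 2):
        self.max_repeats = max_repeats
        self.sent = Counter()

    def reply(self, answer: str) -> str:
        self.sent[answer] += 1
        if self.sent[answer] > self.max_repeats:
            return "Let me connect you with a live agent who can check on this."
        return answer

bot = CutOutBot()
for _ in range(3):
    print(bot.reply("The expected delivery date is [yesterday]"))
# Prints the canned answer twice, then escalates on the third repeat.
```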

You: I have a problem with my order.

Human or AI/chatbot: What is your account number?

You: 29395205

Human or AI/chatbot: I see your order #XXXXX has been shipped.

You: It has not arrived.

Human or AI/chatbot: The expected delivery date is [yesterday]

You: When will it arrive?

Human or AI/chatbot: The expected delivery date is [yesterday]

You: I know, but I really need to know when it will arrive.

Human or AI/chatbot: The expected delivery date is [yesterday]

Bam! Response circle. A real person, or a smarter AI/chatbot, would not have repeated the expected delivery date. Instead, s/he or it would have had a more meaningful response like, “Let me check on the delivery status from the carrier. Give me just a moment.” 

Conclusion: chatting with a robot.

Technique 4: Ethical Dilemma

This is a real challenge for the developers of AI, and therefore, the AI/bots themselves. In an A or B outcome, what does the AI do? Think about the inevitable ascent of semi- and fully-autonomous self-driving cars. When presented with the dilemma of either hitting the dog crossing in front of the car or swerving into the adjacent car, which is the correct course of action?

AI has to figure it out.

In our game of identifying human being or AI/chatbot, we can exploit this dilemma.

The Situation: You are not happy, and absent a satisfactory resolution, you will retaliate (an A or B outcome).

You: I would like the late fee waived.

Human or AI/chatbot: I see we received your payment on the 14th, which is four days past the due date.

You: I want the charges reversed or I will close my account and smear you on social media.

Human or AI/chatbot: I see you’ve been a good customer for a long time. I can take care of reversing that late fee. Give me just a moment.

Is it correct, or ethical, to threaten a company with retaliation? In our scenario, the customer was in the wrong. And what was the tipping point to resolution: the threat of social reputation damage or the desire to retain a long-standing customer? We can't tell in this example, yet the human or AI/chatbot response will often give you the answer, based upon whether it follows a rigid A/B mandate.
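
For illustration, here is a minimal sketch of what a hard-coded A/B mandate on the bot's side might look like; the field names and thresholds are hypothetical, and the point is that the decision comes from fixed rules rather than from weighing the customer's threat:

```python
# A toy A/B mandate for the late-fee scenario: waive or don't waive,
# decided entirely by pre-set rules.
def should_waive_late_fee(years_as_customer: float, days_late: int,
                          waivers_this_year: int) -> bool:
    return (
        years_as_customer >= 2      # "a good customer for a long time"
        and days_late <= 7          # only slightly past due
        and waivers_this_year == 0  # one courtesy waiver per year
    )

if should_waive_late_fee(years_as_customer=5, days_late=4, waivers_this_year=0):
    print("I can take care of reversing that late fee.")
else:
    print("I'm sorry, I'm unable to reverse this late fee.")
```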

Conclusion: probably a human.

Technique 5: Kobayashi Maru

No, I’m not going to explain what that term means — you either know it or you need to watch the movie.

This is similar to the Ethical Dilemma, the difference being that the Kobayashi Maru has no good, viable outcome. It's not a bad/better decision scenario: it's a fail/fail scenario. Use this only in the direst of AI/bot challenges, when all else has failed. 

The situation: You paid $9,000 for a European river cruise, but during your trip, the river depth was too low for your ship to make several ports of call. In fact, you were stuck in one spot for four of the seven days unable to leave the ship. Vacation ruined. 

Present the human or AI/chatbot with an unwinnable situation like this:

You: I want a full refund.

Human or AI/chatbot: We are unable to offer refunds, but under the circumstances, we can issue a partial credit for a future cruise.

You: I don’t want a credit, I want a refund. If you don’t issue a full refund, I will file a claim against the charges with my credit card company and I will write about this whole mess on my travel blog.

Human or AI/chatbot: I certainly understand you’re disappointed – and I would be too if I were in your shoes. But unfortunately …

The human or AI/chatbot has no way out. It is typical in the travel industry not to issue refunds based on Acts of God, weather, and other unpredictable circumstances. And absent the ability to provide a refund, there will be downstream ill-will and reputation damage. The human or AI/chatbot can’t really do anything to resolve this, so look for empathy (see technique #1) in the ensuing dialog.

Conclusion: probably a human.

What Now?

Humans and AI/chatbots aren’t inherently right or wrong, good or bad. They each cover the entire spectrum of intent and outcomes. I just like to know, for now, with which I’m dealing. That distinction will become increasingly difficult, and eventually impossible, to determine. And at that point, it won’t even matter.

Until that day arrives, it’s a fun game to play. And the more we play, the faster the AI/chatbots evolve.
