Which is more persuasive—an AI-voice assistant or a human?

Thu, Dec 19, 2019


OK, all you working (or even thinking about) AI-voice and brands. Time to strap on your propeller beanies.

Which is more persuasive—an AI-voice assistant or a human?

Step right this way, and buy your intellectual E-ticket for the week.

You’re about to venture forth into the world of social psychology’s construal theory…and learn why this kind of thinking is foundational to your brand’s AI-voice planning.

Dangerous? Only if thought-provocation is a scary thing. 😊

Your tour guides today are Tae Woo Kim of Indiana University’s Kelley School of Business, and his faculty adviser, Dr. Adam Duhachek. We’ll follow their recent and very relevant (to AI-voice and commerce) paper, Artificial Intelligence and Persuasion: A Construal-Level Account.

Let’s start with construal theory. (Open wide – this is good for you.)

At the risk of over-simplification, it’s all about the psychological distance between the self and an event. That psychological distance determines how a person mentally represents the event: the person’s so-called “construal level.”

Psychological distance is measured on several dimensions: temporal (how soon is the event going to happen?), spatial (how far away is the event location?), social (does the event involve interpersonally close individuals?), and hypothetical (how likely is the event?).

The shorter the distance, the lower the construal level. The greater the distance, the higher the construal level.

It’s about concrete, feasibility thinking versus dreamy, desirability thinking.

Low-level construal occurs when the psychological distance between the self and the event is shorter: when the event occurs sooner, closer, to the people close to you, and with high likelihood. For example, an earthquake can lead to low-construal when it is happening now (versus one year later), in your town (versus the other side of the world), to you and your family (versus people you don’t know), and with high (versus low) probability of it happening (yes, my hotel room is swaying back and forth).

In low-level construal, you think concretely. You’re searching for feasibility, for how and now answers. Like, what do I do now?

In contrast, high-level construal focuses more on desirability. Imagine there’s a conference in an exciting tourist city – say, Rome. You’re interested. If the conference is scheduled next year (i.e. a high-construal event), you are likely to focus on “why” to attend the event. Will it be fun? Can I learn something? What else can I get by attending? Could I take my partner with me and turn it into a mini vacation?

In high-level construal, decision dimensions are lengthened. At the extreme, it means someday, somewhere, maybe with someone, and whenever. As such, you think abstractly—you can float on the winds of why.

Want to make a message effective? Create construal congruence.

Failure to grasp a consumer’s place on the construal framework can throw marketing messages way off target and flush marketing monies down the hole. You can see how construal theory ties to the shopper’s path to purchase. The funnel begins and is wide open when the shopper is in high-level construal thinking—hopes, dreams, aspirations. The funnel narrows, however, as desirability gives way to lower-level feasibility (Can I afford it? How will I use it? Where do I get it?) decision-making.

There it is: construal theory 101. You’ve got it. Congratulations. Now tighten the seat belts, because:

This is where it gets really interesting. Let’s apply this to AI-voice and chatbots.

In their study, the good Drs. Kim and Duhachek tested the persuasiveness of AI-delivered consumer advice (including from voicebots) using a construal theory framework. You can find their paper here.

Tae Woo and Adam – please explain!

OK. Let’s say you have a consumption decision before you. Now let’s say you’re receiving information, perhaps even advice, from an AI-voice assistant or a third-party platform bot.

Will you find it believable? Persuasive? Will the type of messaging (low- or high-construal) make a difference in effectiveness?

Might an AI-voice or bot response be more persuasive than a human?

Tae Woo and Adam set up an experiment. The first group of participants were asked to browse an online medical site and were told that they would receive instant medical advice from an online doctor. A real human.

The second group was instructed to browse a website describing the medical diagnosis abilities of IBM’s Watson AI service—especially in the realm of skin cancer—and were told they’d receive instant medical advice from Watson.

All participants were given a pre-test: how much did they trust their potential source of advice? The answer, and this is telling: there was no significant difference in trust between the human and Watson.

Then, all the participants filled out a questionnaire that purported to measure their risk of skin cancer. Once that was completed, they received medical advice—either a high-construal or a low-construal message—from their respective medical adviser (the human doctor or the Watson AI). And after that, they were asked about their intention to apply sunscreen.

Which message—from which source—led to a higher intention? Which was more persuasive?

When the message came from the human doctor, there was no significant difference between the low-construal (detailed and feasible) and high-construal (high-level and desirable) messages in persuading individuals to use sunscreen.

But when the message came from the IBM Watson AI service, the low-construal (detailed and feasible) message was clearly more effective than the high-construal. And—more importantly—the low-construal message was more persuasive when coming from Watson AI than from the human.

Whoa. Serious news for customer service leaders across consumer-facing industries.

But why?

Rosie the Robot: truth-teller.

Well, simply stated, we generally expect bots to be low-construal. Detailed. Data-rich. Capable of remembering everything. Monosyllabic. And, incapable of devious or manipulative thought. It’s an expectation that has probably been shaped from the first days of science fiction movies and by Rosie the Robot Maid of The Jetsons.

Hmmm.

But what happens in an age of AI-voice assistants that learn? What happens in an era of Google Duplex, where a bot is multi-syllabic, with the pauses, stutters, and hesitations that mark human speech?

What happens when technology passes Rosie by?

What happens when the technology advances beyond conventional expectation? As it is doing now?

Tae Woo and Adam then took their study one level further, this time having participants interact with Amazon’s Alexa AI-voice assistant.

A similar set-up. Both groups read about Alexa. But one group was told that Alexa had an experiential learning capacity. No longer a mere robot. Now, potentially, a thinker.

The result? For those who perceived Alexa as more than a fixed-answer bot, the high-construal message was more effective than the low-construal one.

Takeaway: they’ll believe it—if they believe it’s possible. Messaging about the AI-voice platform itself will increase the effectiveness of AI-voice-delivered messaging.
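The pattern across the two experiments can be distilled into a toy rule of thumb. The sketch below is purely illustrative (the function name, labels, and logic are mine, not the authors’ model): lead with low-construal copy when the source is an AI the audience sees as a fixed-answer bot; high-construal messaging becomes viable once the audience believes the AI can learn.

```python
def pick_construal(source: str, perceived_as_learner: bool = False) -> str:
    """Toy message-selection rule distilled from the findings above.

    Returns which construal level of message to lead with.
    Illustrative only; not the researchers' model.
    """
    if source == "human":
        # Human advisers: the study found no significant difference
        # between low- and high-construal messages.
        return "either"
    if source == "ai" and not perceived_as_learner:
        # Fixed-answer bot expectations: concrete, feasibility-focused
        # (low-construal) messages were clearly more effective.
        return "low"
    # AI framed as capable of learning: high-construal messages
    # became the more effective option.
    return "high"
```

So `pick_construal("ai")` points you to low-construal copy, while `pick_construal("ai", perceived_as_learner=True)` points you high. Crude, but it captures the congruence idea: match the message to the audience’s expectation of the messenger.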

OK, kids, the ride’s over. Take a deep breath. Unbuckle the beanies. What’d we learn?

First, that AI-bots can be more effective than humans in delivering low-construal messages. Because we the humans generally expect AI-based devices to be unerringly accurate and emotion-free. Which may generally be our preference when in search of detailed status information.

It raises the question: should call center and customer service bots be more robotic in tone? Might they be more believable?

Second, that we the humans will respond to AI-voice at the level of our expectations. Silicon Valley data scientists may be extremely ready for high-construal messaging from AI-voice platforms and chatbots. My truck-driving cousin with a flip phone may not.

Who’s your customer base?

Third is not a lesson but an open question, one that grows sharper as we learn about construal theory. Is it possible that the always-there, always-ready, always-on, always-at-your-command presence of an AI-voice assistant compresses psychological distance?

Is it—potentially—the perfect service companion? Is it Rosie the Robot writ large for every consumer?

Exciting new world. Exciting research from Tae Woo and Adam.

And with the promise of more to come from the researchers at Indiana’s Kelley School of Business.

Voice in commerce.

Can you hear it coming?

#OpenVoice #ConversationalCommerce #MITAutoIDLaboratory #IUKelleySchool #IndianaUniversity