
Fire your salesmen. Robots can be more persuasive than humans.


Kamila Hankiewicz - November 13, 2019

“What you’re going to hear is the Google Assistant actually calling a real salon to schedule an appointment for you,”

— Sundar Pichai, Google’s CEO, told the audience a year ago when introducing Google Duplex, an automated voice assistant capable of generating human-like speech to make phone calls and book appointments on behalf of its user.

Recent technological breakthroughs in natural language processing and artificial intelligence have made it possible for machines, or bots, to pass as humans. Google Duplex’s speech is so realistic that the person on the other end of the line may not even realise they are talking to a bot. Although there is broad consensus that machines should be transparent about how they make decisions, it is less clear whether they should be transparent about who they are.

A team of computer science researchers at NYU Abu Dhabi decided to analyse how human behaviour changes when people discover they are talking to a bot. They conducted an experiment to study how people interact with bots they believe to be human, and how such interactions change once the bots reveal their identity. They found that bots are more efficient than humans at certain human-machine interactions, but only when they are allowed to conceal the fact that they are not human.

In their paper titled Behavioral Evidence for a Transparency-Efficiency Tradeoff in Human-Machine Cooperation, published in Nature Machine Intelligence, the researchers describe an experiment in which participants played a cooperation game with either a human associate or a bot associate. The game was based on a classic game-theory setup, the Iterated Prisoner’s Dilemma, in which each of the interacting parties can either act selfishly, in an attempt to exploit the other, or cooperatively, in an attempt to attain a mutually beneficial outcome.
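
To make the game’s incentive structure concrete, here is a minimal Python sketch of the dilemma’s payoff logic. The payoff values below are the textbook ones; the post does not specify the stakes used in the study, so treat the numbers as an illustrative assumption.

```python
# Minimal sketch of the (Iterated) Prisoner's Dilemma payoff logic.
# Payoff values are the standard textbook ones, not necessarily
# those used in the NYU Abu Dhabi study (an assumption).

PAYOFFS = {
    # (my move, partner's move) -> (my payoff, partner's payoff)
    ("cooperate", "cooperate"): (3, 3),  # mutual benefit
    ("cooperate", "defect"):    (0, 5),  # I am exploited
    ("defect",    "cooperate"): (5, 0),  # I exploit my partner
    ("defect",    "defect"):    (1, 1),  # mutual selfishness
}

def play_round(my_move: str, partner_move: str) -> tuple[int, int]:
    """Return (my payoff, partner's payoff) for a single round."""
    return PAYOFFS[(my_move, partner_move)]

def play_game(my_moves, partner_moves):
    """Iterated version: the same pair plays repeatedly, so cooperation
    can be rewarded -- or punished -- in later rounds."""
    my_total, partner_total = 0, 0
    for mine, theirs in zip(my_moves, partner_moves):
        a, b = play_round(mine, theirs)
        my_total += a
        partner_total += b
    return my_total, partner_total

if __name__ == "__main__":
    # Over repeated rounds, sustained mutual cooperation beats
    # sustained mutual defection -- the core tension of the game.
    print(play_game(["cooperate"] * 3, ["cooperate"] * 3))  # (9, 9)
    print(play_game(["defect"] * 3, ["defect"] * 3))        # (3, 3)
```

Defecting is the tempting move in any single round, but a partner who has been exploited can retaliate in later rounds, which is why persuading the other side to cooperate matters.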

The researchers gave some participants incorrect information about the identity of their associate: some of those who interacted with a human were told they were interacting with a bot, and vice versa. Through this experiment, the researchers could determine whether people are prejudiced against social partners they believe to be non-human, and assess the degree to which such prejudice, if it exists, reduces the efficiency of bots that are transparent about their bot nature.

The results showed that bots posing as humans were more efficient at persuading the partner to cooperate in the game. However, as soon as their true nature was revealed, cooperation rates dropped and the bots’ superiority was negated.

Realistic bots could turn business development departments upside down. But is it ethical to build systems that trick humans into thinking they are talking to a real person? Should we prohibit bots from passing as humans, and force them to be transparent about who they are?

How would you feel if, after a few minutes of an engaging phone conversation, you discovered you were talking to a bot? Or if you found out this text was written by one?* 🤯

*It wasn’t. I actually spent some time writing it up to share it with you.


Kamila Hankiewicz

Unlocking human potential by removing trite tasks.

