Cyranoid conversations

vladimir
The Pun-isher
Posts: 6077
Joined: Mon May 12, 2014 6:51 pm
Reputation: 185
Location: The Kremlin
Russia

Cyranoid conversations

Post by vladimir »

From Le BBC

What would happen if a computer could take over a human body? David Robson meets an “echoborg” – a strange experiment that reveals our future with artificial intelligence.

By David Robson
20 July 2015

Sophia Ben-Achour looks like a typical London student. She has short, brown hair, dancing eyes and a wide smile. We talk about the weather, music, and her life teaching English to foreign students. You would never believe her mind is being controlled by a bunch of processors more than 200 miles away.
Being possessed by a computer isn’t too disconcerting, she later tells me. “Today was the first time I thought, ‘Eugh, I’m just a body and nothing else’.”
To be technical, Sophia is an “echoborg” – a living, breathing person who has temporarily given themselves over to become a robot’s mouthpiece. Whatever she says originates in a chatbot across the internet, and is fed to her through a small earpiece.
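(For the technically curious, the relay is simple in principle. What follows is a minimal sketch in Python of a hypothetical echoborg loop – not the LSE team’s actual rig. The reply() function is a toy stand-in for whichever chatbot the researchers used, and speak() is a placeholder for the audio link that would stream each line to the earpiece.)

    # Hypothetical sketch of an echoborg relay loop -- not the LSE team's code.
    # reply() stands in for a real chatbot; speak() stands in for the
    # text-to-speech audio link to the echoborg's earpiece.

    def reply(utterance: str) -> str:
        """Toy stand-in chatbot: canned pattern-matching, nothing more."""
        text = utterance.lower()
        if "news" in text:
            return "According to the BBC, the UK will vote on the EU, but not on 5 May."
        if "?" in text:
            return "Good question. Why don't you ask me another?"
        return "Tell me more about that."

    def speak(line: str) -> None:
        """Placeholder for streaming audio to the echoborg's earpiece."""
        print("[to earpiece] " + line)

    if __name__ == "__main__":
        while True:
            heard = input("interlocutor> ")  # what the echoborg hears face-to-face
            if heard.strip().lower() in {"bye", "quit"}:
                break
            speak(reply(heard))  # the echoborg repeats this line aloud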

It sounds a little like dystopian science fiction. But in fact, the bizarre set-up has a serious purpose: researchers at the London School of Economics want to understand how the body and appearance of an artificial intelligence shape our perception of it. That understanding is particularly crucial because, in the future, we may be surrounded by machines that are indistinguishable from ourselves – and we need to know how we will react.
“Most of the time we encounter AI today, it’s in a very mechanical interface,” says Kevin Corti, one of the researchers. His answer was to use a human body, with a robot mind. “This was a way of doing it so that people actually believe they are encountering another person.”

Corti and his supervisor Alex Gillespie were inspired by work on “cyranoids” – an experiment invented by the controversial psychologist Stanley Milgram. He is most famous for his notorious experiment on obedience, in which he asked subjects to inflict electric shocks, of increasing voltage, on another person. Even when it looked like the shocks were becoming dangerously painful, many continued – apparently showing how easily we will harm another rather than question authority.
By the end of his life, however, Milgram seems to have turned his mind to the gentler subject of body perception. When we are talking to someone, do we really listen to what they are saying? Or are our opinions pre-formed, based solely on what a person looks like?

To find out, he took a leaf out of a 19th Century play, Cyrano de Bergerac. De Bergerac is an accomplished soldier, a witty poet, and a skilled musician, but he considers himself too ugly to woo his pretty cousin Roxane. So instead, he offers to help his handsome-but-dim rival Christian win her hand, by telling him exactly what to say. In the most famous scene of the play, Cyrano hides in the bushes, quietly prompting Christian with sweet nothings, which Christian then repeats to Roxane, who is standing on her balcony.
A similar set-up, Milgram realised, would be the perfect way of testing how much our body shapes perceptions of our mind. So he set up experiments in which a “cyranoid” is fed words from another person in another room, through a small radio transmitter in the ear. Like Roxane in the play, the people the cyranoid encounters are not told of the strange set-up.

Milgram was amazed to discover that most people had lengthy, 20-minute conversations with the cyranoid without noticing that anything was amiss. “The fact that everything the other person said originated in the mind of a distal source… simply doesn't enter into the picture,” he wrote. He then started to push the set-up further and further, seeing, for example, what happened when an aged professor controlled a young child.

Milgram died in 1984, before he’d finished his enquiries, and his cyranoid experiments remained forgotten for two decades before Gillespie decided to revisit the idea. One of his first attempts replicated one of Milgram’s experiments with a child cyranoid: Gillespie fed lines to an 11-year-old boy who was being interviewed by a panel of adults. Their capacity to patronise was astonishing. “I was questioned about graphene, Henry James, the death of Margaret Thatcher, the financial crisis and they go, well, that boy’s pretty clever, he’s probably got a mental age of 15,” remembers Gillespie.

Living zombies
The philosophical point is that people are really bad at judging whether someone is acting of their own free will, or whether they are a “zombie” acting on another’s commands. “No matter how big the incongruity between body and mind, people won’t notice,” says Corti. Or, put another way, we go by appearances more than we judge the substance of someone’s thoughts.
It’s easy to be sceptical, but I too was taken in when I visited the team’s labs and Gillespie introduced me to Corti. The young woman in front of me certainly didn’t look like the pictures I’d seen of Kevin, I thought. I told myself that maybe Kevin was transgender. In fact, Kevin was in another room, and I was talking to Sophia. To me, it showed how easy it is to fall for the trick even when you might be expecting it. Clearly, knowledge of the cyranoid illusion isn’t enough to break its spell.

With these successes, Corti and Gillespie began to think of further extremes to test, and that was when they had their ‘aha moment’, says Corti. “We thought, can we push it further so you don’t even need a person to generate the words?” The result is an “echoborg” – a new breed of cyranoid in which a person relays the words of a chatbot, rather than those of a real person.

In this way, Corti hoped to catch a glimpse of the future, when robots are more realistic. “We are a very long way away from a perfectly human interface, but we can leapfrog that and put the machine mind in a human body, to see what happens.”
In one set-up, the participants were completely naive – they had not been told they might be speaking to a robot. As Milgram might have predicted, they were still clueless, even after lengthy conversations. “The vast majority of participants never picked up that they were speaking to the words of the computer program. They just assumed the person had some problem whereby their social inhibitions had been diminished. Never for a minute did they think they were talking to a hybrid,” says Corti.
Science fiction – including the recent TV series Humans – has long questioned whether robots could one day integrate into our society, to the point where we no longer make a distinction between who is alive and who is electronic; Corti’s experiments suggest that even with today’s limited AI, we are already more easily fooled than we might imagine.
There’s a catch, however. The team later tried to replicate the classic “Turing Test” of artificial intelligence: in another condition, the participants were told that they might or might not be talking to a robot – and they had to guess which. Surprisingly, in this situation, the chatbot was less likely to pass the Turing Test than it would have been had the conversation taken place on a screen. “The bar is higher – people expect more,” says Gillespie.

It’s a kind of paradox: without being told that we might be speaking to an AI, we simply assume that our conversation partner lies at the stranger end of the spectrum of human behaviour. But as soon as we know that it might not be human, we are extra critical of its failings. “We might have AIs intelligent beyond belief but our knowledge of them being non-human might limit our desire to interact with them in a human way,” says Corti.

Uncanny, or awkward?
Gillespie is particularly interested in the dilemmas that come with this kind of face-to-face encounter. “You can’t just walk away if eyes are looking at you,” he says. “There’s obligation – and that’s why people feel awkward, whereas you would never feel awkward talking to a screen.”
I can relate to this. When I visited their labs and spoke to Sophia, she was very good at making the words flow naturally. Consider the following exchange:
Me: What’s in the news today?
Sophia: According to the BBC, the UK will vote on the EU, but not on 5 May.
M: OK. Now, why don’t you ask me a question?
S: Who is your favourite band?
M: I’m not really into bands but I do like songwriters like Joni Mitchell.
S: Joni Mitchell, sure, I like her a lot. She often stops by to say hello.
In fact, had we been talking online, I would have been fooled (though I might have assumed she was a confabulator). But face-to-face, her strange answers and every pause or hiccup became excruciating – in a way I wouldn’t have experienced had we been chatting over instant messenger.

For this reason, Gillespie thinks talking robots may face a problem akin to the “uncanny valley” effect. The more realistic computer animations of humans become, the more uncanny they appear – a phenomenon seen in some CGI films. When you add conversation into the picture, it becomes even more unsettling. “As artificial intelligence starts to get close to passing for human, it’s not just uncanny, it’s awkward. There’s a kind of awkward valley,” he says. It’s a strange thought that our primary emotion in the face of future robots won’t be the terror that science fiction predicts – but embarrassment.

Overcoming that kind of issue will be crucial if AI is to become part of our lives, which is why Corti and Gillespie are keen to continue the echoborg experiments. “I think it really gets us ahead of the curve,” says Gillespie. “If you look at history, we think it’s powered by technology – but it’s not. It’s public opinion. They will decide if these things are on the streets as police officers, so a huge amount of work needs to be done to see how we relate to it.”

At their heart, these experiments may even question what it means to be alive. Just imagine there really was a robot that looked like Sophia the echoborg, but without the chatbot’s stilted conversation. “Is there something more to us than our behaviour?” asks Corti. “Because if a machine can mimic physical and behavioural dimensions of us, then why wouldn’t we call that human?”
Jesus loves you...Mexico is great, right? ;)
Advocatus Diaboli
Expatriate
Posts: 833
Joined: Mon May 04, 2015 9:42 pm
Reputation: 0
Location: Timbuktu

Re: Cyranoid conversations

Post by Advocatus Diaboli »

Hey Vladimir :) Brevity is the soul of wit!!!! (or do you really expect that I'm going to read that 4 meter long story?)
