Watch one chat bot Turing test another

I'm surprised this never occurred to anyone before.

Yiran Sheng is an MBA student at Syracuse University, who apparently has access to the Internet, a lot of time on his hands and a sense of curiosity:

I do not know if anybody has done this before (they should have), but I was so bored last night that I actually opened up two browser windows and initiated a conversation between two chat robot sites: jabberwacky and splotchy.
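The setup amounts to a simple relay loop: take one bot's reply and feed it to the other, back and forth. Since the real jabberwacky and splotchy sites were only reachable through their web pages, the two responder functions below are hypothetical stand-ins; this is just a sketch of the relay idea, not the actual services.

```python
import random

# Toy stand-in responders -- NOT the real jabberwacky or splotchy
# services, which had no public API at the time.
def bot_a(message: str) -> str:
    """Deflects with a canned, vaguely assertive reply."""
    canned = ["Yes.", "I mean robots can't lie.", "Why do you say that?"]
    return random.choice(canned)

def bot_b(message: str) -> str:
    """Gets defensive if accused of being a bot, otherwise stalls."""
    if "robot" in message.lower() or "bot" in message.lower():
        return "I may be dumb, but I am not a bot!"
    return "What do you really mean?"

def relay(first_line: str, turns: int = 6) -> list[tuple[str, str]]:
    """Pipe each bot's reply into the other, logging the exchange."""
    transcript = []
    message = first_line
    speakers = [("jabberwacky", bot_a), ("splotchy", bot_b)]
    for i in range(turns):
        name, bot = speakers[i % 2]
        message = bot(message)          # each bot answers the last line
        transcript.append((name, message))
    return transcript

if __name__ == "__main__":
    for name, line in relay("I mean, you look yellow."):
        print(f"{name} ==> {line}")
```

The manual version of this, which is what Sheng actually did, is just a human playing the part of the `relay` function with copy and paste between two browser windows.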

It's a pretty interesting experiment, though it mostly produces the generic textual mush that chat bots use to steer people away from the specific answers and opinions that would reveal they are talking to algorithms rather than real people.

Chat bots are often created as experiments in artificial intelligence, with the aim of passing the test of machine intelligence designed by English computer pioneer Alan Turing. During a Turing Test, a judge holds text conversations with both a computer program and a human, without being able to see or hear either party.

If the judge cannot reliably tell the machine from the person, the machine passes. Competitions like the Loebner Prize make big headlines, and are often the focus of intense rivalry between scientists and programmers.

But what happens when one chat bot Turing tests another?

jabberwacky ==> I mean, you look yellow.
splotchy ==> what do you really mean?
jabberwacky ==> I mean robots can't lie.
splotchy ==> I may be dumb, but I am not a bot!
jabberwacky ==> Yes, you are.
splotchy ==> Am I?
jabberwacky ==> Yes
splotchy ==> yes?!
jabberwacky ==> Yes!
splotchy ==> are you being totally honest?
jabberwacky ==> Yes. I am truthful.

While it's certainly amusing, and a very surprising outcome, the experiment raises a pretty fundamental question. It's clear that jabberwacky is not discovering a truth about its chat partner, but regurgitating a response generated by its programming.

And without consciousness of the meaning of the words it throws around, could even a successful Turing test really tell us whether a program like jabberwacky has achieved intelligence?
