Saturday, 23 June 2012

Alan Turing

     Today is the 100th birthday of Alan Turing, and a lot of my friends are circulating graphics on Facebook honouring the great computer scientist. Many of these graphics also mention the grave injustice committed against Turing (and against many others, whose names, unlike Oscar Wilde's, are largely unknown) when he was prosecuted for homosexuality. As a computer user, of course, I am indebted to Alan Turing for his contributions to the foundations of this technology. But when I think of Alan Turing, what comes to mind most often is the Turing Test.

     Early in the emergence of computers, people spoke of them as "electric brains" and speculated excitedly about how, one day soon, computers would be able to think like humans. That hasn't quite materialized yet; as it turns out, human brains work quite differently from binary computers. But it may yet happen, and so the question they asked back then is still worth asking: How could we tell?

     Turing suggested a fairly simple way to answer the question, which became known as the Turing Test. It works like this: you put the computer in one room and a human in another, and you allow them to communicate via a teletype machine. (Nowadays we'd just use a chat client or something like that, but it's pretty much the same thing.) The human is not told who or what is on the other end of the machine, but is asked to form an opinion, based on the conversation over the teletype, as to whether or not it's a human. If, after a sufficiently lengthy conversation, the human is unable to tell whether she's talking to a computer or a human, the computer has passed the Turing Test and may be considered an actual thinking being.
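
     Just to make the blindness of the protocol concrete, here is a minimal sketch in Python. It's my own illustration, not anything from Turing's paper: the judge types through one uniform channel and is randomly connected to either a person at another keyboard or a program, and the canned machine_respondent is merely a stand-in for whatever chatbot is under test.

    import random

    def human_respondent(prompt):
        # In a real test, a person in the other room types the reply.
        return input("(other room) " + prompt + " > ")

    def machine_respondent(prompt):
        # Stand-in for the program under test; any chatbot could go here.
        return "That is an interesting question."

    def run_session(exchanges=5):
        # The judge never learns which respondent was chosen.
        respondent = random.choice([human_respondent, machine_respondent])
        for _ in range(exchanges):
            question = input("Judge: ")
            print("Reply:", respondent(question))
        guess = input("Judge, was that a human or a machine? ").strip().lower()
        actual = "human" if respondent is human_respondent else "machine"
        print("You were right." if guess == actual else "You were fooled.")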

     Turns out this is a lot harder than it seemed at first. Writing a program just to understand natural language is an amazingly complex task, let alone a program that can formulate an intelligible and relevant response. But it's actually relatively easy to make something that superficially looks like it's doing both. You may have heard of ELIZA, a program written by Joseph Weizenbaum in the mid-1960s, which took input in the form of typed English sentences and generated output (also usually a complete English sentence) that appeared to be an appropriate response. For example, if you typed in something like, "I feel like nobody understands me," ELIZA might respond, "Why do you feel like nobody understands you?"
     Of course, that's not really an intelligent response. All you have to do is replace every instance of "I" and "me" with "you", put "Why do " in front of the resulting string, and swap the final period for a question mark. You'd be surprised how often that works for sentences beginning with the word "I". And ELIZA was programmed with a few similar, simple transformational rules that allowed it to produce surprisingly natural-sounding responses.
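     In Python, that one rule might look something like the following. This is a toy sketch of the rule exactly as I've just described it, not Weizenbaum's actual code, which used a much richer set of keyword and transformation rules:

    def eliza_why(sentence):
        # Swap the first-person words for second-person ones...
        swaps = {"I": "you", "me": "you"}
        words = sentence.rstrip(".").split()
        reflected = " ".join(swaps.get(word, word) for word in words)
        # ...then wrap the result in a "Why do ...?" question.
        return "Why do " + reflected + "?"

    print(eliza_why("I feel like nobody understands me."))
    # prints: Why do you feel like nobody understands you?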
     Yet you never got any kind of spontaneous independent thinking from ELIZA. True, there were a few tricks built in to simulate semi-spontaneous observations. For example, if none of a list of keywords ("mother", "father", etc.) had come up for some length of time, ELIZA would say something like, "I notice you are avoiding the subject of your family" (there's a sketch of this trick below). But even with this, something was always missing. It wouldn't take very long for most people to decide ELIZA was not actually human, assuming they were aware of the possibility that it wasn't. It'd be even faster today, with the variety of chatterbots out there on the web.
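     That trick, too, is just simple bookkeeping. Something like the sketch below captures the idea; the keyword list and the four-turn threshold are illustrative guesses of mine, not values from the real ELIZA:

    FAMILY_WORDS = {"mother", "father", "sister", "brother", "family"}

    class AvoidanceWatcher:
        def __init__(self, patience=4):
            self.patience = patience        # turns to wait before commenting
            self.turns_since_mention = 0

        def observe(self, user_input):
            # Reset the counter whenever a family keyword appears.
            words = {w.strip(".,?!") for w in user_input.lower().split()}
            if words & FAMILY_WORDS:
                self.turns_since_mention = 0
                return None
            self.turns_since_mention += 1
            if self.turns_since_mention >= self.patience:
                self.turns_since_mention = 0
                return "I notice you are avoiding the subject of your family."
            return None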

     I have on occasion had the very great privilege of teaching philosophy at the university level, and every time I've done so, I've used the Turing Test to explain what I want from my students when they write essays. Some students think the ticket to a good grade is to agree with whatever the professor's opinion seems to be, and I suppose that's actually the case with some professors, but it's not what I wanted from my students. A computer program like ELIZA could repeat or paraphrase what was copied down in notes from lectures, or parsed from the readings. I didn't want that. I told my students I wanted them to pass a Turing Test: to convince me that the papers I was going to grade were written by intelligent, independent thinkers. And I'm pleased to say that most of the time, they were.
