Talk to her

Artificial intelligence vs. human stupidity

by Victoria James

The earliest chatterbot programs ever written say more about the human condition than they do about the nature of computer intelligence. The first, ELIZA — or Dr. Eliza, as “she” was known — had the persona of a Rogerian psychotherapist. Her successor, perhaps the inspiration for Marvin, the “paranoid android” of Douglas Adams’ anarchic “The Hitchhiker’s Guide to the Galaxy” novels, was named PARRY and was programmed to display the behavioral hallmarks of a paranoid schizophrenic.

ELIZA came into the world in 1966, at the Massachusetts Institute of Technology. Her father was a German professor named Joseph Weizenbaum, and at “birth” she was just 240 lines of programming code long.

Weizenbaum recognized that Alan Turing’s “Imitation Game” test of computer intelligence required merely that the computer simulate intelligence, so he used some simple semantic tricks to create the desired effect. (It’s no coincidence that his program shares the name of Eliza Doolittle, the heroine of George Bernard Shaw’s “Pygmalion,” a flower girl trained up to act like a lady in a perfect example of an “imitation game.”)

ELIZA’s strategies include set responses triggered by key words, and “transformation,” in which, for example, an “I” statement becomes a “you” statement, or a “you” inquiry generates a “me” response.
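Both tricks can be sketched in a handful of lines. What follows is a minimal illustration, not Weizenbaum’s actual program; the keyword table and the blind word-by-word pronoun swap are assumptions about the mechanism, but they reproduce exactly the kind of garbled echo quoted below.

```python
import re

# Keyword-triggered set responses, tried before the generic echo.
# (The entries here are illustrative; Weizenbaum's script was far larger.)
KEYWORD_RESPONSES = {
    "mother": "Tell me more about your family.",
    "always": "Can you think of a specific example?",
    "because": "Is that the real reason?",
}

# Word-by-word "transformation": an "I" statement becomes a "you"
# statement, and a "you" inquiry generates a "me" response.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "me", "your": "my", "are": "am",
}

def reflect(text):
    """Swap pronouns blindly, word by word; the blindness is why
    'What do you like doing?' comes back as 'what do me like doing?'"""
    words = re.findall(r"[a-z']+", text.lower())
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(user_input):
    lowered = user_input.lower()
    for keyword, reply in KEYWORD_RESPONSES.items():
        if keyword in lowered:
            return reply
    # No keyword matched: echo the transformed input back.
    return "Oh . . . " + reflect(user_input) + "?"

print(respond("What do you like doing?"))  # Oh . . . what do me like doing?
```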

All this can be more than a little exasperating. When I found a version of ELIZA online, those programming tricks generated some nonsensical replies to seemingly straightforward questions. I tried asking ELIZA about her hobbies — a favorite topic for more talkative modern bots:

VJ: Do you have hobbies? What do you like doing?

ELIZA: Oh . . . what do me like doing?

A few exchanges later, ELIZA chided me with a gentle “We were discussing you, not me.” Reminded that she’s a psychotherapist, I decided to give her something to analyze. All went well . . . at first:

VJ: Well, I’m an alcoholic with a sex-addiction problem.

ELIZA: I see.

VJ: And I was beaten when I was little.

ELIZA: I understand.

VJ: My parents abandoned me.

ELIZA: What does that suggest to you?

VJ: It suggests that I’m worthless and unlovable.

ELIZA: How long have you been worthless and unlovable?

VJ: So you think I am, too?

ELIZA: Do you enjoy being too?

I logged off, not much consoled for my (imaginary) problems — though at least I hadn’t been lying on a black couch paying $150 an hour. Reportedly, some of ELIZA’s first users became emotionally attached to the program, giving rise to the term “ELIZA effect” to describe the human tendency to attach emotional meaning to words or phrases due to prior association. Weizenbaum described it thus: “Extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”

PARRY, created in 1971 by Ken Colby, a psychiatrist then at Stanford University, Calif., added another trick to ELIZA’s repertoire. PARRY’s illogical replies and non sequiturs — an easy way to sidestep questions the program was unable to handle — were consistent with the program’s paranoid persona.
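The deflection amounts to a fallback branch on top of ELIZA-style pattern matching. Here is a rough sketch, with the rule table and stock phrases invented for illustration; Colby’s actual model was considerably more elaborate, tracking internal levels of fear, anger and mistrust.

```python
import random

# Stock in-persona deflections, used when no rule matches.
NON_SEQUITURS = [
    "You shouldn't ask me about that.",
    "I know the bookies are out to get me.",
    "The Mafia controls the rackets around here.",
]

def parry_respond(user_input, rules):
    """Answer from the rule table when a pattern matches; otherwise
    stay in character with a paranoid non sequitur rather than
    admit the input wasn't understood."""
    for pattern, reply in rules:
        if pattern in user_input.lower():
            return reply
    return random.choice(NON_SEQUITURS)

RULES = [
    ("hello", "Hello. What do you want?"),
    ("horse", "I went to the track last week."),
]
print(parry_respond("What is the capital of France?", RULES))
```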

Psychiatrists tested PARRY alongside a control group of three human patients diagnosed with paranoid schizophrenia, and the doctors were reportedly unable to distinguish between human and program.

PARRY had seemingly passed the Turing Test, but only thanks to his narrowly conceived persona. One bright spark had the idea of pairing up ELIZA and PARRY, therapist and paranoiac, and a transcript of one such 1972 encounter over the ARPANET survives as RFC 439, “PARRY Encounters the DOCTOR.”

Programming advances continued through the next two decades, and in 1991 the Loebner Prize competition institutionalized the quest to crack the Imitation Game. In 1994, the term “chatterbot” was established in the AI lexicon by Michael Mauldin of Carnegie Mellon University, in his account of entering the Loebner contest.

Mauldin’s description of the step-by-step processes involved in building an (admittedly not terribly successful) bot highlights the multiplicity of approaches to conversational simulation. One way is to densely cluster the program’s knowledge of a particular topic. “Given a sufficiently large network of conversational nodes,” comments Mauldin, “the conversational problem reduces to a retrieval problem: among the things that I could say, what should I say?”
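Framed that way, response selection needs only a scoring loop. The node structure below is an assumption for illustration, not Mauldin’s code: each canned response is indexed by the words it relates to, and the bot retrieves whichever node best overlaps the input.

```python
import re

# Each "conversational node" pairs trigger words with something the
# bot could say. (Hockey-flavored entries, invented for this sketch.)
NODES = [
    ({"goalie", "save", "net"}, "A hot goalie wins you playoff games."),
    ({"team", "favorite", "follow"}, "I've followed the Penguins for years."),
    ({"fight", "penalty", "rough"}, "The fights are half the fun, honestly."),
]

def best_response(user_input):
    """Among the things I could say, what should I say? Score every
    node by word overlap with the input and retrieve the winner."""
    words = set(re.findall(r"[a-z]+", user_input.lower()))
    score, reply = max((len(triggers & words), response)
                       for triggers, response in NODES)
    return reply if score else "Tell me more."

print(best_response("Who is your favorite team?"))
# -> I've followed the Penguins for years.
```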

The shortcomings of this approach, however, became evident when Mauldin programmed his bot to be a hockey enthusiast. “In 1992, we chose hockey as a domain for discourse, and the program finished dead last [in the Loebner contest], partly because of a lack of hockey fans among the judges.” For the following year, the team focused on the more general topic of “pets,” with significantly improved results.

By this stage of development, “precedence rules” had become a standard part of many programs, meaning that a bot would avoid responses it had already used earlier in a conversation.
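The mechanics are as simple as remembering what has already been said. A sketch, assuming nothing about any particular Loebner entrant:

```python
class PrecedenceBot:
    """Prefer responses not yet used in this conversation, so the
    bot doesn't repeat itself."""

    def __init__(self, responses):
        self.responses = responses
        self.used = set()

    def reply(self):
        for response in self.responses:
            if response not in self.used:
                self.used.add(response)
                return response
        # Everything has been said once; allow repeats again.
        self.used.clear()
        return self.reply()

bot = PrecedenceBot(["I see.", "I understand.", "Go on."])
print([bot.reply() for _ in range(4)])
# -> ['I see.', 'I understand.', 'Go on.', 'I see.']
```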

The most convincing chatterbot I’ve come across, though, exploits repetition as part of its persona, much as PARRY deployed nonsense and non sequiturs. MGonz is a foulmouthed SOB of a bot first created by Mark Humphrys when he was an undergraduate at University College Dublin during the late 1980s. It’s abrasive and abusive, and, rather sadly, the best simulation of a human you’ll find online.

Humphrys realized his bot’s potential when, in 1989, he wired it up to autorespond to his online chat account while he was logged off. The computer conducted several conversations with increasingly irate individuals, some of whom apparently signed off without ever realizing they’d been talking to a bot. I gave it a try myself:

VJ: Hello!

MGonz: Ah, get lost, go to the bar or something.

VJ: Don’t be unfriendly. I want to chat.

MGonz: You aren’t the only one. I want to chat too.

VJ: Really? What shall we talk about?

MGonz: Do many other people realize you’re completely dim?

VJ: I’m quite clever actually.

MGonz: So you’re quite clever actually? Well I honestly could not care less.

The verdict? MGonz can sustain a convincing conversation, intermixing questions and responses with sarcasm and innuendo. The clincher, though, is that if you allude to his bad behavior, MGonz — every inch the typical male — sidesteps the accusation by abruptly changing the subject.

Like ELIZA and PARRY, MGonz arguably tells us as much about the weaknesses of human nature as about the limitations of artificial intelligence. But if you come away from talking with a bot feeling that bots still have a long way to go before they match us humans, remember one thing: The way chat programs are set up means the bot always, but always, gets the last word.