Unthinkable




 

One thing I learned in therapy is that I have significant issues with my mother. Another thing is that therapists have a collection of secret little therapist tricks to get you to talk about your mommy issues.

You lean in, they lean in. You make eye contact, they make eye contact. You stare at the ceiling, they cough until you look at them and/or apply the Heimlich maneuver.

So when my friend, the virtual-reality researcher and inventor Jacki Morie, introduced me to SimSensei, the clever software system behind a computer-created virtual therapist, I thought the NSA had hacked into some therapist’s secret little tricks file.

A lifelike image on a computer screen ran through all those little tricks better than most real therapists. I was careful not to look at the ceiling. I had no idea how to do the Heimlich on a Vizio 50-inch monitor.


“SimSensei,” Morie explained, “uses a webcam to read facial expressions and body language. The program also converts what you say into text to extract meaning, and it measures prosody, the rise and fall in voice pitch, to gauge emotional affect.” From that data the virtual therapist fashions a response. An unnerving response.
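In outline, that pipeline is simple to sketch. Here is a hypothetical, heavily simplified Python sketch of the idea Morie describes; every name and rule below is my own illustration, not SimSensei’s actual code:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """One multimodal snapshot of the user, per Morie's description."""
    facial_expression: str      # from the webcam, e.g. "gaze averted"
    body_language: str          # e.g. "leaning in"
    transcript: str             # speech converted to text
    pitch_contour: list[float]  # prosody: voice pitch samples over time

def estimate_affect(obs: Observation) -> str:
    """Toy stand-in for the emotion-estimation step."""
    # A falling pitch contour is read here as low affect -- a deliberate
    # oversimplification of real prosody analysis.
    if len(obs.pitch_contour) >= 2 and obs.pitch_contour[-1] < obs.pitch_contour[0]:
        return "subdued"
    return "neutral"

def fashion_response(obs: Observation) -> str:
    """Combine the channels into a therapist-style reply."""
    if estimate_affect(obs) == "subdued":
        return "You sound a little down. Tell me more about that."
    if obs.facial_expression == "gaze averted":
        return "Take your time. I'm listening."
    return "Go on."

# Example: a subdued-sounding user avoiding eye contact.
obs = Observation("gaze averted", "slumped", "I'm fine, really.", [220.0, 180.0])
print(fashion_response(obs))  # -> You sound a little down. Tell me more about that.
```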

Although its physical actions were startling, the virtual therapist’s verbal parrying had a familiar ring to it.

It sounded like Eliza.

No, not my mother. Eliza is the brainchild of Joseph Weizenbaum, a 1960s computer scientist and the first person to lose his job because of artificial intelligence. Only he wasn’t replaced by an artificially intelligent computer; he built the first one.

And then lived to regret it.

Weizenbaum designed Eliza to mimic the behavior of a Rogerian therapist, and even in the 21st century, after 50 years of living with technology has sucked most of the awe out of each new invention, it’s easy to forget this 1960s computer program isn’t human. (Google can point you to multiple versions of Eliza, some better than others, and you can see for yourself.) A conversation with Eliza might go something like this (Eliza’s responses are in capital letters):

Men are all alike

IN WHAT WAY?

They’re always bugging us about something or other

CAN YOU THINK OF A SPECIFIC EXAMPLE?

Well, my boyfriend made me come here

YOUR BOYFRIEND MADE YOU COME HERE?

He says I’m depressed much of the time

I AM SORRY TO HEAR YOU ARE DEPRESSED*

I’ve seen stoic adults and giggly 8th-grade girls pour their hearts out to Eliza, sharing secrets that could keep them out of politics forever.

But Eliza is a parlor trick, and Weizenbaum a shrewd illusionist. He wrote a program that parsed the user’s input into parts of speech, and then used a limited set of grammatical rules to construct responses. This was Weizenbaum’s attempt to debunk claims that computers could be intelligent.

For example, Eliza has rules for creating the past tenses of verbs, for selecting appropriate pronouns, and for converting statements into questions. It is a testament to Weizenbaum’s creativity and understanding of human nature that he was able to fool computer scientists and lay people alike.
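That bag of tricks is surprisingly small. Here is a minimal Python sketch of the pattern-match-and-reflect idea, assuming nothing about Weizenbaum’s original implementation (which was written in MAD-SLIP); the rules below are illustrative, not his actual script:

```python
import re

# Pronoun "reflections": the trick that turns "my boyfriend made me
# come here" into "YOUR BOYFRIEND MADE YOU COME HERE?"
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "i", "your": "my",
}

# A few keyword rules in the spirit of Weizenbaum's script. Each pairs
# a pattern with a response template; the numbered slots are filled
# with the reflected pieces of the user's statement.
RULES = [
    (re.compile(r"(.*)\bmade me\b(.*)", re.I), "{0} MADE YOU {1}?"),
    (re.compile(r"i am (.*)", re.I), "I AM SORRY TO HEAR YOU ARE {0}"),
    (re.compile(r".*\balike\b.*", re.I), "IN WHAT WAY?"),
    (re.compile(r".*\balways\b.*", re.I), "CAN YOU THINK OF A SPECIFIC EXAMPLE?"),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words, word by word."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(statement: str) -> str:
    """Return the first matching rule's response, or a stock nudge."""
    for pattern, template in RULES:
        match = pattern.match(statement.strip().rstrip("."))
        if match:
            pieces = [reflect(g).upper() for g in match.groups()]
            return template.format(*pieces)
    return "PLEASE GO ON."  # the catch-all when no rule fires

print(respond("Men are all alike"))
# -> IN WHAT WAY?
print(respond("Well, my boyfriend made me come here"))
# -> WELL, YOUR BOYFRIEND MADE YOU COME HERE?
print(respond("I am depressed much of the time"))
# -> I AM SORRY TO HEAR YOU ARE DEPRESSED MUCH OF THE TIME
```

A handful of such rules, cycled through in order, is all the “intelligence” the user ever sees.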

One group he fooled, however, frightened him.

Professional psychologists solicited his help in deploying Eliza as a therapeutic tool. Weizenbaum warned them that Eliza could do more harm than good in cases where patients needed expert care. When these professionals, whose comments were often sensationalized by the popular media, persisted in their efforts, Weizenbaum left his job in computer research.

He became an outspoken critic of artificial intelligence for the rest of his life.

“The danger is we tend to personify our computers,” Morie said, “and how much more are we going to personify them when they look and act like human beings? If someone had a total breakdown in front of SimSensei—somebody kills themselves because the virtual human didn’t say the right kind of thing—then what?”

Morie could be channeling Weizenbaum. In 1981 he told the Boston Globe, “The relevant issues are neither technological nor even mathematical; they are ethical. Since we do not now have ways of making computers wise, we ought not now give computers tasks that demand wisdom.”

The danger, it seems, is not in our stars but in ourselves. Artificial intelligence is scary and threatening in the way a horror movie is: by suspending our disbelief we give the movie—or the virtual therapist—the power to frighten us.

Human behavior, as the product of choice rather than calculation, is not mechanical. Choice is a uniquely human quality.

Just like issues with your mother.

*From Weizenbaum’s paper “ELIZA—A Computer Program for the Study of Natural Language Communication Between Man And Machine” http://web.stanford.edu/class/linguist238/p36-weizenabaum.pdf

 


 


Courtesy of Dr. Jacquelyn Ford Morie

Dr. Jacquelyn Ford (Jacki) Morie has built her career around advancing technology to create meaningful, healthful, and joy-giving experiences. Over the past 25 years she has developed multi-sensory techniques for virtual reality (VR) that can predictably elicit emotional responses from participants, including a scent collar that delivers custom scents to a participant in VR. She has worked extensively with smart avatars and was a principal investigator on the NSF-funded project Virtual Girl Guides, life-sized twin conversational avatars installed at the Boston Museum of Science from 2009 to 2012. With her company All These Worlds, LLC, she is using her VR techniques to expand the capabilities of 3D virtual worlds (social VR), providing stress relief for soldiers through online mindfulness classes, and is working on a NASA-funded virtual-world ecosystem for future Mars-bound astronauts who need support to endure long separations from family and Earth.

 

Joseph Weizenbaum was a German and American computer scientist and a professor emeritus at MIT. In 1966, while at MIT, he published a comparatively simple program called Eliza, named after the ingenue in George Bernard Shaw’s Pygmalion. It was capable of engaging humans in a conversation that bore a striking resemblance to one with an empathic psychologist. His influential 1976 book, Computer Power and Human Reason, displays his ambivalence toward computer technology and lays out his case: while artificial intelligence may be possible, we should never allow computers to make important decisions, because computers will always lack human qualities such as compassion and wisdom. In 1996, Weizenbaum moved to Berlin, and until his death he was chairman of the Scientific Council at the Institute of Electronic Business in Berlin.

 

Mind Doodle…

A single-character typo in the software aboard Mariner 1 caused the spacecraft’s computer to think the rocket was behaving erratically. It wasn’t. But when the computer corrected for the phantom behavior, Mariner 1 DID behave erratically—so much so that the range officer hit the DESTRUCT button. A single character. Anyone ready for an artificially-intelligent brain surgeon?



2 thoughts on “Unthinkable”

    • Jay Douglas (post author)

      Hi Nick…

      I’m glad you liked it.

      There are many versions of Eliza available on the internet these days. But, alas, most of them fall far short of Weizenbaum’s original.

      I had access to an improved, but very close to original, version of Eliza when I was a student at Berkeley. It was as freaky as described. Me being me, I tried to crash it with unexpected statements but it hung in there and kept steering me back toward “normal” conversation.

      It wasn’t human, I tells ya, it wasn’t human.

      –jay