Ashok Goel, a professor at Georgia Tech, made the news this week with the revelation that one of the TAs he used in his AI course was actually an "AI." Now, I have no reason to believe that Goel was trying to do something wrongheaded. I think he was just playing around. But the media love AI stories these days and have yet again led the public on a very wrongheaded journey about what AI is and what it can be.
Could a chatbot be an effective TA? It could certainly beat a rather uninterested TA. There are many real TAs who are not very good at, or very interested in, what they are being made to do. But that is not the important question. The right question is whether we could make a real AI program that performs this function as well as a very good person does. The news media want us to believe we are on our way to that. I have a different point of view. It is not that we can't do it. It is that we aren't actually trying to do it yet. To explain this, I need to provide some history.
The first of what are now called chatbots was called ELIZA and was written by Joseph Weizenbaum. This is well known. What I will now tell you is not so well known. Weizenbaum was a friend of mine. Ken Colby was a friend of mine as well. I know some things about all this that others may not know.
Colby and Weizenbaum were friends, or at least they started out as friends. Colby was a psychiatrist who became interested in whether computers could function as therapists. I was hired by Colby (in 1968) to help him at the point where he was building PARRY. PARRY was supposed to be a paranoid patient. Colby was interested in showing that he could fool therapists who would not know if they were talking with a real person or a computer. PARRY supposedly lived in a locked ward of a hospital, and it was Colby's idea to build an effective model of paranoia and prove it by fooling therapists. In essence, he was trying to pass the Turing test with PARRY.
He and Weizenbaum talked about these issues for years and at some point had a falling out. Weizenbaum wanted to show up Colby by building a program that could fool people at MIT into thinking it was a Rogerian therapist. He chose Rogerian therapists, in part, to make fun of Colby, but also because Rogerians don't usually say that much in response to what their patients tell them.
An example of an ELIZA dialogue that I found on the web:
How do you do. Please state your problem.
>> I spend too much time with computers.
Why do you mention computers ?
>> You are one.
Does it please you to believe I am one ?
>> Yes.
You seem to be quite positive.
>> I suppose I am.
How long have you been ?
Weizenbaum found that people wound up pouring their hearts out to ELIZA, and he was very upset by this. He wrote a book attacking AI, called Computer Power and Human Reason, to explain why he thought AI would never work. The irony is, of course, that Goel's program did no more than what ELIZA did in the 60's (possibly even less), but it is now worthy of articles in the Wall Street Journal and the Washington Post. Keyword analysis that enables responses previously written by people to be found and printed out is not AI. Weizenbaum didn't really think he was building a Rogerian therapist (or doing AI). He was having some fun. Colby was trying to model a paranoid because he was interested in whether he could do it. He did not think he was building a real (AI) paranoid. And I assume Goel does not think he is building a real AI TA. But the press thinks that, and the general public will soon think that, if the press keeps publishing articles about things like this.
This technology is over 50 years old, folks. Google uses keywords, as does Facebook, as does every chatbot. There is nothing new going on. But we all laughed at ELIZA. Now this same stuff is being taken seriously.
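To make concrete what I mean by keyword analysis and canned responses, here is a minimal sketch of the technique (my own toy illustration, not Weizenbaum's code and not Goel's): scan the input for a trigger word and hand back a sentence a person wrote in advance. The trigger words and responses below are invented for the example.

```python
import random

# Trigger words mapped to responses written in advance by a person.
CANNED_RESPONSES = {
    "computer": ["Why do you mention computers?", "Do machines worry you?"],
    "mother": ["Tell me more about your family."],
    "always": ["Can you think of a specific example?"],
}
DEFAULT_RESPONSES = ["Please go on.", "Tell me more."]

def reply(user_input: str) -> str:
    """Return a canned response triggered by a keyword; no understanding involved."""
    text = user_input.lower()
    for keyword, responses in CANNED_RESPONSES.items():
        if keyword in text:
            return random.choice(responses)
    return random.choice(DEFAULT_RESPONSES)

print(reply("I spend too much time with computers."))
# prints one of the pre-written "computer" responses, chosen at random
```

That is the whole trick: the program never represents what the user meant, it only looks up a match and prints back someone else's sentence.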
What is the real problem? People do not behave in any way that is remotely like what these "AIs" do. If you tell me about a personal problem you have, I do not respond by finding a sentence in my memory that matches something you said and then saying it without knowing what it means. I think about your problem. I think about whether I have any reasonable advice to give you. Or, I ask you more questions in order to better advise you. None of this depends upon keywords and canned sentences. When I do speak, I create a sentence that is very likely a sentence I have never uttered before. I am having new ideas and expressing them to you. You say your views back to me, and a conversation begins. What we are doing is exchanging thoughts, hypotheses, and solutions. We are not doing keyword matching.
It may be that you can make a computer that seems paranoid. Colby had a theory of paranoia which revolved around “flare” concepts like mafia, or gambling, or horses. (See his book Artificial Paranoia.) He was trying to understand both psychiatry and paranoia using an AI modeling perspective.
The artificial TA is not an attempt to understand TAs, I assume. But let's think about the idea that we might actually like to build an AI TA. What would we have to do in order to build one? We would first want to see what good teachers do when presented with the problems students are having. The Georgia Tech program apparently was focused on answering student questions about due dates or assignments. That probably is what TAs actually do, which makes the AI TA question a very uninteresting question. Of course, a TA can be simulated if the TA's job is basically robotic in the first place.
But what about creating a real AI mentor? How would we build such a thing? We would first need to study what kinds of help students seek. Then, we would have to understand how to conduct a conversation. This is not unlike the therapeutic conversation, where we try to find out what the student's actual problem is. What is the student failing to understand? When we try to help the student, we would have to have a model of how effective our help is being. Does the student seem to understand something that he or she didn't get a minute ago? A real mentor would be thinking about a better way to express his advice. More simply? More technically? A real mentor would be trying to understand whether simply telling the student answers made the best sense or whether a more Socratic dialogue made better sense. And a real TA (who cared) would be able to conduct that Socratic dialogue and improve over time. Any good AI TA would not be trying to fake a Rogerian dialogue but would be trying to figure out what the student was trying to learn and thinking about better ways to explain or to counsel the student.
Is this possible? Sure. We stopped working on this kind of thing because of the AI winter that followed from the exaggerated claims being made in 1984 about what expert systems could do.
We are in danger of AI disappearing again from overblown publicity about simplistic programs.
To put this all in better perspective, I want to examine a little of what Weizenbaum was writing in 1976:
He attacked me (but started off nicely anyhow):
Roger C. Schank, an exceptionally brilliant young representative of the modern school, bases his theory on the central idea that every natural language utterance is a manifestation, an encoding of an underlying conceptual structure. Understanding an utterance means encoding it into one’s own conceptual structure.
So far so good, he said nice things and represented me accurately. But then….
Schank does not believe that an individual's entire base of conceptions can be explicitly extricated from him. He believes only that there exists such a belief structure within each of us and that if it could be explicated, it could in principle be represented by his formalism….
There are two questions that must ultimately be confronted. First, are the conceptual bases that underlie linguistic understanding entirely formalizable, even in principle, as Schank suggests and as most workers in AI believe? Second, are there ideas that, as I suggested, "no machines will ever understand because they relate to objectives that are inappropriate for machines?" …
It may be possible, following Schank's procedures, to construct a conceptual structure that corresponds to the meaning of the sentence, "Will you come to dinner with me this evening?" But it is hard to see — and I know this is not an impossibility argument — how Schank-like schemes could possibly understand that same sentence to mean a shy young man's desperate longing for love.
I quoted parts of what Weizenbaum had to say because these were the kinds of questions people were thinking about in AI in 1976. Weizenbaum eventually became anti-AI, but I have always liked his "dinner" question. It is very right-headed, and it is the least we can ask of any AI-based TA or mentor. Can we build a program that understands what the student is feeling and what the student's real needs are, so that we can give good advice? Good teachers do that. Why should future online teaching be worse than what good teaching is like today without computers or AI?
Do we actually have to do all this in order to build AI?
Could we simply build an automated TA/mentor that did not do all that but still performed well enough to be useful?
These are important questions. Maybe Goel’s program did perform well enough to consider using it in MOOCs where there are thousands of students. I am not fundamentally interested in that question however.
Here is what I am interested in. Can we stop causing people to so misunderstand AI that every ELIZA-like program makes headlines and causes people to believe that the problems we were discussing in the 70’s have been solved?
The fundamental AI problems have not been solved because the money to work on them dried up in the mid-80s. There are businesses and venture capitalists today who think they are investing in AI, but really they are investing in something else. They are investing in superficial programs that really are ELIZA on steroids. Would it be too much to ask people to think about what people do when they engage in a conversation and build computer programs that could function as an effective model of human behavior? I hope we can get people with money to start investing in the real AI problem again. Until we do, I will find myself on the side of Weizenbaum when he was being critical of his users' reactions to ELIZA (for good reason). We should start working on real AI or stop saying that we are. There is nothing to be afraid of about AI, since hardly anyone is really working on it any more. Most "AI people" are just playing around with ELIZA again. It is sad, really.
Weizenbaum and Colby were brilliant men. They were both asking fundamental questions about the nature of mind and the nature of what we can and cannot replicate on a computer. These are important questions. But today, with IBM promoting something that is not much more than ELIZA, people are believing every word of it. We are in a situation where machine learning is not about learning at all, but about massive matching capabilities used to produce canned responses. The real questions are the same as ever. What does it mean to have a mind? How does intelligent behavior work? What is involved in constructing an answer to a question? What is involved in comprehending a sentence? How does human memory work? How can we produce a memory on a computer that changes what it thinks with every interaction and gets reminded of something it wants to think more about? How can we get a computer to do what I am doing now — thinking, wondering, remembering, and composing?
Those are AI questions. They are not questions about how we can fool people.