
Tuesday, July 19, 2016

5 questions about human intelligence that make it clear AI is far from being here yet

I have some questions for you:

  1. How many windows were in the house or apartment in which you lived when you were ten?
  2. Can you name all 50 states? (For Europeans: can you name every country in Europe?)
  3. What was served at your birthday party when you were 13?
  4. When you came back from your first trip abroad, how did you describe the experience to your friends?
  5. What was the most difficult interaction you ever had with a teacher and what did you learn from that experience?



Why am I asking these questions? The popular world has suddenly become obsessed with AI. Venture capitalists have become obsessed with funding AI companies. I thought it might be helpful if we discussed the I (as opposed to the AI) a little bit. You can’t really expect an AI to take over the world if it isn’t intelligent. Since the media are so concerned with this impending takeover, I thought I would take a shot at explaining some aspects of intelligence in humans, and the properties of human memory upon which it relies, that AI will have to emulate in order to be intelligent.

So, question 1, how does one answer it? Actually the answer is pretty simple. You need to take an imaginary walk around your dwelling and count the windows. I always used this example in my AI classes. Why? Because taking an imaginary walk around a house requires a visual memory. We can remember, typically imperfectly, what things looked like, and we can find the answer. There is nothing to look up. No data to search. No “deep learning” to be had. You simply have to look. But how simple is that? Can we create a computer that can walk around its own prior visual experiences? Possibly. But the computer would have to have remembered what it saw, not in terms of pixels but in conceptual terms. (“There was a green couch in the living room, I am pretty sure.”) So, memory is visual, but it is also reconstructive. We figure the couch had to have had an end table nearby, but we don’t remember it, so we imagine it and attempt to reconstruct it. People get into arguments with family members over this kind of stuff because our memories are imperfect and we reconstruct in idiosyncratic ways. An AI would need to be able to do that. (Fight with its siblings? Yes.)

I asked question 2 in my classes every year. (Former students, do you remember that? Maybe you do and maybe you don’t. Can you remember why I did it?) I did it because I was trying to explain the difference between recognition and recall memory. I can’t recall a student who could actually name all 50 states. (There may have been one or two.) Mostly they got 47 or 48. They usually left out Utah or Idaho or Arkansas. Why? When I pointed out the states they had missed, no one ever said: “I never heard of that state.” They knew the names of all 50 states, but in order to name them they didn’t search the web to find the list. Modern AI’s would have that list. (But modern AI’s aren’t really intelligent and don’t behave the way humans do. They can just search lists.) How do people do it? They “walk around” a map that they can visualize. They go down the East Coast and they go up the West Coast. They rarely miss any of those states. It is those darn middle states that cause all the trouble. Why? Because the maps that we have stored in memory are imperfect. Memory is very important in human intelligence, but we are kind of bad at it. Does this mean AI’s will beat humans at memory tasks? They might. They probably could name all the states, but they would do it very differently. They could win Jeopardy, but not by doing what people do when they recall information. Does this matter? Yes it does. I am getting to why.

Question 3. Why would anyone remember what one ate at a birthday party many years ago? You might not. Part of human memory is its ability to be selective about what it remembers. Not all experiences are equally important. We need to learn from the important ones and disregard the unimportant stuff. Can AI do that? Not that I know about. “Importance” implies that one has goals. These goals drive what we pay attention to and what experiences we dwell upon as we grow up. Oh, but modern AI’s don’t grow up. They just search, and store, and search some more. They don’t get wiser from each experience. And they don’t reconstruct. I have no idea what the food was at my 13th birthday party, but that was a big occasion in my world so I can guess. I really would guess badly, because the food was not the issue; the party was. (And, I have pictures, but only of the bread and the cake.) My memory helps me figure out answers, but it does not provide them. My memory is full of experiences that I have to re-interpret every time. That is what intelligence is based upon: faulty memory. So, modern AI can make better memories perhaps, but of what? Words in texts? My memory is based upon emotions. That was a big day for me. I remember cousin Joanie dancing. (Or was it my girlfriend Phyllis?) I remember my grandmothers kissing me. I remember my mother’s yellow dress. Memory is like that. (And since I am male it is not shocking that I remember the females, who always held (and still do hold) a fascination for me.)

Question 4. My first trip abroad, which lasted about a month, has maybe five salient memories. One was watching my mother do business in Austria and noticing that she had failed to notice something her competitors were doing that was hurting her. A second was driving around some of Eastern Europe by myself, a drive which included me passing a farmer in a wagon in Yugoslavia and feeling him hit my car with his horsewhip. (Maybe I wasn’t supposed to pass him.) A third was meeting a girl on the plane from Vienna to Tel Aviv simply because I asked her a question (in English of course) and she was ecstatic to find someone else on the plane to talk to. (Our relationship lasted all of two weeks, but I remember it.) A fourth (this was 1967) was seeing the Israelis already building settlements on the West Bank and wondering how exactly doing that would lead to peace. The fifth was my visit to Venice, where I was hosted by a cousin who tested my “American crudity” by asking me to eat spaghetti, assuming that I would do it wrong, and who was disappointed when I didn’t. (I grew up with a lot of Italians in Brooklyn.) Why am I telling you this? Because this trip lasted a month. I can remember a little more about it, but not a month’s worth of stuff. I remembered stuff that caused me to learn something important about business, about how to meet women, about international politics, and about things I still don't understand (e.g., the farmer with the whip). We learn from experience. Any serious AI program would have to do the same. Too bad what we mean by AI today isn’t even close to what I am talking about.

The last question is obvious. A good teacher makes you think. I had plenty of those. I also had one who hit me. I didn’t learn much from that except to stay away from her. As I write this, I am on the way to the 90th birthday party of my PhD thesis advisor (Jacob Mey.) All my interactions with him were difficult. From each one I came out wiser. I learned from being criticized and I learned from being told I was wrong. We argued. I learned. When AI programs do that, we will have AI. Until then, not so much.

Argumentation, goals, emotion, visualization, imagination, and reconstructive memory. Stop worrying about current AI programs. Or, start worrying about them. Because they sure aren’t doing those things.







Monday, July 11, 2016

Six things computers (and people) must do in order to be considered intelligent


We hear a lot about AI these days, most of it pretty silly. It seems to be all about answering questions by key word matching and finding ads based on search. To me, AI has always been a field at the centre of which was intelligence. Here, I will list 6 things that most intelligent people can do that no AI program can do. While Hawking and Gates are very afraid of AI, I am very afraid that no one is working on the right problems in AI any more.



1. People can make predictions about the outcome of actions


So, I could ask a person: What do you think will happen if we keep having elections for President when a large chunk of the population doesn’t like either candidate?

This would start a conversation about the current election. It might lead to an argument. It might lead to a solution. Type this into Google or Siri or Watson and see what you get. Hint: you get newspaper articles that match on some of the words.

Conversation is a hallmark of intelligence. Any AI system must be able to have a conversation about a complex topic. All the “deep learning” that is going on is not focusing on that very simple test of intelligence.


2. People build a conscious model of the processes in which they engage

Here is something someone might say: I keep hearing about global warming. Should I be fearful? Isn’t this just a threat for those who live in coastal areas? The climate has always been changing.

This question calls for someone who has a model of global warming both now and historically to respond to it. What “AI” could do that today? Who is even trying? (Hint: it is very hard.)


3. People find out for themselves what works and what doesn’t by experimenting from time to time.


Something someone might say: I wonder how likely it is that I would get a speeding ticket if I went 120 on I-95.


A reasonable response to this might be: Where on I-95? or Why would you want to take that risk? 

Find me the AI group that is working on helping people figure out how things will turn out if they try something new.

4. We are constantly evaluating things. We attempt to improve our ability to determine the value of something on many different dimensions

One might say: I think she is in love with me. How do I know for sure?

Typically people respond to such a sentence with stories of their own lives, of love that went right or went wrong. Find me a computer that tells you a story when you are worried about something in your own life. (I did work on this problem and still do. But you can be sure Microsoft’s AI group isn't working on it.)


5. People try to analyze and diagnose problems they have to determine their cause and possible remedies.


For example: My business has had flat earnings for two years now. Should I be worried? What can I do about it?

A normal person would try to find an expert to ask these questions of. I would like to have a computer expert to ask these questions of. Google responds with a four-year-old article from Atlantic magazine about buying a house.


I assure you that any natural language processing program that Google is working on would not fare much better here.

6. People can plan. They can do needs analysis, as well as acquire a conscious and subconscious understanding of what goals are satisfied by what plans

Example: I am thinking of moving. But I am wondering what will happen to my relationships with the people who live near me.

When computers have stories to tell, and can relate an experience or concern that a person has to something they know about, and can start a reasonable conversation about it, then we will have AI. I would not be afraid of that AI. I would welcome it. But, unfortunately, no one is working on this. Companies are constantly saying “AI” and building up expectations in people that will not be satisfied unless and until the so-called AI companies work on these six problems (and many more).

These six problems underlie intelligence, artificial or otherwise. Time to think about intelligence and not Markov Models that make search better.


To summarize: Intelligent people have memories. They augment those memories through daily experiences and human interactions. They don’t have knowledge stuffed into their memories; instead they learn through attempting to achieve goals they inherently have and finding that the plans they tried need to be adjusted. They get help in the form of stories from other humans, told just in time. When computers can do all this, we will have AI. Right now, we have a lot of marketing and hype.

Monday, June 27, 2016

attempting to understand Bob Dylan (just like all those big firms' Natural Language Processing programs claim they can do)







Suddenly, natural language processing (NLP) is back in the news. (Oddly, this is a term I made up around 1970 because I didn’t like the previous term: computational linguistics.) I should be very happy that a field in which I spent a lot of time is having a resurgence, but I am not. People say they are working on NLP, but they seem to universally misunderstand the problem. To explain the problem I will discuss the meaning of some Bob Dylan lyrics. (I chose these because IBM chose Bob Dylan to be in its Watson commercials, and Watson summarized his work as “love fades.”)

I have selected a verse from a few of what I consider to be his most popular songs:


Blowin' In The Wind (1963)

Yes, and how many times must a man look up
Before he can see the sky?
Yes, and how many ears must one man have
Before he can hear people cry?
Yes, and how many deaths will it take 'til he knows
That too many people have died?


What do those lyrics mean? To me, this is a song about people’s insensitivity to the plight of others. It was written when the Viet Nam War was just beginning, and Civil Rights protestors were getting killed.

What would modern day natural language programs be able to get out of this verse? That he says “yes” a lot? That some people need more ears?

Let’s look at another verse from another song:


A Hard Rain's A-Gonna Fall (1963)

Oh, who did you meet, my blue-eyed son?
Who did you meet, my darling young one?
I met a young child beside a dead pony
I met a white man who walked a black dog
I met a young woman whose body was burning
I met a young girl, she gave me a rainbow
I met one man who was wounded in love
I met another man who was wounded in hatred
And it's a hard, it's a hard, it's a hard, it's a hard
And it's a hard rain's a-gonna fall.

What is this about? To me it seems to be about the hard knocks of life, and it is making the prediction that things will be getting even worse. Current NLP programs would see this as being about people, I assume, and maybe rain. Would any modern NLP program be able to understand the metaphor about hard rain, or the gift of a rainbow? I doubt it. Yet understanding metaphor is critical to NLP, since metaphor is everywhere. (This food tastes like crap.)
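To make that point concrete, here is a minimal sketch (mine, not any real product's pipeline) of what a purely keyword-based reading of that verse amounts to: count the content words and report the most frequent ones. The stopword list and the code are illustrative assumptions only.

from collections import Counter
import re

# A toy bag-of-words "reading" of the verse: strip a few function words,
# count what is left, and report the top terms. This is roughly the level
# of analysis keyword matching gives you; the stopword list is an
# arbitrary choice made for this illustration.
STOPWORDS = {"i", "a", "the", "and", "it's", "who", "whose", "was",
             "in", "me", "she", "he", "one", "another"}

verse = """
I met a young child beside a dead pony
I met a white man who walked a black dog
I met a young woman whose body was burning
I met a young girl, she gave me a rainbow
I met one man who was wounded in love
I met another man who was wounded in hatred
And it's a hard, it's a hard, it's a hard, it's a hard
And it's a hard rain's a-gonna fall.
"""

words = re.findall(r"[a-z']+", verse.lower())
print(Counter(w for w in words if w not in STOPWORDS).most_common(6))
# Top terms come out as things like "met", "hard", "young", "man":
# word frequencies, with no trace of the metaphor the verse is about.

That is the whole analysis: it notices that "met" and "hard" dominate, but it has nothing to say about what a hard rain might stand for.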

Stanford offers an NLP course (via Coursera.) This is what they say about it:

This course covers a broad range of topics in natural language processing, including word and sentence tokenization, text classification and sentiment analysis, spelling correction, information extraction, parsing, meaning extraction, and question answering, We will also introduce the underlying theory from probability, statistics, and machine learning that are crucial for the field, and cover fundamental algorithms like n-gram language modeling, naive bayes and maxent classifiers, sequence models like Hidden Markov Models, probabilistic dependency and constituent parsing, and vector-space models of meaning.

So, using a lot of math, you can figure out that a gift of a rainbow is about helping someone appreciate the beauty around them? I guess a Hidden Markov Model would do that for you.

Here are more lyrics from another song:

The Times They Are A-Changin’ (1964)

Come writers and critics
Who prophesize with your pen
And keep your eyes wide
The chance won't come again
And don't speak too soon
For the wheel's still in spin
And there's no tellin' who
That it's namin'
For the loser now
Will be later to win
For the times they are a-changin’.

Was Dylan speaking out against the Viet Nam War here? It seems to me he was asking the media to stop reporting on the war as a wonderful glory for the U.S. and to start speaking up about its horrors. How did I figure that out? I read it, thought about it, and recalled its context. Nothing miraculous. (But imagine any of these NLP programs doing that!) To understand, you need to be thinking about what something means. Would your typical modern day NLP program think this was about prophecy, or losing?


Maggie's Farm (1965)

I ain't gonna work for Maggie's pa no more
No, I ain't gonna work for Maggie's pa no more
Well, he puts his cigar
Out in your face just for kicks
His bedroom window
It is made out of bricks
The National Guard stands around his door
Ah, I ain't gonna work for Maggie's pa no more.

This is a hard one to understand, even for a person. I saw it as a song about dropping out of the system. Here is what Wikipedia says about it:

The song, essentially a protest song against protest folk, represents Dylan's transition from a folk singer who sought authenticity in traditional song-forms and activist politics to an innovative stylist whose self-exploration made him a cultural muse for a generation.

On the other hand, this biographical context provides only one of many lenses through which to interpret the text. While some may see "Maggie's Farm" as a repudiation of the protest-song tradition associated with folk music, it can also (ironically) be seen as itself a deeply political protest song. We are told, for example, that the "National Guard" stands around the farm door, and that Maggie's mother talks of "Man and God and Law." The "farm" that Dylan sings of can in this case easily represent racism, state oppression and capitalist exploitation.

How would Microsoft’s NLP group get their programs to understand this? Here is what they say about themselves:

The Redmond-based Natural Language Processing group is focused on developing efficient algorithms to process texts and to make their information accessible to computer applications. Since text can contain information at many different granularities, from simple word or token-based representations, to rich hierarchical syntactic representations, to high-level logical representations across document collections, the group seeks to work at the right level of analysis for the application concerned.

In other words, since this isn’t a document, it is unlikely that Microsoft could do anything with “Maggie’s Farm” at all. Or, maybe my own ability to process language is off and they would get that the “farm” referred to the state’s exploitation of its own people.

Let’s try another:

Rainy Day Women #12 & 35 (1966)

Well, they'll stone ya when you're trying to be so good
They'll stone ya just a-like they said they would
They'll stone ya when you're tryin' to go home
Then they'll stone ya when you're there all alone
But I would not feel so all alone
Everybody must get stoned.


I have always liked this song because it says two different things at the same time. To me, it says that if you try to do anything at all, someone will always be trying to stop you. It also says drugs are a good solution for dealing with all this.

Maybe Google knows how to deal with this kind of thing. Here is what Google says about their NLP work:

Natural Language Processing (NLP) research at Google focuses on algorithms that apply at scale, across languages, and across domains. Our systems are used in numerous ways across Google, impacting user experience in search, mobile, apps, ads, translate and more.
Our work spans the range of traditional NLP tasks, with general-purpose syntax and semantic algorithms underpinning more specialized systems. We are particularly interested in algorithms that scale well and can be run efficiently in a highly distributed environment.

Our syntactic systems predict part-of-speech tags for each word in a given sentence, as well as morphological features such as gender and number. They also label relationships between words, such as subject, object, modification, and others. We focus on efficient algorithms that leverage large amounts of unlabeled data, and recently have incorporated neural net technology.

On the semantic side, we identify entities in free text, label them with types (such as person, location, or organization), cluster mentions of those entities within and across documents (coreference resolution), and resolve the entities to the Knowledge Graph.

Recent work has focused on incorporating multiple sources of knowledge and information to aid with analysis of text, as well as applying frame semantics at the noun phrase, sentence, and document level.

So, they would probably get the second stoned reference, but the idea that people will try to prevent anything you might do for no good reason would be lost on Google.

Finally, one more song to contemplate:

The Boxer (1970)

  I'm just a poor boy
Though my story's seldom told
I have squandered my resistance
For a pocketful of mumbles
Such are promises, all lies and jest
Still a man hears what he wants to hear
And disregards the rest.


I have always liked this song a great deal. But, I cannot tell you what it is about from looking at these lyrics. Here is the rest of it:

When I left my home and family
I was no more than a boy
In the company of strangers
In the quiet of the railway station
Running scared, laying low
Seeking out the poorer quarters
Where the ragged people go
Looking for the places only they would know.

Asking only workman's wages
I come looking for a job
But I get no offers
Just a come-on from the whores on Seventh Avenue
I do declare
There were times when I was so lonesome
I took some comfort there.

Then I'm laying out my winter clothes
And wishing I was gone, going home
Where the New York City winters aren't bleeding me
Leading me
Going home.

In the clearing stands a boxer
And a fighter by his trade
And he carries the reminders
Of every glove that laid him down
And cut him till he cried out
In his anger and his shame
"I am leaving, I am leaving"
But the fighter still remains.

Seeing the entire song makes it seem to me like a song about hope. But when you Google it you find out Dylan was very interested in boxing and that Paul Simon wrote this song as a “dig against Dylan”.

Well, who knows? I don’t really care what these songs mean. But, oddly, I can’t listen to them without taking meaning from them. A song resonates because you get something out of it that stays with you. It may not teach you anything. You may not learn anything from it. But you understand it as best you can nevertheless. To understand means to figure out what words mean in a context and what ideas they are trying to convey. Notice that “ideas” are never mentioned in the write-ups I have quoted above. Google is not trying to figure out what ideas are being expressed, but they do expect humans and computers to “merge” sometime soon (which would mean people had suddenly become a lot dumber).

The hype about NLP these days is about Siri or other imitators that haven’t a clue what you just said but can respond with some words that may or may not be relevant to you.

It would be nice if all these research firms with piles of money to spend would work on the real NLP problem, which is figuring out how humans understand what is said to them and then automatically alter their memories accordingly. When we listen to someone talk, we attempt to discern what ideas they are trying to convey and then we grow in some small way from having participated in the conversation. To put this another way, NLP is really about learning and memory, as I said 35 years ago. Too bad that nowadays we only care about selling better ads to people or answering questions about where they can find a restaurant.

The times they are a changing.

Monday, June 20, 2016

I don't care about Odysseus, Mr Kelly, and neither did Jimmy Cagney

I like old movies. The other day I was watching a Jimmy Cagney movie when my mind went to one of my fixations: education. What is the connection between Cagney and education? Something personal.

I attended Stuyvesant High School, which was (and is) a school for smart science-oriented kids, one you need to pass a test to get into. I should have liked Stuyvesant, I suppose, but I am sorry to say I didn’t. I was reminded of one of the reasons I didn’t by watching Jimmy Cagney. Jimmy Cagney and I had the same English teacher. (Oh come on, Roger, you are not that old.)

His name was Mr Kelly and he had taught at Stuyvesant High School all his life. Jimmy Cagney was born in 1899, so let’s assume he went to high school in 1917. I started Stuyvesant in 1962. So Mr Kelly had to have been there for 45 years, I suppose, and indeed he was. Was Jimmy a science superstar? No. Stuyvesant was a local school for the Lower East Side of New York back in those days. Mr. Kelly used to brag about what he told Jimmy or what Jimmy had said. Jimmy was his most famous student (and his students, of course, included many of New York’s best and brightest over the years).

I remember this about Mr Kelly, in part, because he tended to say it a lot. What else do I remember about Mr. Kelly’s English class? I remember he used to sit in the back of the room and in a booming voice say “Why did Odysseus…” followed by whatever the action was. When I typed “Why did Odysseus” into Google, these questions came up:

Why did Odysseus leave Ithaca?
Why did Odysseus go to fight in Troy?

Now, as an adult I have spent a great deal of time in Greece. I have been to Ithaca and Troy. And, I can tell you that I simply have no idea why Odysseus did anything or why Mr Kelly, or more accurately the New York City school system, wanted me to know. And, moreover I don’t care.

Now, I realize that intellectuals like to claim that knowledge about the Ancient Greeks is important. I am, at least in theory, an intellectual, and I still don’t care.

Now imagine how many of our students care.

Why do we insist on teaching things that kids don’t care about and have no reason to care about?

Is this a very clever way to behave? How do students who don’t care manage to get by? Is their future made more difficult by not caring about such stuff? 

I argue that it is. I got by despite not caring about this kind of thing. Most of the school population does not get by with this attitude, and so, although there is no reason to know anything about Odysseus, most kids are punished severely for not knowing, because they can’t pass tests and get good grades and get into college. It is time to re-think what we do in high school. Some kids can survive it. Many cannot.

I am sure that someone somewhere now wants to lecture me on what I missed out on and why I should care about Odysseus. But I care about other things, like computers and Artificial Intelligence and how the mind works, none of which were taught at Stuyvesant High School at the time, and I managed to get by just fine.


Can we please let kids choose to learn what it interests them to learn? 

Monday, June 6, 2016

A little IBM Watson irony

Last year, IBM asked me if they could produce an “art visual” with a quote of mine on it. In light of IBM’s complete disregard of the implications of the quote that they selected with respect to their claims for Watson, I thought it would be fun to show the visual here:



Since the print is kind of small, here is what it says:


"number crunching can only get you so far. Intelligence, artificial or otherwise, requires knowing why things happen, what emotions they stir up, and being able to predict possible consequences of actions"

Tuesday, May 31, 2016

Is IBM trying to kill off AI research by misusing the word "cognitive?"

Welcome to the Cognitive Era, says IBM’s advertising. I have been trying to figure out what that could mean. If you look inside IBM’s site you find they are proud of Cognitive Health and Cognitive Cooking, to take two examples of the many claims they make. (I was wondering what Cognitive Elder Care might be.) I have trouble knowing what these terms mean because I know what the word cognitive means, and therefore I am finding what IBM is saying incomprehensible.

Let’s start with a brief history of the word cognitive. The field of Cognitive Psychology began in the late 60’s. Until that time, oddly enough, “how the mind works” was not a subject studied in psychology. A journal with that name started about then, and I published an article in its third volume, in 1972.

In 1977, I helped start the field of Cognitive Science in an attempt to join together people from disciplines other than psychology, all of whom cared about how the mind works. “Cognitive” meant: human thinking. When I started a company (1981) and called it Cognitive Systems, I was trying to say that the programs we built were modelled on human thinking. Around that time, John Searle visited my lab for a week, and wrote a somewhat nasty article featuring the Chinese Room problem that I assume was meant to be an attack on me. He was attacking what was referred to then as the strong AI hypothesis, which said that if a computer could do smart things, then it was thinking. This was never my position, but Searle talked more to my students during that week than he did to me, so I guess he thought I believed in the Strong AI hypothesis. I do not.

I think that the human mind does many things, and I want to know how it does them, and I want to build computer programs that operate in the same way. I am interested more in people than in machines, but I think that if we copied people on a computer we could have some machines that behave intelligently. I don’t actually think the machines themselves would know what they were doing or actually would be intelligent. I used to tell my AI classes that I was a “fleshist.” If a person said something, I would think that that person was thinking, but if a machine said the same thing, I wouldn’t think that. Others disagree with me on this, but I have never been an advocate of the strong AI hypothesis.

Why am I saying all this now? I am trying to understand what IBM could possibly mean when it uses the word cognitive and announces that we are now in the “cognitive era”. Do they think that Watson is actually thinking? I certainly hope not.

Do they think that Watson is imitating how people think in some way? I can’t believe that they think that either. No one has ever proposed that machines that can search millions of pages of text are smart. Matching key words, no matter how well you do it, is not even a human capability much less one that underlies the human ability to think. 


When AI started, there were some major people associated with it, whom, of course, I knew well. Marvin Minsky was interested in people first, machines second. Allen Newell was interested in people first and machines second. Herb Simon wanted to copy chess grandmasters, rather than build chess-playing machines that won by being fast at search. Even John McCarthy, with whom I never agreed about anything, was trying to copy how the mind worked. I once asked him “how can you believe that the mind happens to work using a logic system invented in the 19th century?” (McCarthy thought all knowledge representation could be done using Predicate Calculus.)

That phrase, knowledge representation, is the right thing to think about. It is the cornerstone of what AI was always all about. We need to represent knowledge in some way before we can effectively use it in a computer program. AI people have always worried about knowledge representation.

But this idea seems to have disappeared in recent AI work and does not exist at all in Watson. Now AI people worry about how many pages of text they can search and how to match key words and phrases. (Take a look at what IBM says that Watson does in natural language processing and you will only hear about phrase matching.)

Back to Cognitive Health. I am very interested in getting computers to be able to be helpful in health care. Do I think that they can be helpful by searching millions of pages of text? Probably. 

But there are real questions about what can be done to help people using AI. I, for one, have many questions I would like to ask about drugs and health issues as I age, and I find that asking a doctor isn't always helpful, because not all doctors know the answers, and asking a computer is sometimes helpful if it can match what you asked to some text that it happens to have. As I write this, I have a question about a drug I am taking that no text I can find can actually answer. I was able to find an expert at a major hospital to ask this question, and he told me that his father was taking it, so he certainly thought it was safe. But my question was more subtle than that, in part because it is a new drug and often little is really known about new (and highly promoted) drugs. I really have no one to ask.


Would I like a computer to be able to answer these questions? Of course. That is what AI was supposed to be all about. We always wanted to get computers to be really helpful using everyday English backed by a great deal of knowledge of a given domain. But if IBM keeps claiming it has solved Cognitive Health, I am wondering how many people who might want to think up new ways to represent knowledge about how the body works and how drugs work might stop working on what they care about and simply assume that IBM owns the turf and that there is no reason to try and compete with them. IBM is not trying to solve the problem I care about, which is getting access to easily comprehensible knowledge about problems everyday people actually have. A lot of that knowledge isn’t in any computer in the first place, or is buried in academic journals, so all the key word search in the world really will not help the average person much.

As for Cognitive Cooking, one of my PhD students in the 80’s wrote a program called CHEF that reasoned from prior cases in order to invent new recipes using the ingredients you happened to have on hand. I am sure CHEF was better than the program that IBM is selling, because it was based on case-based reasoning and not on matching key words.
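For readers who have never seen case-based reasoning, here is a minimal sketch of the retrieve-and-adapt idea behind that style of program. The tiny case library, the substitution table, and the code itself are invented purely for illustration; they are not CHEF's recipes or its actual mechanism.

# A toy retrieve-and-adapt loop: pick the stored recipe whose ingredients
# best overlap with what is on hand, then substitute for anything missing.
# Cases and substitutions here are made up purely for illustration.
CASES = {
    "beef stir fry": {"beef", "broccoli", "soy sauce", "garlic"},
    "chicken curry": {"chicken", "onion", "curry paste", "coconut milk"},
    "tomato pasta": {"pasta", "tomato", "garlic", "basil"},
}
SUBSTITUTIONS = {"beef": "tofu", "basil": "parsley", "coconut milk": "yogurt"}

def retrieve(on_hand):
    # Retrieval: the prior case sharing the most ingredients with the pantry.
    return max(CASES, key=lambda name: len(CASES[name] & on_hand))

def adapt(case_name, on_hand):
    # Adaptation: keep what we have, swap in known substitutes for the rest.
    return {ing if ing in on_hand else SUBSTITUTIONS.get(ing, ing)
            for ing in CASES[case_name]}

pantry = {"tofu", "broccoli", "garlic", "soy sauce"}
best = retrieve(pantry)
print(best, "->", adapt(best, pantry))
# beef stir fry -> {'tofu', 'broccoli', 'garlic', 'soy sauce'} (order may vary)

The point of the contrast is that such a program reasons from a remembered episode and modifies it, rather than matching key words against a pile of text.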


IBM really has to stop saying Cognitive about everything it is trying to sell. It is hurting our future because it is very likely to serve as a deterrent to more research on knowledge representation, real natural language processing, and case-based reasoning. These are important problems. They have not been solved, and IBM needs to stop asserting that they are by claiming Watson to be “cognitive” when it actually does no thinking at all.

Sunday, May 15, 2016

should we fear AI or just fear the people who write about AI?

Ashok Goel, a professor at Georgia Tech, made the news this week with the revelation that one of the TA’s that he used in his AI course was actually an “AI.” Now, I have no reason to believe that Goel was trying to do something wrongheaded. I think he was just playing around. But the media love AI stories these days and have yet again led the public on a very wrongheaded journey about what AI is and what it can be.

Could a chatbot be an effective TA? It could certainly beat a rather disinterested TA. There are many real TA’s who are not really very good at, or very interested in, what they are being made to do. But that is not an important question. The right question is whether we could make a real AI program that could perform this function as well as a very good person does. The news media want us to believe we are on our way to that. I have a different point of view. It is not that we can’t do it. It is that we aren’t actually trying to do that yet. To explain this, I need to provide some history.

The first of what are now called chatbots was called ELIZA and was written by Joseph Weizenbaum. This is well known. What I will now tell you is not so well known. Weizenbaum was a friend of mine. Ken Colby was a friend of mine as well. I know some things about all this that others may not know.

Colby and Weizenbaum were friends, at least they started out as friends. Colby was a psychiatrist who became interested in whether computers could function as therapists. I was hired by Colby (in 1968) to help him at the point where he was building PARRY. PARRY was supposed to be a paranoid patient. Colby was interested in showing that he could fool therapists who would not know if they were talking with a real person or a computer. PARRY supposedly lived in a locked ward of a hospital and it was Colby’s idea to build an effective model of paranoia and prove it was effective by fooling therapists. In essence, he was trying to pass the Turing test with PARRY.

He and Weizenbaum talked about these issues for years and at some point had a falling out. Weizenbaum wanted to show up Colby by building a program that could fool people at MIT into thinking it was a Rogerian therapist. He chose a Rogerian therapist, in part, to make fun of Colby, but also because Rogerians don’t usually say that much in response to what their patients tell them.

An example of ELIZA that I found on the web is:

How do you do. Please state your problem.
>> I spend too much time with computers.
Why do you mention computers ?
>> You are one.
Does it please you to believe I am one ?
>> Yes.
You seem to be quite positive.
>> I suppose I am.
How long have you been ?


Weizenbaum found that people wound up pouring their hearts out to ELIZA, and he was very upset by this. He wrote a book attacking AI, called Computer Power and Human Reason, to explain why he thought AI would never work. The irony is, of course, that Goel’s program did no more than what ELIZA did in the 60’s (possibly even less), but it is now worthy of articles in the Wall Street Journal and the Washington Post. Key word analysis that enables responses previously written by people to be found and printed out is not AI. Weizenbaum didn’t really think he was building a Rogerian therapist (or doing AI). He was having some fun. Colby was trying to model a paranoid because he was interested in whether he could do it. He did not think he was building a real (AI) paranoid. And, I assume, Goel does not think he is building a real AI TA. But the press thinks that, and the general public will soon think that, if the press keeps publishing articles about things like this.

This technology is over 50 years old, folks. Google uses key words, as does Facebook, as does every chatbot. There is nothing new going on. But we all laughed at ELIZA. Now this same stuff is being taken seriously.
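For anyone who has never looked under ELIZA's hood, here is a minimal sketch of the keyword-and-canned-response technique being described. The handful of rules below are toy examples of the general idea, written for this post; they are not Weizenbaum's actual script.

import random
import re

# A toy ELIZA-style responder: scan the input for a trigger pattern and
# emit a canned (or template-filled) reply. Nothing here "understands"
# anything; these rules are invented solely to illustrate the technique.
RULES = [
    (r"\bmy (\w+)\b", ["Tell me more about your {0}.", "Why does your {0} concern you?"]),
    (r"\bcomputers?\b", ["Why do you mention computers?", "Do machines worry you?"]),
    (r"\bi am ([\w ]+)", ["How long have you been {0}?"]),
]
DEFAULT = ["Please go on.", "I see.", "What does that suggest to you?"]

def respond(utterance):
    for pattern, templates in RULES:
        match = re.search(pattern, utterance.lower())
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(DEFAULT)  # no keyword matched: generic filler

print(respond("I spend too much time with computers."))
print(respond("I am lonely."))

A larger rule set in the same spirit is all that the dialogue quoted above requires, which is exactly why Weizenbaum was so troubled that people poured their hearts out to it.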

What is the real problem? People do not behave in any way that is anything remotely like what these “AI’s” do. If you tell me about a personal problem you have, I do not respond by finding a sentence in my memory that matches something you said and then saying it without knowing what it means. I think about your problem. I think about whether I have any reasonable advice to give you. Or, I ask you more questions in order to better advise you. None of this depends upon key words and canned sentences. When I do speak, I create a sentence that is very likely a sentence I have never uttered before. I am having new ideas and expressing them to you. You say your views back to me, and a conversation begins. What we are doing is exchanging thoughts, hypotheses, and solutions. We are not doing key word matching.

It may be that you can make a computer that seems paranoid. Colby had a theory of paranoia which revolved around “flare” concepts like mafia, or gambling, or horses. (See his book Artificial Paranoia.) He was trying to understand both psychiatry and paranoia using an AI modeling perspective.

The artificial TA is not an attempt to understand TA’s, I assume. But, let’s think about the idea that we might actually like to build an AI TA. What would we have to do in order to build one? We would first want to see what good teachers do when presented with the problems students are having. The Georgia Tech program apparently was focused on answering student questions about due dates or assignments. That probably is what TA’s actually do, which makes the AI TA question a very uninteresting one. Of course, a TA can be simulated if the TA’s job is basically robotic in the first place.

But, what about creating a real AI mentor? How would we build such a thing? We would first need to study what kinds of help students seek. Then, we would have to understand how to conduct a conversation. This is not unlike the therapeutic conversation, where we try to find out what the student’s actual problem was. What was the student failing to understand? When we try to help the student, we would have to have a model of how effective our help was being. Does the student seem to understand something that he or she didn't get a minute ago? A real mentor would be thinking about a better way to express his advice. More simply? More technically? A real mentor would be trying to understand whether simply telling answers to the student made the best sense or whether a more Socratic dialogue made better sense. And a real TA (who cared) would be able to conduct that Socratic dialogue and improve over time. Any good AI TA would not be trying to fake a Rogerian dialogue, but would be thinking how to figure out what the student was trying to learn and thinking about better ways to explain or to counsel the student.

Is this possible? Sure. We stopped working on this kind of thing because of the AI winter that followed from the exaggerated claims being made about what expert systems could do in 1984.

We are in danger of AI disappearing again from overblown publicity about simplistic programs.

To put this all in better perspective, I want to examine a little of what Weizenbaum was writing in 1976:

He attacked me (but started off nicely anyhow):


Roger C. Schank, an exceptionally brilliant young representative of the modern school, bases his theory on the central idea that every natural language utterance is a manifestation, an encoding of an underlying conceptual structure. Understanding an utterance means encoding it into one’s own conceptual structure.

So far so good, he said nice things and represented me accurately. But then….

Schank does not believe that an individual’s entire base of conceptions can be explicitly extricated from him. He believes only that there exists such a belief structure within each of us and that if it could be explicated, it could in principle be represented by his formalism….

There are two questions that must ultimately be confronted. First, are the conceptual bases that underlie linguistic understanding entirely formalizable, even in principle, as Schank suggests and as most workers in AI believe? Second, are there ideas that, as I suggested, “no machines will ever understand because they relate to objectives that are inappropriate for machines?” ……

It may be possible, following Schank’s procedures, to construct a conceptual structure that corresponds to the meaning of the sentence, “will you come to dinner with me this evening?” But it is hard to see — and I know this is not an impossibility argument — how Schank-like schemes could possibly understand that same sentence to mean a shy young man’s desperate longing for love.

I quoted parts of what Weizenbaum had to say because these were the kinds of questions people were thinking about in 1976 in AI. Weizenbaum eventually became anti-AI, but I always like his “dinner” question. It is very right-headed and it is the least we can ask of any AI-based TA or mentor. Can we build a program that understands what the student is feeling and what the student’s real needs are, so that we can give good advice? Good teachers do that. Why should future online teaching be worse than what good teaching is like today without computers or AI?

Do we actually have to do all this in order to build AI?

Could we simply build an automated TA/ mentor that did not do all that but still performed well enough to be useful?

These are important questions. Maybe Goel’s program did perform well enough to consider using it in MOOCs where there are thousands of students. I am not fundamentally interested in that question however.

Here is what I am interested in. Can we stop causing people to so misunderstand AI that every ELIZA-like program makes headlines and causes people to believe that the problems we were discussing in the 70’s have been solved?

The fundamental AI problems have not been solved because the money to work on them dried up in the mid-80s. There are businesses and venture capitalists today who think they are investing in AI, but really they are investing in something else. They are investing in superficial programs that really are ELIZA on steroids. Would it be too much to ask people to think about what people do when they engage in a conversation and build computer programs that could function as an effective model of human behavior? I hope we can get people with money to start investing in the real AI problem again. Until we do, I will find myself on the side of Weizenbaum when he was being critical of his users’ reactions to ELIZA (for good reason). We should start working on real AI or stop saying that we are. There is nothing to be afraid of about AI, since hardly anyone is really working on it any more. Most “AI people” are just playing around with ELIZA again. It is sad really.

Weizenbaum and Colby were brilliant men. They were both asking fundamental questions about the nature of mind and the nature of what we can and cannot replicate on a computer. These are important questions. But, today, with IBM promoting something that is not that much more than ELIZA, people are believing every word of it. We are in a situation where machine learning is not about learning at all, but about massive matching capabilities used to produce canned responses. The real questions are the same as ever. What does it mean to have a mind? How does intelligent behavior work? What is involved in constructing an answer to a question? What is involved in comprehending a sentence? How does human memory work? How can we produce a memory on a computer that changes what it thinks with every interaction and gets reminded of something it wants to think more about? How can we get a computer to do what I am doing now — thinking, wondering, remembering, and composing?


Those are AI questions. They are not questions about how we can fool people.