Sunday, January 17, 2016

IBM is at it again; more lies about Watson and AI

IBM is at it again. They are still pushing Watson and still being fraudulent about it. Let’s look at the commercial that aired this weekend: 

Watson: Ashley Bryant, a teacher of small children.

AB: That’s right. 

Watson: I have read it is the hardest job in the world.  

AB: That's why I am here.

Watson: I can offer advice from the accumulated knowledge of other educators.

AB: That’s wonderful.

Watson: I can tailor a curriculum for each student by cross referencing aptitude, development, and geography. 

AB: Sorry to interrupt but I just have one question: how do I keep them quiet? 

Watson: There is no known solution.


Every single line of this is nonsense, so let’s take it line by line:


Watson: Ashley Bryant, a teacher of small children.  

How would Watson know it was talking to this woman? Does Watson know her? What would it mean to know her? Did Watson happen to recognize her when she walked up to a computer? How would that have happened exactly? AI can do some of this, but recognizing a person you have never met is complicated, and at this point beyond the abilities of AI except in a very superficial way.

AB: That’s right. 

Watson: I have read it is the hardest job in the world.  


Oh, Watson has, has it? Where exactly did it read that? How did it choose to say that as opposed to anything else it might have read about being a teacher? How did it know that this might be a reasonable thing to say? Does it have a conversational model of small talk with strangers? Why didn't it say "teaching is something teachers do"? It probably read that too. Or "I read about someone who hated their teacher"? What mental model does Watson have that helps it select what to say from everything it has "read" that contains the word "teacher"?

When people meet a teacher in a new setting, is this something they are likely to say? Wouldn't that be a kind of condescending remark? Does Watson understand condescension? Does Watson understand intentions and goals in a conversation? NO. IBM is just pretending. IBM is making it all up. They are not working on how the mind works, how conversation works, or on AI really. They just like saying that that is what they are doing so they can sell Watson.


AB: That's why I am here.

Watson: I can offer advice from the accumulated knowledge of other educators.

Really? Then it should be able to offer advice from me. My knowledge about teaching includes knowing that when someone comes to talk with you, they typically have a reason for doing so, and that a good teacher asks what the person is thinking about or what their problems might be. Even in this fictional conversation, Watson makes clear that it has no idea how teaching or learning work at all. The right answer to "that's why I am here" would be to ask about her problem, not to make grandiose claims about things it can't do, namely matching what it has stored as text to her real problem. Asking the right question at the right time is one of the hallmarks of intelligence and of good teaching, and it is way beyond anything Watson can do.

AB: That’s wonderful.

Watson: I can tailor a curriculum for each student by cross referencing aptitude, development and geography. 

Oh, it can, can it? Curricula are actually very difficult to build, and doing so requires a sense of what a student might want to learn and the best ways to get them challenged and excited. Knowing the geographical placement of the student is sometimes relevant but hardly a major issue. Measurements of aptitude are tricky. Is Watson going to make a curriculum based on a student's SAT scores? Watson can probably provide more math problems to a student who got some wrong answers, but that is not AI and does not take intelligence to do. A good curriculum designer tries to figure out what is hard to comprehend and tries to make learning more fun, more engaging, more challenging, and more relevant to the individual goals of the students. Is that what Watson can do? Of course not. It can make absurd claims, however (or the people who wrote the commercial can).
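To see how little intelligence that kind of "tailoring" actually takes, here is a toy sketch in Python (my own illustration, with a made-up problem bank and made-up names, not anything IBM has shown): hand the student more problems on whatever topic she just got wrong. It is a table lookup, nothing more.

```python
# Hypothetical sketch: "personalized" remediation as a plain table lookup.
# The topics, problem bank, and function names are invented for illustration.

PROBLEM_BANK = {
    "fractions": ["1/2 + 1/3 = ?", "3/4 - 1/4 = ?"],
    "decimals": ["0.5 + 0.25 = ?", "1.2 * 3 = ?"],
}

def remediate(misses_by_topic):
    """Return more practice problems for every topic the student missed.
    No model of the student, no understanding of the math involved."""
    return {topic: PROBLEM_BANK.get(topic, [])
            for topic, misses in misses_by_topic.items() if misses > 0}

print(remediate({"fractions": 2, "decimals": 0}))
# {'fractions': ['1/2 + 1/3 = ?', '3/4 - 1/4 = ?']}
```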

I heard about a company that uses Watson to help in education. It takes Wikipedia pages (or any text) and turns them into tests. A marvelous innovation. So, Watson can take a bunch of words and make up test questions about them. That's what it can do in education. And, by the way, that doesn't require understanding what the questions are even about, which is good, because Watson understands nothing.
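To make that concrete, here is a toy version (my own sketch, not that company's actual product) of turning any text into "test questions": split it into sentences and blank out a longish word in each one. Nothing in it comprehends what the text is about.

```python
import random
import re

def make_questions(text, num_questions=3):
    """Turn sentences into fill-in-the-blank 'test questions' by blanking
    out a longish word in each one. No comprehension required."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    questions = []
    for sentence in sentences[:num_questions]:
        candidates = [w for w in sentence.split() if len(w) > 5 and w.isalpha()]
        if not candidates:
            continue
        answer = random.choice(candidates)
        questions.append((sentence.replace(answer, "_____", 1), answer))
    return questions

sample = ("Abraham Lincoln was the sixteenth president of the United States. "
          "He delivered the Gettysburg Address in 1863.")
for question, answer in make_questions(sample):
    print(question, "->", answer)
```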


AB: Sorry to interrupt but I just have one question: how do I keep them quiet? 

Watson: There is no known solution.

Why can’t Watson look up the word “quiet” and take words of wisdom from the accumulated knowledge of educators on the value of quiet?

Because this is a commercial, and like most commercials it is selling based on minimal truth. IBM should know better, and it should stop doing this. They are trying to convince the world that computers are smarter than they are, that AI has succeeded far better than it has, and that they, IBM, know a lot about AI. All they seem to know about AI is how to retrieve text and lie about it.
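For what it is worth, "retrieving text" in this sense is about as deep as the sketch below (my own toy example, with an invented list of quotes, not IBM's code): find passages that happen to contain the query word and hand them back. Nothing here knows what "quiet" means, what a classroom is, or why the teacher is asking.

```python
# Toy keyword retrieval: match on the query word, return whatever contains it.

ACCUMULATED_WISDOM = [
    "A quiet classroom is not always a learning classroom.",
    "Students learn best when they are engaged, not merely quiet.",
    "Routines and clear signals help settle a noisy class.",
]

def retrieve(query, passages):
    """Return passages that merely contain the query word."""
    return [p for p in passages if query.lower() in p.lower()]

print(retrieve("quiet", ACCUMULATED_WISDOM))
```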

Time to stop this crap, IBM.