
Sunday, January 17, 2016

IBM is at it again; more lies about Watson and AI

IBM is at it again. They are still pushing Watson and still being fraudulent about it. Let’s look at the commercial that aired this weekend: 

Watson: Ashley Bryant, a teacher of small children.

AB: That’s right. 

Watson: I have read it is the hardest job in the world.  

AB: That’s why I am here.

Watson: I can offer advice from the accumulated knowledge of other educators.

AB: That’s wonderful.

Watson: I can tailor a curriculum for each student by cross referencing aptitude, development, and geography. 

AB: Sorry to interrupt but I just have one question: how do I keep them quiet? 

Watson: There is no known solution.

Every single line of this is nonsense, so let’s take it line by line:

Watson: Ashley Bryant, a teacher of small children.  

How would Watson know it was talking to this woman? Does Watson know her? What would it mean to know her? Did Watson happen to recognize her when she walked up to a computer? How would that have happened exactly? AI can do some of this, but recognizing a person you have never met is complicated, and at this point beyond the abilities of AI except in a very superficial way.

AB: That’s right. 

Watson: I have read it is the hardest job in the world.  

Oh, Watson has, has it? Where exactly did it read that? How did it choose to say that as opposed to anything else it might have read about being a teacher? How did it know that this might be a reasonable thing to say? Does it have a conversational model of small talk with strangers? Why didn’t it say “teaching is something teachers do”? It probably read that too. Or “I read about someone who hated their teacher”? What mental model does Watson have that helps it select what to say from everything it has “read” that contains the word “teacher”?

When people meet teachers in a new setting, is this something that they are likely to say to them? Wouldn’t that be a kind of condescending remark? Does Watson understand condescension? Does Watson understand intentions and goals in a conversation? NO. IBM is just pretending. IBM is making it all up. They are not working on how the mind works, how conversation works, or on AI really. They just like saying that that is what they are doing so they can sell Watson.

AB: That’s why I am here.

Watson: I can offer advice from the accumulated knowledge of other educators.

Really? It should be able to offer advice from me then. I have knowledge about teaching that includes that when someone comes to talk with you, they typically have a reason for doing so, and that a good teacher asks what someone is thinking about or what their problems might be. Even in this fictional conversation, Watson makes clear that it has no idea how teaching or learning work at all. The right answer to “that’s why I am here” would be to ask about her problem, not to make grandiose claims about things it can’t do, namely match what it has stored as text to what her real problem is. Asking the right question at the right time is one of the hallmarks of intelligence and good teaching and is way beyond anything Watson can do.

AB: That’s wonderful.

Watson: I can tailor a curriculum for each student by cross referencing aptitude, development, and geography.

Oh, it can, can it? Curricula are actually very difficult to build, and doing so requires a sense of what a student might want to learn and the best ways to get them challenged and excited. Knowing the geographical placement of the student is sometimes relevant but hardly a major issue. Measurements of aptitude are tricky. Is Watson going to make a curriculum based on a student’s SAT scores? Watson probably can provide more math problems to a student who got some wrong answers. That is probably something Watson can do, but it is not AI and does not take intelligence to do. A good curriculum designer tries to figure out what is hard to comprehend and tries to make learning more fun, more engaging, more challenging, and more relevant to the individual goals of the students. Is that what Watson can do? Of course not. It can make absurd claims, however (or the people who wrote the commercial can).

I heard about a company that uses Watson to help in education. It takes Wikipedia pages (or any text) and turns them into tests. A marvelous innovation. So, Watson can take a bunch of words and make up test questions about them. That’s what it can do in education. And, by the way, that doesn’t require understanding what the questions are even about, which is good, because Watson understands nothing.
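Such a product is easy to sketch, and the sketch makes the point for me: everything below operates on word lengths and positions, never on meaning. This is my own toy illustration in Python, not the actual product, whose code I have never seen:

```python
import random

def make_cloze_questions(text, n=3, seed=0):
    """Turn any text into fill-in-the-blank 'test questions' by
    blanking out one longish word per sentence. No understanding
    of the content is involved at any point."""
    random.seed(seed)
    questions = []
    for sentence in text.split(". "):
        words = sentence.split()
        # "Important" words are just long alphabetic ones.
        candidates = [w for w in words if len(w) > 6 and w.isalpha()]
        if not candidates:
            continue
        answer = random.choice(candidates)
        blanked = " ".join("_____" if w == answer else w for w in words)
        questions.append((blanked, answer))
    return questions[:n]

sample = ("Photosynthesis converts sunlight into chemical energy. "
          "Chlorophyll absorbs light in the visible spectrum. "
          "The resulting glucose fuels cellular respiration.")
for question, answer in make_cloze_questions(sample):
    print(question, "->", answer)
```

Run it on any paragraph and it will happily produce “questions” about text it cannot comprehend, which is exactly the problem.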

AB: Sorry to interrupt but I just have one question: how do I keep them quiet? 

Watson: There is no known solution.

Why can’t Watson look up the word “quiet” and take words of wisdom from the accumulated knowledge of educators on the value of quiet?

Because this is a commercial, and like most commercials it is selling based on minimal truth. IBM should know better and it should stop doing this. They are trying to convince the world that computers are smarter than they are, that AI has succeeded far better than it has, and that they, IBM, know a lot about AI. All they seem to know about AI is how to retrieve text and lie about it.

Time to stop this crap, IBM.

Tuesday, December 22, 2015

Corporate training needs to re-think its model; no to courses and assessment, yes to experiences

In 1989, I witnessed corporate training for the first time. We had just started the Institute for The Learning Sciences, which was sponsored by Andersen Consulting (now Accenture). I visited their campus in St. Charles, Illinois, and saw many classrooms full of people who were mostly half asleep or in a daze from being talked at about corporate culture, and Andersen’s core values, and client needs, and so on.

But, they had spent a lot of money on me so they were willing to listen. Early on, they said we could re-do any course they offered there except one, which was the entry-level boot camp. So, that being the one that they thought was already perfect, that was the one I selected.

I saw immediately why they loved it. It simulated the experience of being on a week-long assignment with little or no sleep having to get technical work done with a new team in a role you did not know. That was what life was like at Andersen so they had effectively thrown their new hires into a sink or swim environment. I loved it. But, they were also trying to teach COBOL in that environment. The new hires had no actual job to accomplish other than to learn COBOL. I asked them why they thought learning COBOL under stressful conditions was likely to produce competent programmers.

I have had plenty of opportunity to look at corporate training since then, and my team and I have built quite a bit of it. So, now I want to say something easy to understand about corporate training: STOP IT.

Some things to stop doing:

1. delivering content

Once a company decides to pursue a global training program and reaches shared agreement on how to define and develop the initiative, the instructional design team can take the next step: deciding what content to offer and how to design instruction to deliver it.

Content cannot be delivered. If it could, you could hire FedEx to do it. Content is usually spoken about as if it were a physical object. It isn’t. In a book, the content is the same no matter whom it is delivered to. Socrates warned about this when the written word became commonplace, saying that the words can’t defend themselves. But when we talk with one another we say different things to different people, respond to what they ask about, and try again. Still content, I suppose, but now it can’t be delivered in that FedEx (or MOOC) sort of way.

What needs to be “delivered” are dialogues with individuals. As it is now, training is like trying to argue with the package the FedEx man delivered.

2. expecting people to remember what you tell them

Montaigne, the great French philosopher said of his teachers: “They never ceased to scream in one’s ears as if they were pouring something into a barrel and one’s task was only to repeat what one was told.”

But we keep talking at people. If your training includes talking, then it isn’t working. If it includes Power Point it isn’t working. If it includes a classroom, it isn’t working. It has always fascinated me that corporate training looks so much like school (with classrooms!) Did you learn a lot in school? Did you like sitting in a classroom? For those who think they did, I suggest taking any college exam that you took once before and seeing if you could pass it now. School is based on the concept of retention of information. But, oddly, we don’t retain any information if we don’t regularly use that information. And if we do regularly use it, we do not have to attempt to retain it. We can’t help but retain it.

3. creating courses

Now I need to be careful here because my customers think we create courses for them. But we don’t really. We create experiences. There is a difference.

What is the difference? I was once consulting with United Airlines and I got to fly one of their big simulators. I don’t know how to fly a plane and I didn’t learn from 20 minutes in the simulator. In fact, I crashed on my first landing attempt. Could I have taken a course and learned what to do and what not to do? I guess I could have. But a better choice would have been to keep on trying and have a mentor sitting next to me to advise on what I was doing right and what I was doing wrong. When I advised the Army about their tank simulators (which are great), I was advising them on how to arrange things so that important mistakes are more easily made. You don’t learn that much from getting things right. You need to practice experiences in which you are likely to fail. These are not courses. They are a very different approach to education. Try things out. Fail. Have someone around with whom you can discuss and get advice.

We often consult with companies about their courses. Here is what we usually find:

Many learners will go through an entire course when they will only do a few of the things taught in their actual jobs. One course for everyone is a waste of time and focus. Training people often have an obsession with complete coverage when such coverage is likely to be unimportant. Training should focus, first, on things people actually need to be able to do and, second, within those things, on the ones where they make mistakes.

There is often eLearning material. It regularly tries to cover far too much material in advance of when the learner actually needs to know it. Training departments create videos, MOOCs, and documents, and think this will help people learn. The critical skills the company wants to teach are lost in the details. More time and money wasted.

There are always assessment questions. Why? Because the training department wants to know if the students have “learned the material.” What the training department should really want to know is what the student can do that they couldn’t do before the training. They misunderstand assessment: measure abilities, not facts. There is often a misalignment between the business and the training department. The business wants to know if people can do things, while the training department may want things that are easy to measure.


It is the very concept of a course (and an assessment of facts delivered) that is the problem here. 

So, if we shouldn’t deliver content, or expect people to retain information or create courses, what should corporate training be?

Think about this:

When you went to summer camp, or boy scout camp, or anything similar, did you learn stuff? Did you think you were attending a course, or that content was being delivered? Or were you just doing stuff that was fun and learning new skills as you went along?

When you hang out with friends and decide to do a project together, or have an adventure together, do you learn stuff? You certainly learn about how to get along with each other, maybe how to work with each other, and maybe you learn to do things from one another. Is this a course? Is content being delivered?

Now let’s reconsider Accenture’s Boot Camp. Why did they have it? Because they needed people to understand how to work together under certain kinds of stressful conditions. This is what the Army does when it teaches new recruits how to be soldiers in simulated combat situations (when they aren’t marching and listening to speeches.)

When EPA coordinators realized they didn’t know how to run a public meeting, we built a simulated meeting for them to run. We didn’t build a course. When ISO realized it didn’t know how to train new people to be capable of dealing with ISO standards in a new country, we built a fictional country for them to operate in and multiple situations to deal with. When we were working on child development, we had people run a simulated day care center so they could deal with a variety of situations, problematic kids, and annoying parents. When we heard that cancer doctors didn’t know how to conduct difficult conversations with their patients, we created situations where they needed to advise colleagues who had just done it badly.

None of these are courses. None of these delivered content.

Corporate Training Departments need to adapt to the real world. Here is something I read the other day. At first glance, it looks entirely reasonable.

1. Guiding participants through data gathering with their team, structured assessments, learning, application, and reflection.

2. Ensuring accountability by forming a partnership between participants and their direct managers.

3. Integrating deliberate practice into the journey, making on-the-job application a structured and tracked element of the program.

4. Providing technology that makes it easy for both participants and managers to play along with the journey, track progress, and share realizations and insights.

5. Providing results so that business leaders can see how skills grow as participants move through the program.

What is this? This is part of an announcement by a company in the corporate training space saying what their wonderful new product will do. I noticed it because it was written by a former PhD student of mine. He is a smart guy and I am sure he didn’t get dumber since he worked for me. But the world of corporate training has gotten dumber. As I pointed out in an earlier Outrage, if we don’t say assessment, gamification, or social media in every sentence, no one is happy.

On the surface, what he wrote doesn’t seem that bad. He is trying to sell stuff that people probably want to buy, so I can see that the average reader would think he is offering good stuff. Let me explain why I don’t think that is the case.

The underlying theme is assessment driving learning. “Ensuring accountability,” “providing results to see how skills grow,” “tracking progress,” and “structured assessments” are what he is excited about. Now I doubt he really is excited about that. In fact, he co-authored a book with me on how to get rid of assessments in education. Here is a quote from that book:

School cannot be changed in any important way until fixed curricula are eliminated.  But fixed curricula will not be eliminated until we change the way we assess progress. Bear in mind that adults take courses in subjects for which they pay money — courses in photography, weight loss, yoga, home repair — there is no test at the end. There isn’t a need for a test because students are their own masters. They set their own standards and rather than having themselves be judged, they judge the teacher. 

Corporate Training has to stop doing what school does, namely looking to provide numbers so that some other part of the business can say that someone learned something. Give your employees opportunities to really learn to do things and then you can report on actual accomplishments. Remember the real reason schools give tests and grades. It is because the parents are obsessed with their kids getting into college and colleges do not want to look at each student as an individual because they have too many applicants. So they demand tests about the quadratic equation or physics formulas that simply don’t matter to the average student. Corporations need to be smarter than this. They can treat students as individuals and not as numbers. They don’t have to deal with choosing from tens of thousands of applicants. They just need to help the people they already have perform better. They have adopted the school model and they are suffering as a result.

Corporations can do this by providing employees with things they would really like to learn to do. Treat your people like grown-ups. You can measure grown-ups the way we have always measured each other. We promote employees we think do a good job. We are friends with people we think are fun to hang out with. We select as mentors people who always seem to give us good advice. Look at your people and help them do their jobs better. Mentor them. Stop assessing them.

We need to get rid of the corporate training mindset that has taken some of my best students and corrupted them. The system wants to make training boring and wants to constantly measure progress. How sad. This doesn’t work in school and it doesn’t work for business.

So, enough with our current corporate training model. Create situations that are like ones employees will need to deal with and have them try them out in simulations (live or on a computer) before they try them out for real. Create experiences, not courses.

Monday, November 30, 2015

key word analysis is not AI; one more time to get the point across to people building "conscious" computers

I am getting tired of talking about this, but there was yet another piece of stupidity published the other day.

As advancements in technology continue at an ever-increasing pace, will there ever come a day when we’ll be able to use science to cheat death? Australian startup company Humai seems to think so; it claims to be working on a way to transfer a person’s consciousness into an artificial body after they’ve died.
“We want to bring you back to life after you die,” says Humai CEO Josh Bocanegra on the company’s website. “We’re using artificial intelligence and nanotechnology to store data of conversational styles, behavioral patterns, thought processes and information about how your body functions from the inside-out. This data will be coded into multiple sensor technologies, which will be built into an artificial body with the brain of a deceased human. Using cloning technology, we will restore the brain as it matures.”

Really? OK, I am not even going to comment on this nonsense. This column is about key words.  I have had enough with claims about AI based on key word analysis, so I thought I would explain it once again, in a way that anyone outside of AI could understand.

Consider this: what does the proverb “a pig with two masters will soon starve” mean? While you are pondering that, I will mention two more proverbs to think about:

A stitch in time saves nine

You can lead a horse to water but you can’t make him drink

Understanding how we understand these proverbs will make clear why key word analysis isn’t going to lead to robot consciousness or discoveries in cancer or new Bob Dylan songs any time soon.

I have learned (because I actually ask people about these in job interviews) that many adults have no idea what these proverbs mean and can’t explain them at all. One reason is that they may never have heard them before, but that is the key word analysis answer: “I never heard it, so I can’t look it up and say what I found.”

In actuality, anyone who thinks hard can figure out what these mean. No computer can do that. But, remember that I am an AI person. I would like computers to be able to do this too, so I have thought quite a bit about it. Let me make clear what a person has to do in order to decipher the meaning of these proverbs. As I do this, think about how hard this would be for a computer to do.

Let’s start with the pig. English language proverbs are quite often said in farming metaphors (sailing is big too.) The first question is: why would a pig with two masters soon starve? It is a good question. Suppose it were a question on Jeopardy. Watson would lose. A smart person would win. Why? Because people who think don’t match key words. (They don’t ask themselves: where can I find a text where pig and starve are on the same page or how often are these words correlated?) What they do ask themselves is how having two masters would affect the pig. They also ask themselves other things, because sentences like this occur in actual contexts usually: 

Why is this guy talking to me about pigs? We weren’t discussing pigs.

(What is the guy who said this trying to say? We were talking about my life situation and now he is talking pigs. He must be making an analogy.)

Why would the pig starve? Well, who feeds the pig normally? Aha. Either of the masters might feed the pig. Well, what if each one thought the other was doing the feeding? Now, I get it. He is not talking about pigs at all. I have two bosses. He was telling me that neither may think they need to look out for me.

This is not rocket science. It is, in fact, everyday human thinking. But such thinking is way out of bounds for what AI can do today. Tomorrow maybe. But that tomorrow would require that the computer be able to have a conversation where one person’s goals were being discussed, where another person was giving advice that the other might follow, and where the advice was being given metaphorically using a well-known proverb. This is what thinking looks like. It is not what key word analysis looks like.
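For readers outside of AI, here is what key word analysis actually amounts to. This is my own toy sketch, not any vendor’s code, and the little corpus is invented for the example:

```python
from collections import Counter
from itertools import combinations

# A toy "corpus" standing in for the millions of pages a key word
# system would ingest. The texts are made up for this illustration.
corpus = [
    "the farmer fed the pig every morning",
    "the pig grew fat on scraps from the kitchen",
    "crops failed and the animals began to starve",
]

def cooccurrence_counts(docs):
    """Count how often each pair of words appears in the same
    document. Statistics like these are all a key word system has."""
    counts = Counter()
    for doc in docs:
        for pair in combinations(sorted(set(doc.split())), 2):
            counts[pair] += 1
    return counts

counts = cooccurrence_counts(corpus)
print(counts[("farmer", "pig")])  # 1: they co-occur in one document
print(counts[("pig", "starve")])  # 0: no text links them, so nothing to retrieve
```

Notice what the counts cannot contain: masters, feeding responsibilities, goals, or the analogy to having two bosses. The statistics are about words appearing near each other, and nothing more.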

To understand the advice given, one must have a goal and ask oneself questions about how what was said relates to that goal, and then figure out the answer. Could computers do that? I hope some day they can. Watson? Not so much. How about the above-mentioned conscious robot company? Give me a break. We barely know what consciousness is, although I am pretty sure it has something to do with the stuff I just put in italics above.

I will let my readers figure out the stitch-in-time proverb themselves. I also challenge the brilliant “AI” people at IBM to give Watson a shot at it. Please let me know how it did.

Let’s move on to the drinking horse. Why can’t you make him drink? Isn’t he thirsty? But of course this proverb isn’t about horses. It is typically about education. It means that you can teach people but they don’t necessarily learn. Let your key word analyzer figure that out. How do I know that this is what that proverb is about? Because life is full of situations in which we try to help somebody and they refuse the help that is being offered. They don’t agree, or they don’t care (or they aren’t thirsty). You need to figure this out if you have never heard the proverb before. So unless our key word analyzer has a key proverb analyzer too, it would be baffled by this. And if we did list the underlying meaning of every proverb in the English language, the program still wouldn’t understand, because the proverb is about goals and plans and the decisions we make, and about how to learn to think differently. This is exactly what we are not yet able to do in AI, much as I would like for us to be able to do it. The AI winter that started in 1984 killed all the work on that kind of AI. That is the consequence of making ridiculous claims about what AI can do.

I will end on a joke I like: you can lead a horse to water but a pencil must be lead. Watson: why is that funny? Let me know when Watson or our conscious computer has figured out the answer to that.

Thursday, November 19, 2015

The fraudulent claims made by IBM about Watson and AI. They are not doing "cognitive computing" no matter how many times they say they are.

I was chatting with an old friend yesterday and he reminded me of a conversation we had nearly 50 years ago. I had tried to explain to him what I did for a living, and he was trying to understand why getting computers to understand was more complicated than key word analysis. I explained about the concepts underlying sentences: that sentences used words, but that people really didn’t use words in their minds except to get to the underlying ideas, and that computers were having a hard time with that.

Fifty years later, key words are still dominating the thoughts of people who try to get computers to deal with language. But this time, the key word people have deceived the general public by making claims that this is thinking, that AI is here, and that, by the way, we should be very afraid, or very excited, I forget which.

We were making some good progress on getting computers to understand language but, in 1984, AI winter started. AI winter was a result of too many promises about things AI could do that it really could not do. (This was about promoting expert systems. Where are they now?). Funding dried up and real work on natural language processing died too.

But still people promote key words because Google and others use them to do “search”. Search is all well and good when we are counting words, which is what data analytics and machine learning are really all about. Of course, once you count words you can do all kinds of correlations, and users can learn about what words often connect to each other and make use of that information. But users have learned to accommodate to Google, not the other way around. We know what kinds of things we can type into Google and what we can’t, and we keep our searches to things that Google is likely to help with. We know we are looking for texts and not answers to start a conversation with an entity that knows what we really need to talk about. People learn from conversation and Google can’t have one. It can pretend to have one using Siri, but really those conversations tend to get tiresome when you are past asking about where to eat.
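To be fair to search, word counting really does work for retrieval, which is all it claims to do. A minimal sketch of the idea (my own toy example with invented documents, not Google’s actual pipeline):

```python
from collections import defaultdict

# A toy document collection; the texts are invented for the example.
documents = {
    1: "best pizza restaurants in chicago",
    2: "deep dish pizza recipe and history",
    3: "chicago weather forecast for the weekend",
}

# Inverted index: word -> ids of the documents containing it.
index = defaultdict(set)
for doc_id, text in documents.items():
    for word in text.split():
        index[word].add(doc_id)

def search(query):
    """Rank documents by how many query words they contain.
    Pure word overlap: nothing about the query is understood."""
    scores = defaultdict(int)
    for word in query.split():
        for doc_id in index[word]:
            scores[doc_id] += 1
    return sorted(scores, key=scores.get, reverse=True)

print(search("pizza chicago")[0])  # 1: the only document matching both words
```

This retrieves texts that share words with the query. It answers nothing and understands nothing, and for search, that is fine.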

But, I am not worried about Google. It works well enough for our needs.

What I am concerned about are the exaggerated claims being made by IBM about their Watson program. Recently they ran an ad featuring Bob Dylan which would have made me laugh had it not made me so angry. I will say it clearly: Watson is a fraud. I am not saying that it can’t crunch words, and there may well be value in that for some people. But the ads are fraudulent.

Here is something from Ad Week:

The computer brags it can read 800 million pages per second, identifying key themes in Dylan's work, like "time passes" and "love fades.”

Ann Rubin, IBM's vp of branded content and global creative, told Adweek that the commercials were needed to help people understand the new world of cognitive computing.

"We're focusing on the advertising here, but this is really more than an advertising campaign," Rubin said. "It's a point of view that IBM has, and it's going across all of our marketing, our internal communications, how we engage sellers and our employees. It's really across everything that we do.”

IBM says the latest series is meant to help a broader audience—companies, decision makers and software developers—better understand how Watson works. Unlike traditionally programmed computers, cognitive systems such as Watson understand, reason and learn. The company says industries such as banking, insurance, healthcare and retail can all benefit.

Rubin said Watson's abilities "outthink" human brains in areas where finding insights and connections can be difficult due to the abundance of data.

"You can outthink cancer, outthink risk, outthink doubt, outthink competitors if you embrace this idea of cognitive computing," she said.

Really? I am a child of the ’60s and I remember Dylan’s songs well enough. Ask anyone from that era who Bob Dylan was, and no one will tell you his main theme was that love fades. He was a protest singer, and a singer about the hard knocks of life. He was part of the anti-war movement. Love fades? That would be a dumb computer counting words. How would Watson see that many of Dylan’s songs were part of the anti-war movement? Does he say “anti-war” a lot? Probably he never said it in a song.

This is from this site: 

In our No. 1 Bob Dylan protest song, 'The Times They Are a-Changin,' Dylan went all out and combined the folk protest movement of the 1960's with the civil rights movement. The shorter verses piled upon one another in a powerful way, and lyrics like, "There’s a battle outside and it is ragin’ / It’ll soon shake your windows and rattle your walls / For the times they are a-changin'," are iconic Dylan statements that manage to transcend the times.

But he doesn’t mention Viet Nam or Civil Rights, so Watson wouldn’t know that he had anything to do with those issues. It is possible to talk about something and have the words themselves not be very telling. Background knowledge matters a lot. I asked a 20-something about Bob Dylan a few days ago and he had never heard of him. He didn’t know much about the ’60s. Neither does Watson. You can’t understand words if you don’t know their context.
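The point is easy to demonstrate. Here is a toy sketch of mine (not anything IBM has published) that extracts “themes” from the lyric quoted above the way a word counter would:

```python
from collections import Counter

# The Dylan lyric quoted above, as a word counter sees it.
lyric = ("there's a battle outside and it is ragin' "
         "it'll soon shake your windows and rattle your walls "
         "for the times they are a-changin'")

# A small made-up stopword list, of the kind such systems use.
stopwords = {"a", "and", "it", "is", "your", "for", "the",
             "they", "are", "there's", "it'll"}

words = [w for w in lyric.split() if w not in stopwords]
themes = Counter(words).most_common(3)

print(themes)               # just frequent surface words
print("anti-war" in words)  # False: the actual theme is never stated
```

Every “theme” it finds is a surface word. “Anti-war” never appears in the lyric, so no amount of counting, over 800 million pages or eight, will surface it.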

Suppose I told you that I heard a friend was buying a lot of sleeping pills and I was worried. Would Watson say, “I hear you are thinking about suicide”? Would Watson suggest we hurry over and talk to our friend about their problems? Of course not. People understand in context because they know about the world and real issues in people’s lives. They don’t count words.

Here is more from that site:

Saying that Bob Dylan is the father of folk music is probably overstepping a bit. However, saying that the vocalist is one of the most prominent writers of anti-war and protest songs in the 20th century is spot on, thus making him worthy of a Top 10 Bob Dylan Protest Songs list. The singer did change his range from anti-establishment to country to pop and back to folk again, and he remains a seminal force for those who rage against “The Man.”   

That was written by a human. How do I know? Because Watson can’t draw real conclusions by counting words in 800 million pages of text.

Of course, what upsets me most is not Watson but what IBM actually says. From the quote above:

Unlike traditionally programmed computers, cognitive systems such as Watson understand, reason and learn. 

Ann Rubin, IBM's vp of branded content and global creative, told Adweek that the commercials were needed to help people understand the new world of cognitive computing.

I wrote a book called The Cognitive Computer in 1984.

I started a company called Cognitive Systems in 1981. The things I was talking about then clearly have not been read by IBM (although they seem to like the words I used). Watson is not reasoning. You can only reason if you have goals, plans, and ways of attaining them, a comprehension of the beliefs that others may have, and a knowledge of past experiences to reason from. A point of view helps too. What is Watson’s view on ISIS, for example?

Dumb question? Actual thinking entities have a point of view about ISIS. Dogs don’t, but Watson isn’t as smart as a dog either. (The dog knows how to get my attention, for example.)

I invented a field called Case Based Reasoning in the 80’s which was meant to enable computers to compare new situations to old ones and then modify what the computer knew as a result. We were able to build some useful systems. And we learned a lot about human learning. Did I think we had created computers that were now going to outthink people or soon become conscious? Of course not. I thought we had begun to create computers that would be more useful to people. 

It would be nice if IBM would tone down the hype, let people know what Watson can actually do, and stop making up nonsense about love fading and outthinking cancer. IBM is simply lying now and they need to stop.

AI winter is coming soon.

Friday, November 6, 2015

Learning and Technology buzz words examined in order to enable massive peer to peer learning

I have just returned from another learning and technology conference where the people there used so many buzz words concerning the latest learning solutions that I am beginning to think madmen have taken over the field.

Here are some of my favorites with some comments:

Massification: this means that we are now dissatisfied with the idea that we can only stuff 1000 people into a classroom to hear a boring lecture, so now we want millions in a virtual classroom. Wow! A big improvement! Think of the money we can save/make.

Flipped Classrooms: since classrooms are such a good idea, we should make them in a new way that allows people to be bored by the lecture at some other time and then have a discussion about a lecture they didn’t care about (and probably didn’t watch) in the first place.

Social Construction of Knowledge: apparently we can't know anything without discussing it on social media. Then, we know what everyone thinks about it. This is perhaps why people post pictures of what they had for lunch: so they can make sure their knowledge of what they had for lunch was correct and the lunch actually happened.

E-assessment: this is very important because if we didn't assess everybody, how would we know that they know what we told them to know? (We don't really care if they can do anything with all this knowledge.) We must turn all materials into quizzes as efficiently as we can so that people can pass tests and then immediately forget what was on the test, knowing they will never need that stuff again. Do it online, and it is way better. I am not sure why. But I know we need to assess everything all the time and do it fast.

Learning Incentives: I hear that if you put really boring material in an animation then it becomes less boring. I hear that if you pay people to get good grades then they will learn more. I hear that if you offer people a good meal they will learn stuff whenever they are hungry.

Learning Analytics: This means that we can know how you learned, what you learned, and when you learned it. I am still trying to figure out when I learned that learning has become a moronic field. I have analyzed it but I don't have enough data.

Blended Learning: This means we will do the crap we always did, but some of it will be online.

Nano-learning: This means that no one wants to take a course any more, which is a good thing because courses are usually quite tedious, especially when they are full of "content." So, now people want to learn in small chunks. The next time I want someone to be my doctor I will inform him that I need to make sure that he learned to be a doctor through nano-learning. We will see how well that works out.

Content: This is stuff we need to put in online courses. It is the same stuff we had in the courses that weren't online before. It is usually massive amounts of text. Why reading text on a computer is better than reading it in a book is not something I can explain, but I am sure that content is king, and there must be a lot of it. We can ignore it as we always did. The idea that the computer allows you to do things rather than read things has apparently not been considered.

Learning Styles: everyone learns differently, or so I hear. So, that means there are some people who don't learn by trying something out, figuring out what they did well and what they did wrong, possibly getting help from others, and then trying again. I wonder who those people are.

Collaborative Learning: This is when people learn together rather than in a world where only they exist. In the world that I know, where other people do exist, all real learning is collaborative, so this shouldn't be worth mentioning. Even if you figure something out all by yourself, you will be telling someone, they will be reacting, and you will change your point of view slightly. When I said that all learning is a conversation, apparently what I meant to say is that it is collaborative. That is a very nice word.

Learning hasn’t changed. We all (including other mammals) learn in the same way, by trying things out, and hopefully getting one-on-one teaching from a parent or mentor when we are in trouble and need help. Learning has always been like that, and always will be like that. It is school that needs to change, not learning. Learning needs technology to the extent that it can transform school-based ideas into the way we always learned before there was technology and school. But, silly me, I thought that all these people who work on learning would know better than to copy school and then think they can improve learning by adding technology and cute buzz words. I was wrong.

Oh, I left out badges. We don’t need no stinkin’ badges.

Monday, October 19, 2015

Bob Dylan, IBM, Watson and the lies surrounding massive text processing

Bob Dylan must be in trouble. He has made a commercial for IBM. Actually I kind of like IBM. They were a leader in my world for a long time and did many good things. But, now they are hawking Watson. Fortune had this to say:

Bob Dylan gets tangled up in Big Blue

The folk rock icon stars in an IBM commercial that premiered this week in which he talks with the Watson supercomputer about music, love, and, of course, its knack for coming up with smart answers. IBM is trying to push Watson as a key technology service that could help put an end to the company's declining revenues.

(You can see the ad in the Fortune article.)

The problem is that Watson is a big lie. Yes, it can process text and discover that Dylan says the word "love" a great deal. But processing massive amounts of text and discovering statistical patterns is not the same as understanding the text, and Watson is certainly not learning anything more than which words show up together. IBM concocted one of those "conversations" between Dylan and Watson that sounds as if each understands what the other is saying. It is all an attempt to find a use for Watson in processing massive amounts of text. But, as I said in an earlier column, people don't really learn by reading, and computers certainly don't. Watson is not an AI machine no matter how often IBM says it is.
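For contrast, here is a toy sketch of the kind of surface statistic I mean (my illustration, not IBM's code; the sample text is a stand-in, not actual lyrics):

```python
# A toy word-frequency counter: it can tell you that "love" occurs often,
# but it has no model of intentions, goals, or meaning.
from collections import Counter

def word_frequencies(text):
    """Lowercase the text, strip surrounding punctuation, and tally word counts."""
    words = (w.strip(".,!?;:\"'()") for w in text.lower().split())
    return Counter(w for w in words if w)

lyrics = "love is all you need, love minus zero, love sick"  # stand-in text
freqs = word_frequencies(lyrics)
# freqs["love"] is 3; that count is all the processing buys you. Nothing here
# could be called understanding the lyric.
```

Scaling this up to 800 million pages changes the engineering, not the epistemology: the output is still word statistics, not comprehension.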

Friday, October 9, 2015

Stop school shootings. Get rid of school.

When people discuss school shootings, they discuss gun control and mental health issues. Curiously they leave out what I consider to be the primary problem: school.

Not everyone hates school. Not everyone is angry at schools that rejected them, told them they were losers, bored them, and so on. Those are not reasons to shoot anyone. Well, maybe they are for some people. Let's assume that you are a person with issues: not the happiest, best-looking, most loved, most admired, smartest, or most together kid in the school. What happens to such people?

they are made fun of
they are ostracized
they are made to feel stupid
they are ridiculed
they are told they can't do something they wanted to do
they don't make the team
they don't get into a club
they don’t get into the next school they may have wanted to attend.

All of school is a contest. It is a contest for who gets to go to Harvard, who is on student council, who is the teacher's pet, who is the best looking, who has the most friends, and so on. There are losers of these contests. In fact, most people who go to school feel like losers a lot of the time. There are prizes they didn't get, special trips they couldn't go on, bad grades that make them feel dumb, kids who won't be friends with them, guys who got the girl they wanted. Kids everywhere want to fit in, and there are usually other kids who want to keep them out. Teachers have kids they like better than other kids. It is only natural. So, there are kids who feel that their teacher doesn't like them. There are kids who have nice clothing and kids who are ridiculed because they aren't wearing the latest fashion. Kids are ridiculed because they don't agree with the majority on whatever is being discussed. Kids are ridiculed because they are different or weird, too smart, or too dumb.

In fact, school can be a nightmare for some kids — a torture chamber.

And we are surprised that there are so many school shootings. I am surprised there aren’t more.

You don’t have to attend a school to think that a school would be good to shoot up if you have mental problems, are lonely,  and have lots of anger. It is the obvious place to choose. It is where you were miserable. It is where your problems started. It doesn’t even have to be the school you attended.

But we insist on attributing school shootings to gun control and mental health issues. Of course, these are big issues, but I would just like one newscaster or commentator to point out that school is an awful place for many of the students there and that it makes them angry.

The solution? Shut school down. (Keep providing the daycare possibilities. With two parents working, kids staying at home isn’t all that likely any more.) 

How do we do this? It wouldn’t be that difficult. We live in the age of the internet. A kid could learn anything he or she wanted to learn easily enough. We would just need to build some interesting things for them to learn and do in virtual worlds that they find fascinating. They can work in virtual groups of other kids with online mentors. Then, if they were unhappy they could stop what they are doing and do something else. We need to stop having mandatory courses that everyone must take and let kids do what they want.

Who is at fault for school shootings? A government that hasn't a clue about school, boring courses, mean kids, competition, and the awful effect of it all on children.
