About 40 years ago I was having lunch with a Yale colleague (Bob Abelson — a famous social psychologist) who was also my closest friend at the time.
I complained to him that my wife couldn't cook steak rare. (I hate overcooked meat.) He replied that back in the '50s he was in the U.K. and he couldn't get his hair cut as short as he wanted it. (Crew cuts were in style in the U.S. then but not in the U.K.)
That’s all there is to know about learning.
Huh?
- learning starts with a conversation
- the first speaker has a problem and wants help thinking it through
- the listener relates his friend's problem to a problem of his own
- the link is through an explanation that the listener thinks might be the explanation both parties are seeking
- so he replies to a story with a story
Underneath all this are some simple truths:
- Learning starts with curiosity. (This is one reason that school is really not a good way to educate. If I need to be curious in order to learn, school would have to relate to something I am already curious about, but how could that happen with fixed curricula and many students in a class, each of whom is curious about different things?) School can try to make me curious. (But really, how many people are curious about algebra? Actually, I was one of those who was curious about algebra. Four years of being a math major convinced me to become curious about something else, in my case computers and human thinking.)
- Listening can only work if the listener is curious too. The listener may not be curious about what the speaker is curious about, but the speaker is trying to make the listener curious about something. If the speaker succeeds, the listener will search memory for something they have experienced, so that they can respond with a story of their own, satisfying the goals of both. How might we do that? (Oddly, modern AI doesn't even ask this kind of question.)
- Matching underlying goals and plans is a kind of pattern matching, but pattern matching in AI these days tends to be about words or pixels, not about ideas. It is hard for a computer to pattern match ideas, so when we talk about how computers can learn we must be very skeptical about the kinds of things they are matching. Bob was matching on “not getting what you want when it is easy enough to provide.” He had a goal and he couldn't achieve it. He needed to find an explanation of why something we were both asking for wasn't given to us. Human understanders know what they are trying to understand. Computers, not so much.
- Explanations are the basis of understanding. Bob was searching for an explanation. He constructed one by matching my story to his story. But what was he matching exactly? He was matching on the plans and goals held by the actors in the story and his own curiosity about what their points of view might have been. He unconsciously constructed an explanation: maybe the actors didn’t want to accede to the request because they thought that the request was too extreme.
- When we match our stories to the stories of others, we do so in order to learn from them. When we think about a story we have heard, we do so in order to construct an explanation of the events in the story. We can only do this by finding experiences we have had that relate to the experience being told to us. We pursue this path if we are curious about an explanation (typically because we think that explanation will help us understand something we were already curious about).
- But what do we match on? Certainly not words or pixels. We match on high-level abstractions like goals, plans, and intentionality. My goal was to eat the way I like. Bob's goal was to look the way he wanted. But at a higher level of abstraction, my goal was to get someone to do something for me, and so was his. So any explanation would have had to be about convincing other people to do what we wanted. That kind of goal (how to convince someone) was never actually discussed, but that is what we were both curious about, and such goals drive learning. (I sketch what this kind of matching might look like right after this list.)
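To make the contrast concrete, here is a minimal sketch in Python of what goal-based reminding might look like. Everything in it is hypothetical: the Story structure, goal labels like "get_other_to_comply", and the remind function are illustrations of the idea, not a claim about how human memory actually works. The point is only where the match happens: on the actor's goal and the outcome, never on the words.

```python
from dataclasses import dataclass

# Hypothetical toy representation: a story indexed by the abstract goal
# of its actor and the outcome, not by its surface words.
@dataclass
class Story:
    surface: str      # the words of the story
    actor_goal: str   # high-level abstraction, e.g. "get_other_to_comply"
    outcome: str      # what happened, e.g. "request_refused"

# A tiny episodic memory.
MEMORY = [
    Story("my wife won't cook steak rare", "get_other_to_comply", "request_refused"),
    Story("UK barbers wouldn't cut my hair short", "get_other_to_comply", "request_refused"),
    Story("I studied algebra for years", "acquire_skill", "succeeded"),
]

def remind(heard: Story, memory: list[Story]) -> list[Story]:
    """Return stored stories that share the heard story's goal and outcome.

    Note what is NOT compared: the words. Steak and haircuts match because
    both actors wanted someone to do something easy for them and were refused.
    """
    return [s for s in memory
            if s.actor_goal == heard.actor_goal
            and s.outcome == heard.outcome
            and s.surface != heard.surface]

heard = Story("my wife won't cook steak rare", "get_other_to_comply", "request_refused")
for s in remind(heard, MEMORY):
    print("That reminds me of a story:", s.surface)  # the haircut story, not algebra
```

Run it and the steak story retrieves the haircut story rather than the algebra one, even though the two share no vocabulary. A matcher keyed to words could never make that connection.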
Learning starts with curiosity. We seek explanations and use those explanations until we get confused again. Marvin Minsky once told me he loved being confused. He liked to think about difficult stuff.
To put this another way: if you are neither confused nor curious, you will not learn. This applies to every form of education and every form of AI.
Teachers: confuse your students. Don't give them explanations. F = ma explains nothing your average student is curious about. They already know that the harder you hit a ball, the farther it goes.
Curriculum designers: start with what your students are confused about. As I have said many times, learning happens when students want to learn, not when teachers want to teach.
AI people: you will never make computers intelligent by focusing on words, no matter how well you can count them or match them. Everything starts with goals and the ideas that underlie them. Dogs have goals but they don't have words. Amazingly, dogs can think intelligently about getting what they want. When modern AI can do what dogs do every day in order to achieve the real goals that they have, please let me know.