I am getting tired of talking about this, but there was yet another piece of stupidity published the other day.
As advancements in technology continue at an ever-increasing pace, will there ever come a day when we’ll be able to use science to cheat death? Australian startup company Humai seems to think so; it claims to be working on a way to transfer a person’s consciousness into an artificial body after they’ve died.
“We want to bring you back to life after you die,” says Humai CEO Josh Bocanegra on the company’s website. “We’re using artificial intelligence and nanotechnology to store data of conversational styles, behavioral patterns, thought processes and information about how your body functions from the inside-out. This data will be coded into multiple sensor technologies, which will be built into an artificial body with the brain of a deceased human. Using cloning technology, we will restore the brain as it matures.”
Really? OK, I am not even going to comment on this nonsense. This column is about key words. I have had enough of claims about AI based on key word analysis, so I thought I would explain it once again, in a way that anyone outside of AI could understand.
Consider this: what does the proverb “a pig with two masters will soon starve” mean? While you are pondering that, I will mention two more proverbs to think about:
A stitch in time saves nine
You can lead a horse to water but you can’t make him drink
Understanding how we understand these proverbs will make clear why key word analysis isn’t going to lead to robot consciousness or discoveries in cancer or new Bob Dylan songs any time soon.
I have learned (because I actually ask people about these in job interviews) that many adults have no idea what these proverbs mean and can’t explain them at all. One reason is that they may never have heard them before, but that is the key word analysis answer: “I never heard it, so I can’t look it up and say what I found.”
In actuality, anyone who thinks hard can figure out what these mean. No computer can do that. But, remember that I am an AI person. I would like computers to be able to do this too, so I have thought quite a bit about it. Let me make clear what a person has to do in order to decipher the meaning of these proverbs. As I do this, think about how hard this would be for a computer to do.
Let’s start with the pig. English language proverbs are quite often said in farming metaphors (sailing is big too). The first question is: why would a pig with two masters soon starve? It is a good question. Suppose it were a question on Jeopardy. Watson would lose. A smart person would win. Why? Because people who think don’t match key words. (They don’t ask themselves: where can I find a text where pig and starve are on the same page, or how often are these words correlated?) What they do ask themselves is how having two masters would affect the pig. They also ask themselves other things, because sentences like this usually occur in actual contexts:
Why is this guy talking to me about pigs? We weren’t discussing pigs.
(What is the guy who said this trying to say? We were talking about my life situation and now he is talking about pigs. He must be making an analogy.)
Why would the pig starve? Well, who feeds the pig normally? Aha. Either of the masters might feed the pig. Well, what if each one thought the other was doing the feeding? Now, I get it. He is not talking about pigs at all. I have two bosses. He was telling me that neither may think they need to look out for me.
This is not rocket science. It is, in fact, everyday human thinking. But such thinking is way out of bounds for what AI can do today. Tomorrow maybe. But that tomorrow would require that the computer be able to have a conversation where one person’s goals were being discussed, where another person was giving advice that the other might follow, and where the advice was being said metaphorically using a well-known proverb. This is what thinking looks like. It is not what key word analysis looks like.
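To make the contrast concrete, here is a minimal sketch of what key word analysis amounts to. This is my own toy example, not the internals of Watson or any real system; the little corpus and the scoring are invented for illustration. It counts how often pairs of words share a sentence, which is roughly the “how often are these words correlated” question above.

```python
# Toy key word co-occurrence counter -- an illustration, not any real system.
from collections import Counter
from itertools import combinations

corpus = [
    "the farmer fed the pig every morning",
    "a pig with two masters will soon starve",
    "the horse drank at the river",
    "neither master remembered to feed the pig",
]

def cooccurrence_counts(sentences):
    """Count how often each unordered pair of words appears in the same sentence."""
    pair_counts = Counter()
    for sentence in sentences:
        words = set(sentence.split())
        for pair in combinations(sorted(words), 2):
            pair_counts[pair] += 1
    return pair_counts

counts = cooccurrence_counts(corpus)
print(counts[("pig", "starve")])   # 1 -- a correlation, not an understanding
print(counts[("masters", "pig")])  # 1 -- still nothing about feeding, bosses, or advice
```

The program can tell you that pig and starve show up together. It cannot ask who feeds the pig, notice that each master might assume the other one is doing the feeding, or realize that the sentence is really about your two bosses.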
To understand the advice given, one must have a goal, ask oneself questions about how what was said relates to that goal, and then figure out the answer. Could computers do that? I hope some day they can. Watson? Not so much. How about the above-mentioned conscious robot company? Give me a break. We barely know what consciousness is, although I am pretty sure it has something to do with the stuff I just put in italics above.
I will let my readers figure out the stitch in time proverb themselves. I also challenge the brilliant “AI” people at IBM to give Watson a shot at it. Please let me know how it did.
Let’s move on to the drinking horse. Why can’t you make him drink? Isn’t he thirsty? But of course this proverb isn’t about horses. It is typically about education. It means that you can teach people, but they don’t necessarily learn. Let your key word analyzer figure that out. How do I know that this is what that proverb is about? Because life is full of situations in which we try to help somebody and they refuse the help that is being offered. They don’t agree, or they don’t care (or they aren’t thirsty). You need to figure this out if you have never heard the proverb before. So unless our key word analyzer has a key proverb analyzer too, the key word analyzer would be baffled by this. And even if we did list the underlying meaning of every proverb in the English language, the program still wouldn’t understand it, because the proverb is about goals and plans and decisions we make, and about how to learn to think differently. This is exactly what we are not yet able to do in AI, much as I would like for us to be able to do that. The AI winter that started in 1984 killed all the work on that kind of AI. That is the consequence of making ridiculous claims about what AI can do.
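And just to be fair to the “key proverb analyzer” idea, here is what that fix looks like in miniature. Again, this is my own hypothetical sketch, with glosses I wrote for illustration: a lookup table from proverb to canned meaning. It retrieves the stored sentence, but it has no goals, no plans, and no situation to apply them to.

```python
# Toy "key proverb analyzer" -- a hypothetical lookup table, not anyone's product.
proverb_glosses = {
    "a pig with two masters will soon starve":
        "when responsibility is shared, each party may assume the other is handling it",
    "a stitch in time saves nine":
        "fixing a small problem early prevents a much bigger one later",
    "you can lead a horse to water but you can't make him drink":
        "you can offer people help or teaching, but you can't make them accept it",
}

def explain(proverb: str) -> str:
    """Return the canned gloss if the exact proverb is listed; otherwise admit defeat."""
    return proverb_glosses.get(proverb.lower(), "no entry -- baffled")

print(explain("You can lead a horse to water but you can't make him drink"))
# -> the canned gloss comes back, but nothing connects it to the friend who
#    won't take advice or the student who won't learn
print(explain("a pig with two bosses will soon go hungry"))
# -> "no entry -- baffled": reword the proverb and the lookup falls apart
```

Retrieval is not understanding: the table can recite what a proverb “means” in the abstract, but it cannot recognize the situation the proverb is being used to comment on, which is the whole point.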
I will end on a joke I like: You can lead a horse to water but a pencil must be led. Watson: why is that funny? Let me know when Watson or our conscious computer has figured out the answer to that.