
Sunday, May 15, 2016

should we fear AI or just fear the people who write about AI?

Ashok Goel, a professor at Georgia Tech, made the news this week with the revelation that one of the TAs he used in his AI course was actually an “AI.” Now, I have no reason to believe that Goel was trying to do something wrongheaded. I think he was just playing around. But the media love AI stories these days and have yet again led the public on a very wrongheaded journey about what AI is and what it can be.

Could a chatbot be an effective TA? It could certainly beat a rather uninterested TA. There are many real TAs who are not very good at, or very interested in, what they are being made to do. But that is not the important question. The right question is whether we could make a real AI program that could perform this function as well as a very good person does. The news media want us to believe we are on our way to that. I have a different point of view. It is not that we can’t do it. It is that we aren’t actually trying to do it yet. To explain this, I need to provide some history.

The first of what are now called chatbots was called ELIZA and was written by Joseph Weizenbaum. This is well known. What I will now tell you is not so well known. Weizenbaum was a friend of mine. Ken Colby was a friend of mine as well. I know some things about all this that others may not know.

Colby and Weizenbaum were friends, at least they started out as friends. Colby was a psychiatrist who became interested in whether computers could function as therapists. I was hired by Colby (in 1968) to help him at the point where he was building PARRY. PARRY was supposed to be a paranoid patient. Colby was interested in showing that he could fool therapists who would not know if they were talking with a real person or a computer. PARRY supposedly lived in a locked ward of a hospital and it was Colby’s idea to build an effective model of paranoia and prove it was effective by fooling therapists. In essence, he was trying to pass the Turing test with PARRY.

He and Weizenbaum talked about these issues for years and at some point had a falling out. Weizenbaum wanted to show up Colby by building a program that could fool people at MIT into thinking it was a Rogerian therapist. He chose Rogerian therapists, in part, to make fun of Colby, but also because Rogerians don’t usually say that much in response to what their patients tell them.

An example of ELIZA that I found on the web is:

How do you do. Please state your problem.
>> I spend too much time with computers.
Why do you mention computers ?
>> You are one.
Does it please you to believe I am one ?
>> Yes.
You seem to be quite positive.
>> I suppose I am.
How long have you been ?


Weizenbaum found that people wound up pouring their hearts out to ELIZA, and he was very upset by this. He wrote a book attacking AI, called Computer Power and Human Reason, to explain why he thought AI would never work. The irony is, of course, that Goel’s program did no more than what ELIZA did in the ’60s (possibly even less), but it is now worthy of articles in the Wall Street Journal and the Washington Post. Keyword analysis that enables responses previously written by people to be found and printed out is not AI. Weizenbaum didn’t really think he was building a Rogerian therapist (or doing AI). He was having some fun. Colby was trying to model a paranoid because he was interested in whether he could do it. He did not think he was building a real (AI) paranoid. And I assume Goel does not think he is building a real AI TA. But the press thinks that, and the general public will soon think that, if the press keeps publishing articles about things like this.

This technology is over 50 years old, folks. Google uses keywords, as does Facebook, as does every chatbot. There is nothing new going on. But we all laughed at ELIZA. Now this same stuff is being taken seriously.
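To make concrete just how little machinery is involved, here is a minimal sketch, in Python, of the kind of keyword matching ELIZA pioneered. The patterns and canned responses are my own illustrative inventions, not Weizenbaum’s actual script:

import random
import re

# Illustrative keyword rules in the spirit of ELIZA: a pattern to look for,
# plus canned responses written in advance by a person. "{0}" is filled in
# with whatever text followed the keyword.
RULES = [
    (r"\bi need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"\bcomputer", ["Do computers worry you?", "Why do you mention computers?"]),
    (r"\bmother\b|\bfather\b", ["Tell me more about your family."]),
    (r"\byes\b", ["You seem to be quite positive."]),
]
DEFAULT = ["Please go on.", "Can you elaborate on that?"]

def respond(utterance):
    # No parsing, no memory, no model of the speaker: just pattern lookup.
    text = utterance.lower()
    for pattern, responses in RULES:
        match = re.search(pattern, text)
        if match:
            reply = random.choice(responses)
            return reply.format(*match.groups()) if match.groups() else reply
    return random.choice(DEFAULT)

print(respond("I need a vacation"))   # e.g. "Why do you need a vacation?"
print(respond("You are a computer"))  # e.g. "Do computers worry you?"

A couple of dozen lines, and people poured their hearts out to it. That is the entire trick, then and now.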

What is the real problem? People do not behave in any way that is remotely like what these “AI’s” do. If you tell me about a personal problem you have, I do not respond by finding a sentence in my memory that matches something you said and then saying it without knowing what it means. I think about your problem. I think about whether I have any reasonable advice to give you. Or I ask you more questions in order to advise you better. None of this depends upon keywords and canned sentences. When I do speak, I create a sentence that is very likely a sentence I have never uttered before. I am having new ideas and expressing them to you. You say your views back to me, and a conversation begins. What we are doing is exchanging thoughts, hypotheses, and solutions. We are not doing keyword matching.

It may be that you can make a computer that seems paranoid. Colby had a theory of paranoia which revolved around “flare” concepts like mafia, or gambling, or horses. (See his book Artificial Paranoia.) He was trying to understand both psychiatry and paranoia using an AI modeling perspective.

The artificial TA is not an attempt to understand TAs, I assume. But let’s think about the idea that we might actually like to build an AI TA. What would we have to do in order to build one? We would first want to see what good teachers do when presented with the problems students are having. The Georgia Tech program apparently was focused on answering student questions about due dates or assignments. That probably is what TAs actually do, which makes the AI TA question a very uninteresting one. Of course a TA can be simulated if the TA’s job is basically robotic in the first place.

But, what about creating a real AI mentor? How would we build such a thing? We would first need to study what kinds of help students seek. Then we would have to understand how to conduct a conversation. This is not unlike the therapeutic conversation, where we try to find out what the student’s actual problem is. What is the student failing to understand? When we try to help the student, we would have to have a model of how effective our help was being. Does the student seem to understand something that he or she didn't get a minute ago? A real mentor would be thinking about a better way to express his advice. More simply? More technically? A real mentor would be trying to understand whether simply telling the student answers made the best sense or whether a more Socratic dialogue made better sense. And a real TA (who cared) would be able to conduct that Socratic dialogue and improve over time. Any good AI TA would not be trying to fake a Rogerian dialogue; it would be trying to figure out what the student was trying to learn, and thinking about better ways to explain things or to counsel the student.
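For contrast, here is a rough sketch of the loop such a mentor would have to run. Everything in it is hypothetical: each stub below stands in for an unsolved research problem, not an available library.

from dataclasses import dataclass, field

@dataclass
class StudentModel:
    # What the mentor believes about this one student (entirely hypothetical).
    goal: str = "unknown"              # what the student is trying to learn
    confusions: list = field(default_factory=list)
    last_help_worked: bool = False

def diagnose(utterance, model):
    # Real version: infer what the student actually fails to understand.
    return "whatever is behind: " + utterance

def update_model(model, utterance):
    # Real version: judge from the reply whether the last explanation landed.
    model.last_help_worked = "i see" in utterance.lower()

def socratic_question(problem):
    # Real version: choose a question that leads the student to the answer.
    return "What do you already know about " + problem + "?"

def explain(problem, simpler):
    # Real version: re-express the same advice more simply or more technically.
    return ("Put simply: " if simpler else "More technically: ") + problem

def mentor_turn(model, utterance):
    # One turn of the conversation: diagnose, update the model, pick a strategy.
    problem = diagnose(utterance, model)
    update_model(model, utterance)
    if not model.last_help_worked:
        return socratic_question(problem)  # lead the student instead of telling
    return explain(problem, simpler=True)

model = StudentModel(goal="recursion")
print(mentor_turn(model, "I don't get why the function calls itself"))

Nothing hard appears in the sketch, of course. The point is only that every stub in it requires understanding, and none of them can be filled in with keyword matching.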

Is this possible? Sure. We stopped working on this kind of thing because of the AI winter that followed from the exaggerated claims being made about what expert systems could do in 1984.

We are in danger of AI disappearing again because of overblown publicity about simplistic programs.

To put this all in better perspective, I want to examine a little of what Weizenbaum was writing in 1976:

He attacked me (but started off nicely anyhow):


Roger C. Schank, an exceptionally brilliant young representative of the modern school, bases his theory on the central idea that every natural language utterance is a manifestation, an encoding of an underlying conceptual structure. Understanding an utterance means encoding it into one’s own conceptual structure.

So far so good: he said nice things and represented me accurately. But then….

Schank does not believe that an individual’s entire base of conceptions can be explicitly extricated from him. He believes only that there exists such a belief structure within each of us and that if it could be explicated, it could in principle be represented by his formalism….

There are two questions that must ultimately be confronted. First, are the conceptual bases that underlie linguistic understanding entirely formalizable, even in principle, as Schank suggests and as most workers in AI believe? Second, are there ideas that, as I suggested, “no machines will ever understand because they relate to objectives that are inappropriate for machines?” …

It may be possible, following Schank’s procedures, to construct a conceptual structure that corresponds to the meaning of the sentence, “will you come to dinner with me this evening?” But it is hard to see — and I know this is not an impossibility argument — how Schank-like schemes could possibly understand that same sentence to mean a shy young man’s desperate longing for love.

I quoted parts of what Weizenbaum had to say because these were the kinds of questions people were thinking about in AI in 1976. Weizenbaum eventually became anti-AI, but I always liked his “dinner” question. It is very right-headed, and it is the least we can ask of any AI-based TA or mentor. Can we build a program that understands what the student is feeling and what the student’s real needs are, so that we can give good advice? Good teachers do that. Why should future online teaching be worse than what good teaching is like today, without computers or AI?

Do we actually have to do all this in order to build AI?

Could we simply build an automated TA/mentor that did not do all that but still performed well enough to be useful?

These are important questions. Maybe Goel’s program did perform well enough to consider using it in MOOCs, where there are thousands of students. I am not fundamentally interested in that question, however.

Here is what I am interested in. Can we stop causing people to so misunderstand AI that every ELIZA-like program makes headlines and causes people to believe that the problems we were discussing in the ’70s have been solved?

The fundamental AI problems have not been solved because the money to work on them dried up in the mid-’80s. There are businesses and venture capitalists today who think they are investing in AI, but really they are investing in something else. They are investing in superficial programs that are really ELIZA on steroids. Would it be too much to ask people to think about what people do when they engage in a conversation, and build computer programs that could function as an effective model of human behavior? I hope we can get people with money to start investing in the real AI problem again. Until we do, I will find myself on the side of Weizenbaum when he was being critical of his users’ reactions to ELIZA (for good reason). We should start working on real AI or stop saying that we are. There is nothing to be afraid of about AI, since hardly anyone is really working on it any more. Most “AI people” are just playing around with ELIZA again. It is sad, really.

Weizenbaum and Colby were brilliant men. They were both asking fundamental questions about the nature of mind and the nature of what we can and cannot replicate on a computer. These are important questions. But today, with IBM promoting something that is not much more than ELIZA, people are believing every word of it. We are in a situation where machine learning is not about learning at all, but about massive matching capabilities used to produce canned responses. The real questions are the same as ever. What does it mean to have a mind? How does intelligent behavior work? What is involved in constructing an answer to a question? What is involved in comprehending a sentence? How does human memory work? How can we produce a memory on a computer that changes what it thinks with every interaction and gets reminded of something it wants to think more about? How can we get a computer to do what I am doing now — thinking, wondering, remembering, and composing?


Those are AI questions. They are not questions about how we can fool people.

Monday, May 9, 2016

Boredom spurs creativity; are computers or mobile phone owners ever bored?

Boredom matters. We need it. But two sets of supposedly thinking entities are never bored: “smart” (deep learning) computers, and people who are attached to their phones (which is beginning to look like nearly everybody).

A friend’s teenage son (who was coming over for some advice) rang my doorbell the other day. In the time it took me to open the door, he was already looking at his phone. When I am on the elevator in my New York apartment building, I notice that literally everyone is looking at their phones during the ride. Sherry Turkle has pointed out that this behavior is killing conversation, and she is right. But it is also killing something even more important: creativity.

Creativity depends upon many things but a key one is boredom. When you are bored your mind wanders. You do this weird thing called “thinking.”

I have begun thinking more about AI in recent months because of the incessant nonsense being written about what computers can or might do. So, let me ask a simple question. Is Watson ever bored? Do these “deep learning” machines get bored? It seems obvious that they don’t. Why not? Because, in order to be bored, you have to have something you like doing (a goal you are pursuing, a problem you are interested in or wondering about) and be in some way prevented from pursuing it.

Wittgenstein said that all creative thinking took place in the “three B’s”: bed, bath, and bus. What he meant was that those were the only times there was no one else talking or distracting him, and with nothing much to do, his mind wandered and interesting thoughts occurred.

When could this possibly happen in the life of young people who cannot stop looking at their phones? What is there to be bored with or bored about? If you are bored with a Facebook post you just go to the next one. If you are bored with what’s on TV you change the channel. If you have nothing to do you surf the web. No one sits quietly and thinks any more.

I find this very scary for two reasons. First, our educational system is in such bad shape in part because we don’t allow boredom, which means we really do not encourage creativity. There are answers to be memorized, books to be read, and tests to be taken. We aren’t ever actually expected to have original thoughts in high school. (Unless a kid happens to have a really good teacher and more freedom than is typically allowed.)

Now, computers. The very idea that AI is progressing is patently absurd. What would it mean to have a smart computer that didn’t on occasion have an original idea about something? How could a computer be smart if it didn't worry about things from time to time? Americans are busy worrying about a Trump-Clinton election. We talk about it. We wonder about it. That worrying looks like thinking. What computer would worry about this? How could a computer possibly worry about this? Does Watson worry?

Now, of course, that is the real AI question, and the kind I used to work on when AI was funded by people who thought AI was something other than “deep learning.” I asked myself and my students how we could get a computer to have creative thoughts. One answer is that a computer would have to be trying to figure things out in some way: considering hypotheses about whatever it is trying to explain, imagining alternative explanations, and then trying to invent its own. This is what creativity looks like.
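Here is a toy version of that loop, with invented data, just to show its shape. The interesting part is the last line of the function, the part nobody knows how to build:

# Entertain candidate explanations, check each against the facts, and notice
# when none of them fits -- which is the cue to go invent a new one.
observation_fits = {
    "add 2 each time": lambda xs: all(b - a == 2 for a, b in zip(xs, xs[1:])),
    "double each time": lambda xs: all(b == 2 * a for a, b in zip(xs, xs[1:])),
}

def explain(xs):
    fits = [name for name, rule in observation_fits.items() if rule(xs)]
    if fits:
        return "best explanation so far: " + fits[0]
    # No stored hypothesis fits. A creative system would now be bothered,
    # wonder, and try to invent a rule of its own.
    return "no explanation fits; time to wonder"

print(explain([2, 4, 8, 16]))  # best explanation so far: double each time
print(explain([2, 3, 5, 7]))   # no explanation fits; time to wonder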

Could a computer do that? Of course it could in principle, but it wouldn’t be the so-called AI machines we have now, which are very good at counting and matching and searching. That kind of AI depends on being annoyed by a state of affairs, thinking you should be able to come up with some better answer, and then putting yourself in the equivalent of a bathtub or a bed or any place where it is quiet and there are no distractions so you can let your mind wander.

Computers will not become creative (or bored) any time soon unless those who fund AI change their perspective.


What really bothers me is that people won’t be creative either. Young people’s first thought these days is to post a picture of themselves or of what they are looking at, rather than to think about the world around them.

Tuesday, May 3, 2016

AI is nowhere near working; let's think about what people can do that AI can't

When I started working in AI in the 1960s, it wasn’t really one field, just a set of people trying to get computers to do some interesting things that we knew people were capable of doing. These days, unfortunately, AI seems to mean “deep learning,” whatever that is, and stuff IBM talks about that uses the word “cognitive.”

I have recently been thinking about some of the aspects of AI that I did not work on. (I worked on natural language processing, memory, and learning.) I think there are things worth discussing about the other areas of AI that might shed light on what is really going on in today’s so-called AI.

Let’s start with Face Recognition. It is clear that face recognition technology is pretty good. Facebook can tell when your picture has been posted by someone and can add your name to it. I am sure there are all kinds of surveillance technologies that make use of face recognition as well.

But, there is an aspect of face recognition that people naturally do, but computers cannot come close to doing today. I don’t mean to be political here, but my best example is recognizing Ted Cruz’s face. I can recognize him, but every time I do, I see a man who is angry, mean, and just a little Satanic looking. Commentators say these things all the time, and I am not trying to comment on that; rather, I am trying to ask the question: what is it that we see when we look at someone and immediately distrust them and are slightly afraid of them?

To put this another way, when you are walking down the street and someone scary walks by, what is it that you see in his face that makes him seem scary? I have been running some experiments on how people react to talking-head videos that we’ve captured of experts telling stories about their expertise. Every time someone looks at one of our videos, they have an instant reaction to the human qualities of the person as well as to the story the person is telling. They like or dislike people in about ten seconds. What is it that they are seeing?

This is an interesting topic, but my point is about AI. AI is nowhere near ready, no matter how well it does at face recognition, to tell us “I find this guy scary” or “untrustworthy” or “he seems to be lying,” even though people can do this all the time without conscious thought.

What does this tell us? It tells us that AI has a long way to go before it can do stuff that nearly any human can easily do. Actually, any dog can do this. Dogs too have instant reactions to a person. What are they seeing? This is the AI question. My guess is that Facebook, even with its hundreds of AI people, is not working on this problem, and moreover it doesn’t care. But it is a very important aspect of cognition. (Sorry, IBM, you don’t actually own that word.) Facebook is only working on the “you can count the pixels and pattern match” part of face recognition. When we feel attracted to someone, or want to avoid them, we are using our innate ability to do a more subtle kind of face recognition.

I have this same problem with speech synthesis and speech recognition. I was riding in my wife’s car the other day, and the navigation system she was using told her to turn on “puggah” Boulevard. We were in an area we know, and we both laughed out loud. The street is called PGA Blvd; PGA is the acronym for the Professional Golfers’ Association. The program never heard about acronyms, I guess. Later it told us to get on the ramp for W Palm Beach. Now, a reader would think I am abbreviating west with the W, but I am not. The navigation system actually said “W.” My reaction was that this device is really stupid. How hard would it be to make a navigator that was intelligent about speech synthesis? Well, apparently too hard for the company that made this one. (It would also be too hard for it to tell me about a new restaurant I was passing and might want to check out, which is the kind of AI I am interested in.) That is AI too, but it is not “deep learning,” so no one is funding it.
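For the record, the synthesis half of this is not even hard. Here is a minimal sketch, with made-up lexicon entries of my own, of the normalization step the navigation system skipped:

# Expand known abbreviations into speakable forms before handing the
# street name to the speech synthesizer. The entries are illustrative.
ABBREVIATIONS = {
    "PGA": "P G A",       # spell initialisms out letter by letter
    "Blvd": "Boulevard",
    "W": "West",          # a real system would need context; "W" is ambiguous
}

def normalize_for_speech(street_name):
    # Replace each abbreviated word with its speakable form.
    return " ".join(ABBREVIATIONS.get(word, word) for word in street_name.split())

print(normalize_for_speech("PGA Blvd"))      # "P G A Boulevard"
print(normalize_for_speech("W Palm Beach"))  # "West Palm Beach"

A lookup table is not intelligence either, but at least it would not have said “puggah.”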

Which leads me to what I really wanted to talk about here: speech recognition. Someone said to me the other day that AI has made real strides in speech recognition. I laughed. Now, I realize people talk to Siri and other devices. And sometimes Siri “knows” what you are saying, in the sense that it can find a response. As a way of pointing out the real AI problem to my friend, the next thing I said was: “szeretlek nudunuca.” To which he responded, “huh?” I said it again. He said he didn’t understand. I asked if he could tell me one word that I had said. He said no. I said, “can you even report a part of what I said?” He said no. It all sounded like gibberish to him. Of course it did. I was speaking Hungarian. When someone speaks an unfamiliar language, you cannot hear where the word breaks are, and you cannot even decipher the sounds. This is because human speech recognition involves having heard everything before and understanding the context in which the spoken words belong.

It is difficult for people to understand a sentence that is out of context. What is a normal response to a completely unexpected sentence? People generally have to be listening for something in order to understand it. Understanding involves guessing about what someone is likely to say. Those guesses are made on the basis of our knowledge of each other and of the possible things we are, or might be, talking about. To do that right in AI, we need to determine intentions and motivations, and we need to have a model of the person we are talking to, including what they know and what their interests are.
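A toy illustration of the point, with invented data. To a sound matcher, “recognize speech” and “wreck a nice beach” are famously close to identical; only expectations about what the conversation is about can pick between them:

# Two transcriptions the acoustics alone cannot separate (a classic pair).
candidates = ["recognize speech", "wreck a nice beach"]

# A crude stand-in for the listener's model of what the talk is about.
expectations = {
    "ai research": {"recognize", "speech", "learning", "model"},
    "day at the shore": {"wreck", "nice", "beach", "sand"},
}

def best_transcription(candidates, topic):
    # Score each candidate by overlap with the words we expect to hear.
    expected = expectations[topic]
    return max(candidates, key=lambda c: len(set(c.split()) & expected))

print(best_transcription(candidates, "ai research"))       # recognize speech
print(best_transcription(candidates, "day at the shore"))  # wreck a nice beach

Real understanding requires far more than word overlap, obviously. The sketch only shows where the deciding information comes from: the listener, not the sound.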

The other day, someone I play softball with (who often asks me questions) asked me, “what is Zion?” I had to ask him to explain what he was actually asking about. I heard the words but had no idea what he was trying to find out. After a bunch of sentences from him, I got what the question was about. I didn't have any trouble with the words, but I had absolutely no idea what he wanted to learn from me. Siri and the others would not be able to have that conversation with him, because there is no AI there. Apple, Google, and the rest don’t care about that. It is the pretense of AI that seems to interest them.


We are very far from a computer that can do the things I’ve just discussed. AI will be hyped as much as IBM’s marketers and others choose to hype it, in an effort to make money. As for me, I would prefer that they actually worked on AI instead of trying to convince everyone that AI is already here.

Thursday, April 28, 2016

Could IBM stop lying about Watson already? I guess not

IBM needs to stop lying. It is getting hard to take. Today alone I saw two outrageous lies about how Watson will save us all.

Here is the first article:


Its headline is:

Big Data: Will We Soon No Longer Need Data Scientists?

As you can guess, the answer is we won’t because Watson.

IBM, for example, believes that it can offer a solution to the skills shortage in big data by cutting out the data scientists entirely and replacing (or supplementing) them with its Watson natural language analytics platform.


I want to keep this simple, so I will say what I was doing today. I didn’t sleep well last night because of a phenomenon called alcohol rebound. I only had two drinks, but I had them two hours before bedtime, and this caused a rebound at 2 am which kept me up for hours. This has only started to happen to me in the last year or two, so I Googled “alcohol rebound in old people” and found a long list of articles, none of which were any help. I could ask my doctor, but I am guessing he hasn’t memorized the literature and doesn’t know the data. But Watson can do it, right? Watson wouldn't even understand my question, much less my needs, and it would not be able to extrapolate from data that might or might not be there. To put this another way, Google can’t answer most of the questions I pose to it, and Watson is no better. Natural language processing is not very good yet, no matter what all the “AI” deep learning people say. Intelligent people are always better to talk to than any AI system we currently envision.

These days we have a large life insurance company as a client for one of our data analytics courses. So I imagined a question they might ask Watson: “What is the worst policy we could write?” They might well ask that. Would Watson even know what “worst” meant in this context? Would it understand all the parameters relevant to determining an answer? I assume this company’s data scientists could answer it, while it is safe to assume that Watson wouldn’t even know what the question meant. But that doesn’t stop IBM from advertising more nonsense about Watson. I had had enough.

And then I saw this:


The headline is

“C” is for cognitive learning
IBM and Sesame Street collaborate to create the next generation of tailored learning tools. This new technology venture combines Sesame Street’s expertise in education and storytelling with IBM Watson technologies.



And what piece of brilliance will Watson bring to education? Apparently they are just hopping on the personalized learning bandwagon, which means we will teach the stuff we are making you learn by tutoring you to get better test scores after we see which answers you got wrong. So, Watson will change learning. Or maybe not so much. Watson will help kids who can’t read well by seeing what words they have trouble with and helping the kids practice. I have news for IBM. People can already do that. Good parents and teachers always do that. Is IBM’s view of education that all kids will have everything they do analyzed and then shoved at them again in another form because Watson is good at analyzing data?

The problem in education is simple enough, folks. It is boring. It is irrelevant to the interests and needs of most kids. They don’t need to learn classical Greek or ancient history. They should be encouraged to learn what they want to learn. Could we do something radical and ask kids what they want to learn how to do, and then help them learn to do it? We could, but if we then submitted their answers to Watson, it wouldn’t understand a thing the kids said. (What would it do with “I want to be a fireman”?)


(As an aside, people who read me regularly know that I am a terrible typist. Apple’s Pages does automatic spell correction and is very bad at it. But today it corrected my misspelling of Watson to Satan on two different occasions. Maybe Apple’s AI is smarter than Watson’s.)

Thursday, April 21, 2016

Thank You Indiana for reminding me why the government has no idea what it is doing in education: Knowledge of AI now a requirement for Indiana 8th graders

I was a professor of Computer Science for 35 years. But I didn’t learn enough about the subject, apparently. I would not be able to pass the new Indiana State standards in computer science for eighth grade.

Here they are: 

[Screenshot: Indiana’s eighth-grade computer science standards, quoted below]


I will now attempt to deal with these questions (which I assume will be in the form of a multiple choice test that signifies nothing other than memorization). I will assume, for now, that Indiana really wants answers, so here I go:

6-8 CD1: (demonstrate an understanding of the relationship between hardware and software)

Hardware is the box (phone, iPad, Macbook). Software is the stuff you type and the things you click on. 

Is that the answer, Indiana? If it is, then all you are doing is teaching the names of things kids already know about. If it isn’t, and you want some more complex answer, you will be out of luck, and you are engaging in a pointless exercise.

6-8 CD2: (identify routine hardware and software problems that occur daily)

Sorry, but I don’t know what this question is asking. Are they trying to teach that sometimes you need to reboot your machine? Otherwise I have no answer.

6-8 CD3: (describe major components of computer systems and networks)

Sorry, Indiana, I can’t answer that question. Why not? Because I have no idea what it is about. Is “router” one of the answers? How about “printer”? I haven’t a clue. But I am sure, Indiana, that you can make kids memorize a list of terms and then announce great results about Indiana kids and computer science.

6-8 CD4: (how is machine intelligence different than human intelligence)

This is, of course, my favorite question. AI has been my field since the mid-’60s. (For all I know, I might be one of the five oldest people in AI at this point.) And, Indiana, I cannot answer it. Why not?

Describe what distinguishes humans from machines:

A machine is what I am using to type this. I cannot type on people. I used a machine to make toast this morning. No human I know can make toast. I drove from the airport to my home yesterday. I used this machine called a car. Even if it were an AI car, it would not confuse me. I know it isn’t human.

The difference between how machines and humans communicate:

Humans talk to each other. Sometimes they type to each other. Some computers say stuff to you, such as “can’t find file” or “a new update is available.” But they don’t fool me. The machine is not actually saying this. It is displaying something a human wrote when the software (or was it the hardware?) I am using was made. The machine is not talking to me, even if it used a human voice to do it. I am not delusional. Apparently Indiana is.

Siri, chatbots, Watson, and every other so-called AI are doing the same thing: giving voice to something a human wrote or, in extreme cases, giving voice to something some software found, and making believe that it is talking to you and giving you an answer. This is not machine intelligence. It is a game that various companies are playing to make you think these machines are intelligent. Is that the right answer, Indiana?

Describe how computers use models of intelligent behavior:

At least this question isn’t asinine. I have been working on it for more than 50 years. It is an important question. I am willing to believe that there is not a single person in the entire state of Indiana who knows the answer. Wait. I remembered that one of my students is a professor at Indiana University. He knows what the answer is: “we haven’t really figured it out yet.” Guess they didn’t ask his advice.

Good job, Indiana. You have made school even stupider than it already is.







Sunday, April 17, 2016

Former slaves studying Latin and Greek; nothing has changed

I am in the middle of reading a book called “The Black Calhouns,” written by Gail Buckley. It is a story of one African American family starting in the times of slavery and going to the present. I was not reading this book because of my interest in education, but, as often happens to me, I became infuriated by something I read that related to education.

The book says that in October 1870, the Georgia State Legislature provided money to educate “Negroes” at schools set up for this purpose, but there was “widespread belief that this would not work.” So, they held examinations, “overseen by a board from the old slaveholding class.” A previous Georgia governor said: “I know these Negroes. Some of these pupils were my slaves. I know that they can acquire the rudiments of an education, but they cannot go beyond. They are an inferior race, and for that reason, we had a right to hold them as slaves, and I mean to attend these examinations to prove that we are right.”

After the examinations, the Atlanta Constitution wrote: “we are not prepared to believe what we witnessed: To see colored boys and girls fourteen and eighteen years of age, reading Greek and Latin, and demonstrating correctly problems in Algebra and Geometry, and seemingly understanding what they demonstrated appears almost wonderful.”

I was taken aback by this since I wasn’t really thinking about the idea that what upsets me most about education has been going on that long. They were teaching the newly freed slaves to read Latin and Greek and to do Algebra. Why?

If you asked me to design a curriculum for these children it would have had two main principles. First, it would have offered choices. I have never understood why every child must learn the same stuff. Second, the choices would relate to the real possibilities of the future lives these children faced. Were these kids going to become scholars in the Classics? Were they going to ever use algebra for any reason?  I would have taught them how to open a business, how to run their own farm, how to fight for their political and economic rights, how to think critically about life decisions they might actually have to make, how to become articulate, how to get along.

I hadn’t realized that today’s silliness was going on in those days as well. Today, for example, in New York City, there is a charter school that seems to be everywhere, with lots of funding, called Success Academy. When you look at their website, the faces of the kids they show are almost all non-white. The curriculum they offer might as well be the one offered in 1870 to the former slave children. It is the same nonsense.

What was going on then, and what is going on now, is the attempt to prove that these kids can go to Harvard and become scholars and Supreme Court Justices. I am sure that some of them can. But how many? One percent of them? Not that many even.

We have held the collective insane belief, and now I realize that this belief has been around a long time, that the way we help poor children to live better lives is to treat them as if they were very wealthy children who may not actually ever have to work and for whom the world is wide open.

Poor children should be treated the same as rich children. Sounds good. Sounds democratic. A lovely ideal. Because we want to believe this, we have closed up vocational schools and made education all about preparing for college.

Let me remind the people who do this that going to college is just as likely to leave a student in massive debt, and with no ability to get work, because he was convinced to become a literature major.

Even in 1870 we were preparing children to be scholars. Why were they learning Latin and Greek? The answer was that all the “important books” were written in Latin and Greek, but that was never the real answer. Even in 1870 there were books written in English. And although we don’t make every child learn Latin and Greek any more, we do still make every child learn algebra. (And, I might add, my daughter was made to learn Latin, so this still goes on.)

The time has come to get over this nonsense. We can offer hundreds of choices and let kids decide how they want to proceed. The argument against this has always been “but if we don’t expose them to Chemistry, how will they know if they like it?” How many chemists are there? Must we expose every kid to every scholarly field? All it does is create trouble. I was “exposed” to mathematics for sixteen years in school. I liked it. But it was a complete waste of time. When I learned what mathematicians actually did all day, I realized that this profession made no sense for me. But I was never taught that and so I kept studying it because I liked it.

It is time to let kids know what job options exist for them and help them make good choices, while also teaching them to think hard, make life decisions, speak and write effectively, and generally learn how the world around them works.

I have no information on this, but I am pretty sure that the former slave kids did not go on to be scholars. Neither will any more than 1% of the graduates of Success Academy. There really aren’t that many jobs for scholars.

It is time to become realistic about what we teach in school. We can offer a scholar track too, but people need to know what scholars do all day and how many jobs there are for scholars. We simply have to stop being stupid about education.



Wednesday, April 13, 2016

A history test from AP; test your ability to stay awake



Below is an article from Sunday’s New York Times Education Supplement. I am simply posting it here. My point is simple enough. Why do we have tests like this? Whose interests do they serve? Who remembers what is “taught” by them? And how do they possibly relate to how a student will do in college? (Actually, that last one I can answer: college is full of tests like this as well, at least bad college courses are.) No wonder students are bored to death in school and can't remember what they "learned."

U.S. History, Revised
Roundly drubbed as left-wing anti-Americanism, the framework for the Advanced Placement course in United States history was recast for 2015-16. Here are some of the practice questions that were revised to address issues.  


[Image: quotations from Harry Truman and Ronald Reagan]
Refer to these quotes when answering questions 1 to 3.
1. The statements of both Truman and Reagan share the same goal of ...
- restraining communist military power and ideological influence.
- creating alliances with recently decolonized nations.
- re-establishing the principle of isolationism.
- avoiding a military confrontation with the Soviet Union.

2. Truman issued the doctrine primarily to ...
- support decolonization in Asia and Africa.
- support U.S. allies in Latin America.
- protect U.S. interests in the Middle East.
- bolster non-communist nations, particularly in Europe.

3. Reagan’s speech best reflects which of the following developments in U.S. foreign policy?
- Caution resulting from earlier setbacks in international affairs.
- Assertions of U.S. opposition to communism.
- The expansion of peacekeeping efforts.
- The pursuit of free trade worldwide.
[Image: wartime poster]
Adolph Treidler/Collection of Library of Congress
Refer to this image when answering questions 4 to 6.
4. The poster was intended to ...
- persuade women to enlist in the military.
- promote the ideals of republican motherhood.
- advocate for the elimination of sex discrimination in employment.
- convince women that they had an essential role in the war effort.

5. The poster most directly reflects the ...
- wartime mobilization of U.S. society.
- emergence of the United States as a leading world power.
- expanded access to consumer goods during wartime.
- wartime repression of civil liberties.

6. Which of the following represents a later example of the change highlighted in the poster?
- Feminist challenges to sexual norms in the 1970s.
- The growing protests against U.S. military engagements abroad in the 1970s.
- The increasing inability of the manufacturing sector to create jobs for women in the 1970s and 1980s.
- The growing popular consensus about appropriate women’s roles in the 1980s and 1990s.
[Image: photograph of urban tenement conditions]
Jacob A. Riis/Bettmann, via Corbis
Refer to this image when answering questions 7 to 9.
7. Conditions like those shown in the image contributed most directly to which of the following?
- The passage of laws restricting immigration to the United States.
- An increase in Progressive reform activity.
- A decline in efforts to Americanize immigrants.
- The weakening of labor unions such as the American Federation of Labor.

8. The conditions shown in the image depict which of the following trends in the late 19th century?
- The growing gap between rich and poor.
- The rise of the settlement house and Populist movements.
- Increased corruption in urban politics.
- The migration of African-Americans to the North in the late 19th century.

9. Advocates for individuals such as those shown in the image would have most likely agreed with which of the following perspectives?
- The Supreme Court’s decision in Plessy v. Ferguson was justified.
- Capitalism, free of government regulation, would improve social conditions.
- Both wealth and poverty are the products of natural selection.
- Government should act to eliminate the worst abuses of industrial society.
[Image: quotation from John Muir]
Refer to this quote when answering questions 10 to 12.
10. Which of the following aspects of Muir’s description expresses a major change in Americans’ views of the natural environment?
- The idea that wilderness areas are worthy subjects for artistic works.
- The idea that wilderness areas serve as evidence of divine creation.
- The idea that government should preserve wilderness areas in a natural state.
- The idea that mountainous scenery is more picturesque and beautiful than flat terrain.

11. Muir’s ideas are most directly a reaction to the ...
- increasing usage and exploitation of western landscapes.
- increase in urban populations, including immigrant workers attracted by a growing industrial economy.
- westward migration of groups seeking religious refuge.
- opening of a new frontier in recently annexed territory.

12. Muir’s position regarding wilderness was most strongly supported by which of the following?
- Members of the Populist movement.
- Urban political bosses.
- American Indians living on reservations.
- Preservationists concerned about overuse of natural resources.