Thursday, May 23, 2013

From: Artificial Intelligence Meets Natural Stupidity

A short time before he died, someone asked Joseph Weizenbaum what the most important paper in his field was. He thought a while and said it was Drew McDermott's "Artificial Intelligence Meets Natural Stupidity." While I'm sure there are some anachronisms in this 1976 paper, it has a lot that seems to have only become more true in the intervening decades.

The rather simple and simply traced self-deception came from the artificial intelligence specialists' use of natural-language terms. Those denatured terms were shortcuts instead of more accurate denotations of what they were actually doing. That has, I'd guess, become an inbred habit of belief as new AI guys are generated by their elders. And it's at least as bad among their true believers in the lay public today, as my current opponent in a brawl over mind-reading machines shows. They don't even understand similar problems of thinking when they're laid out for them. They seem unable to understand that we don't know what it means to experience, never mind the mind that is both experienced and what experiences. And that seems to come, at least in part, from their faith that the words they use to talk about these things have real and known meanings, defined according to the very limited way in which they are used in that narrow context. Though they certainly couldn't say what those meanings are in the real world. I certainly couldn't; no one seems to be able to.

So we can fool ourselves into believing that what the computer does, and our understanding of our own creations, can stand in for actual understanding of some of the hardest mysteries of our very being.

I left out a passage of more worthwhile examples, partly because I ran out of time, partly because it's kind of archaic (note the former meaning of GPS) and, finally, because, since I'm giving a link I found THIS MORNING, AFTER I TYPED IT OUT FROM A DIM PHOTOCOPY OF A YELLOWING PAPER COPY LAST NIGHT, you can read it yourself. I had typed out the passage about behaviorism before then. That habit has certainly been retained in the succeeding sects of behavior-sci, with their even more obvious mixing of self-deceptive labeling and entirely imaginary coding, giving the false impression that what they're doing is even closer to what is called TRUTH, when it's the same mere naming convention, not an actual identification of an observed natural entity.

I hope you like McDermott's dry wit.

As a field, artificial intelligence has always been on the border of respectability, and therefore on the border of crackpottery. Many critics have urged that we are over the border. We have been very defensive toward this charge, drawing ourselves up with dignity when it is made and folding the cloak of Science about us. On the other hand, in private, we have been justifiably proud of our willingness to explore weird ideas, because pursuing them is the only way to make progress.

Unfortunately, the necessity for speculation has combined with the culture of the hacker in computer science to cripple our self-discipline. In a young field, self-discipline is not necessarily a virtue, but we are not getting any younger. In the past few years, our tolerance of sloppy thinking has led us to repeat many mistakes over and over. If we are to retain any credibility, this should stop.

This paper is an effort to ridicule some of these mistakes. Almost everyone I know should find himself the target at some point or other; if you don't, you are encouraged to write up your own favorite fault. The three described here I suffer from myself. I hope self-ridicule will be a complete catharsis, but I doubt it. Bad tendencies can be very deep-rooted. Remember, though, if we can't criticize ourselves, someone else will save us the trouble.

Acknowledgement -- I thank the AI Lab Playroom crowd for constructive play.

Wishful Mnemonics

A major source of simple-mindedness in AI programs is the use of mnemonics like "UNDERSTAND" or "GOAL" to refer to programs and data structures. This practice has been inherited from more traditional programming applications, in which it is liberating and enlightening to be able to refer to program structures by their purposes. Indeed, part of the thrust of the structured programming movement is to program entirely in terms of purposes at one level before implementing them by the most convenient of (presumably many) alternative lower-level constructs.

However, in AI, our programs to a great degree are problems rather than solutions. If a researcher tries to write an "understanding" program, it isn't because he has thought of a better way of implementing this well-understood task, but because he thinks he can come closer to writing the first implementation. If he calls the main loop of the program "UNDERSTANDING" he is (until proven innocent) merely begging the question. He may mislead a lot of people, most prominently himself, and enrage a lot of others.

What he should do instead is refer to this main loop as "GOO34", and see if he can convince himself or anyone else that GOO34 implements some part of understanding. Or he could give it a name that reveals its intrinsic properties, like NODE-NET-INTERSECTION-FINDER, it being the substance of his theory that finding intersections in networks of nodes constitutes understanding. If Quillian <1969> had called his program the "Teachable Language Node Net Intersection Finder", he would have saved us some reading (except for those of us fanatic about finding the part on teachability).

Many instructive examples of wishful mnemonics by AI researchers come to mind once you see the point.  Remember GPS?  By now, “GPS” is a colorless term denoting a particularly stupid program to solve puzzles.  But it originally meant “General Problem Solver”, which caused everybody a lot of needless excitement and distraction.  It should have been called LFGNS – “Local-Feature-Guided Network Searcher”. 

Compare the mnemonics in Planner with those in Conniver:

Planner         Conniver
GOAL            FETCH & TRY-NEXT
CONSEQUENT      IF-NEEDED
ANTECEDENT      IF-ADDED
THEOREM         METHOD
ASSERT          ADD

It is so much harder to write programs using the terms on the right!  When you say (GOAL...), you can just feel the enormous power at your fingertips.  It is, of course, an illusion.

Of course, Conniver has some glaring wishful primitives, too.  Calling “multiple data bases”  CONTEXTS was dumb.  It implies that, say, sentence understanding in context is really easy in this system.... 

… Lest this all seem merely amusing, meditate on the fate of those who have tampered with words before. The behaviorists ruined words like "behavior", "response", and, especially, "learning". They now play happily in a dream world, internally consistent but lost to science. And think on this: If "mechanical translation" had been called "word-by-word text manipulation", the people doing it might still be getting government money.

I don't know if that cessation of funding lasted but, somehow, I doubt it.
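To feel how deflating an honest name can be, here is a little sketch of the kind of program McDermott is talking about. It's my own illustration in present-day Python, not anything McDermott or Quillian wrote; the toy network, the node names, and the function names are all invented for the example. All it does is flood outward from two starting nodes in a net and report the first node reachable from both, which, under an honest name, doesn't sound much like understanding.

    from collections import deque

    def find_node_net_intersection(net, start_a, start_b):
        """Flood outward from two nodes of a semantic network (a dict
        mapping each node to the nodes it links to) and return the
        first node reachable from both sides, or None if they never
        meet."""
        side = {start_a: "a", start_b: "b"}   # which start reached each node
        frontier = deque([start_a, start_b])
        while frontier:
            node = frontier.popleft()
            for neighbor in net.get(node, ()):
                if neighbor not in side:
                    side[neighbor] = side[node]
                    frontier.append(neighbor)
                elif side[neighbor] != side[node]:
                    return neighbor           # the two searches have met
        return None

    # The wishful mnemonic: the very same code under a much grander name.
    understand = find_node_net_intersection

    # A toy net, loosely in the spirit of Quillian's examples.
    net = {
        "canary": ["bird", "yellow"],
        "fire engine": ["vehicle", "red"],
        "yellow": ["color"],
        "red": ["color"],
    }

    print(find_node_net_intersection(net, "canary", "fire engine"))  # color
    print(understand(net, "canary", "fire engine"))                  # same answer

The alias at the end changes nothing about what the program does, which is the whole point: the sense of enormous power at your fingertips is entirely in the name.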

Note: Earlier this morning I posted a draft instead of the near-final copy. I hope this version clears up the meaning of what I wrote. It's been a hectic day. Why the lists that line up perfectly in the word processor show up unaligned on screen, I can't figure out.

2 comments:

  1. As usual, I'm not reading to the end carefully, just pausing as I plunge along to make a note in the virtual margins:

    They seem unable to understand that we don't know what it means to experience, never mind the mind that is both experienced and what experiences. And that seems to come, at least in part, from their faith that the words they use to talk about these things have real and known meanings, defined according to the very limited way in which they are used in that narrow context. Though they certainly couldn't say what those meanings are in the real world. I certainly couldn't; no one seems to be able to.

    The word for this is "Magic." Think about it. In "Harry Potter," magic is simply an exertion of will over objects, an exertion that is always perfect (when done right) and always effective. None of that "many a slip/cup/lip" stuff. None of those "deals with the devil" where the details always reveal an outcome you didn't want but didn't specify against.

    Engineering deals with this problem by basically saying "This is what you can do, and this is how you can do it, and will is not even a part of the equation. It's all about design and function." Engineering, of course, has its limits, and it never sees a ghost in the machine, be that machine the human form, or be that machine a computer that "thinks."

    "Mind" is a concept, like "beauty" or even "excellence." It means many things and no one thing. To even imagine "Mind" is something material that can, through sufficient application of will, be replicated or even transferred from housing to housing, is to engage in what is fundamentally magical thinking.

  2. When it comes to thinking about our mind, or, worse, "the mind," and we try to be too concrete or scientific, it's like we're people in an oil painting trying to see the stretchers beneath the fabric of the canvas. Or maybe we're like people trying to guess what those painted people would be thinking.

    I like your identification of this with magical thinking because it really is true. I'd be interested in seeing you develop the idea.

    The recent story that takes quantum entanglement not only outside the realm of locality but also outside of temporal limits is pretty shocking, especially as they say there is solid experimental evidence for both ideas. Having been brought up to have the highest regard for theoretical science - the typical snobbery that experimental science was, somehow, vulgar - I've become a lot more skeptical of it.
