Some of you might know about the Lighthill report from 1973 which was deeply critical of progress in AI. This report was the main factor behind cutting the funding of AI research in the UK, and seems to have contributed to the more global cuts around this time known as the “AI winter”. Via Yee Whye Teh I recently came across a BBC debate between James Lighthill and three supporters of AI research: Richard Gregory, John McCarthy and Donald Michie. You can download the televised debate from here, though be warned that it’s 160MB.
Now, 36 years later, it’s interesting to think about how the speakers’ various views and predictions have played out. Overall, Lighthill’s analysis felt the most coherent to me, and I’d say that what has happened since largely backs him up, though it can be argued that he helped to cause this outcome. I agree that he slowed AI down a lot, but 36 years is a rather long time, and on the types of problems he was focusing on there hasn’t been much progress. In response, the other debaters mostly just pointed to small advances that had occurred and indicated that they felt more advances were on the way. Lighthill then denied that these advances showed any real progress towards intelligence.
This feels a lot like today: sceptics say that AI has made no progress, optimists point to lots of advances, and sceptics then say that these advances are not what they consider to be real intelligence. I think this points to perhaps the most fundamental problem in the field: if you can’t define intelligence, how do you judge whether progress is being made? It’s as true today as it was then, and it’s why I think that trying to define intelligence is so important. I like the fact that they keep on saying that an intelligent machine should be able to perform well in a “wide range of situations”, because, of course, this is very much the view of intelligence that I have taken.
No transcript, I assume?
Not that I know of.
I just watched the video. Interesting, especially as one of the people in the video (Rod Burstall) was a colleague from my time at Edinburgh last year. I feel connected to the history 😀
In general, it seems that Lighthill ended up running a “no true Scotsman” argument: AI never produces any results, and when presented with a concrete result, he says “ah, well, that’s advanced automation, not AI”.
McCarthy was clearly the superior intellect in that conversation. Donald Michie came across as bright, but not quite at McCarthy’s level. This is interesting, because I spent a lot of last year reading vintage McCarthy papers, and noticing that the combined insights of McCarthy and Judea Pearl seemed to define a standard of deep understanding that one rarely sees.
Also interesting, as one looks back upon this debate, is how 3 bright intellects and 1 super-bright intellect wasted their time engaging in sophistry, arguing over definitions, etc., instead of actually using their intellects together to make progress on the problem. They could have concentrated on the implicit technical question that was left hanging for the entire debate: if the human brain does it, but our programs can’t, then what are we missing? Between them, they might even have been able to make some progress in going beyond McCarthy’s logic-based approach.
I think they already knew what the problem was/is: it’s the combinatorial explosion that Lighthill kept going on about. Somehow the brain can deal with vast input and output spaces that are richly structured. When algorithms have to make decisions in such spaces, they choke. In my view the brain deals with this by using huge amounts of computation, long learning times and a clever algorithm that builds deep hierarchical networks that reduce these large spaces down to manageable proportions.
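To make that last idea a bit more concrete, here is a minimal toy sketch in Python (my illustration only; the layer sizes and random projections are arbitrary assumptions, and a real learning algorithm would of course learn its hierarchy over a long time) of how a stack of layers can squeeze a vast input space down to a small, manageable code:

```python
import numpy as np

# Toy sketch only: a fixed stack of random linear maps plus a nonlinearity,
# squeezing a large input space down layer by layer. Nothing here is learned;
# it just shows the "reduce the space hierarchically" idea in a few lines.
rng = np.random.default_rng(0)
layer_sizes = [10_000, 500, 50, 5]  # vast input -> ... -> compact code

weights = [
    rng.standard_normal((n_out, n_in)) / np.sqrt(n_in)
    for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])
]

def encode(x):
    """Map a 10,000-dimensional input down to a 5-dimensional code."""
    for W in weights:
        x = np.tanh(W @ x)  # each layer keeps coarse structure, discards detail
    return x

x = rng.standard_normal(layer_sizes[0])  # a richly structured, high-dimensional input
print(encode(x).shape)                   # (5,) -- a space small enough to decide in
```

Obviously the interesting part is how the brain finds good versions of those maps, rather than random ones, but the shape of the computation is the point here.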
Right, but McCarthy et al were stuck in the land of logic-based AI; they didn’t know about machine learning.
Realizing that a ML/probabilistic paradigm is necessary for AI is a significant step.
I don’t believe it’s a good FAI path: too much of it rests on heuristics, and the conceptual foundations are even more dodgy.
Watched this too, and I didn’t see anything “exponential” in the progress AI has made in MORE THAN 35 YEARS. And this is not for lack of computing power, whose growth actually has been roughly exponential, with computers now being about 20000 times larger and faster than in 1973 (rough check below).
We are still far, far away from our own monkey cleverness; some missing ingredient, maybe? 😀
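Just as a back-of-the-envelope check that the 20000x figure really is consistent with steady exponential growth over 1973–2009 (roughly 36 years; the numbers are approximate):

\[
20000 \approx 2^{14.3}, \qquad \frac{36\ \text{years}}{14.3\ \text{doublings}} \approx 2.5\ \text{years per doubling},
\]

i.e. roughly one doubling of hardware capability every two and a half years over that period.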
Exponential progress in computing power isn’t supposed to lead to exponential progress in perceived AI ability.
It is supposed to lead to a step function in perceived AI ability, with the step occurring when AI exceeds human ability.
I came across this view a few years ago; what do people think?
http://www.lehigh.edu/~mhb0/pubspage.html
Why Children Don’t have to Solve the Frame Problems
http://www.lehigh.edu/~mhb0/childrenframe.html
seems one of the more interesting.
Difficult to say what this is worth at first glance, but at least he seems concerned with problems of representation, which I think are definitely the stumbling block of AI, rather than sequence prediction. 🙂
YouTube version: http://www.youtube.com/watch?v=yReDbeY7ZMU