One of the things I’ve been thinking about recently is the prediction of the future. Many people really enjoy doing this and come up with all sorts of wild speculations. It’s kind of like having the liberty to write your own science fiction, but then taking it a step further by convincing yourself to actually believe it. Sooner or later the future arrives, and many of the recorded predictions look rather silly. More cautious people take note of this and often avoid easily falsifiable predictions. That’s all very well, as it keeps them from ending up looking like fools; however, it also makes becoming a better predictor problematic, as they’re never really forced to confront their mistakes. My preference is to make an honest attempt at specific predictions, along with the reasoning behind them. Then, when the time comes, to go back over them and try to work out what went right, what went wrong, and most importantly why. Was it bad luck? Was I overconfident? Underconfident? Was some kind of systematic bias at work?
One example of this has been trying to predict the medium-term direction of the stock market over the last 15 years. The evidence so far shows that I’m consistently good at predicting what will happen, but that I expect it to happen much sooner than it actually does; I roughly need to double my time estimates. I’m now trying to mentally correct for this bias in the trades I make, but it will take some years to see whether this is working.
In technological matters, I’ve generally done quite well. My picks for Sun, Java, digital music, Linux, MySQL and open source in general were pretty much on the mark. I thought machine learning use in industry would be bigger than it is today, but I wasn’t too far off. My biggest mistake was to badly underestimate how much Microsoft’s revenues would grow over the last 10 years — I thought they’d already almost saturated the market and its ability to pay. As with the stock market, my most consistent error has been to predict that things will happen faster than they actually do. I typically need to add about 50% to the time required.
As I don’t have a lot of my own technological predictions to look at, and some remain in the future, I’ve recently been looking at predictions made by others. I found a few of my old computer and science magazines from the early ’80s through to the late ’90s which contained predictions, and I also dug up Kurzweil’s “The Age of Spiritual Machines”, written in 1999, in which he has a whole chapter about 2009. There were a lot of hits and misses, but if I stand back and try to see the big picture, a pattern becomes clear: predictions about basic hardware performance, even one I saw in a magazine from 25 years ago, are amazingly accurate. But you probably knew that already. Predictions about what would be technologically possible to do at a given point in time were not as accurate, but were still pretty good. Where things really started to go wrong was when they tried to predict not what would be possible, but what the majority of people would actually be doing.
Perhaps some examples would best explain this. State-of-the-art speech recognition systems, such as some of the systems that were being developed at IDSIA when I was there, work impressively well. However, once you’ve learnt to touch type it is typically easier, quieter, more convenient (especially when editing or coding) and far more private to use a keyboard. I don’t care how good speech recognition is, I don’t want to sit in a room full of people talking out loud to their computers all day. I only know one person who routinely uses speech recognition to input text. The fact that speech recognition is technologically doable doesn’t translate into it being practically useful in many everyday situations.
There are plenty of predictions that fail in this way: the prediction that by now everybody would be making video calls on their cellphones. It’s certainly technologically possible — I saw a guy with a phone that could do it two years ago — but almost nobody does it. Or that most long distance air travel would be in supersonic jets. Again, technologically possible, and has been for a long time, but not done in practice. Or that all mice would be wireless by now. Technologically possible, has been for years, but as far as I can tell most new mice still have cords. Or that most people driving long distance on freeways would get their car to drive itself automatically. I’m sure that’s technologically possible, but I don’t see anybody doing it. Or that your computer would log you in by recognising your face or your voice. Technologically possible today, but not done in practice. And so on.
In short: predicting raw performance is surprisingly accurate. Predicting what will be possible using the knowledge and technology of some future date can also be done with moderate success. Predicting what the population will routinely do, however, is much harder. The latter is largely decided by habit, cost and convenience. Simply being possible isn’t enough. Note that predicting the development of the first powerful AGI is of the second type.