Goodbye 2011, hello 2012

I’ve decided to once again leave my prediction for when human level AGI will arrive unchanged.  That is, I give it a log-normal distribution with a mean of 2028 and a mode of 2025, under the assumption that nothing crazy happens like a nuclear war.  I’d also like to add to this prediction that I expect to see an impressive proto-AGI within the next 8 years.  By this I mean a system with basic vision, basic sound processing, basic movement control, and basic language abilities, with all of these things being essentially learnt rather than preprogrammed.  It will also be able to solve a range of simple problems, including novel ones.
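For concreteness, the stated mean (2028) and mode (2025) are enough to pin down a two-parameter log-normal, if one assumes the distribution is over years elapsed after 2011 (that origin year is my illustrative assumption, not something stated above). A minimal sketch of the arithmetic:

```python
import math

# Hypothetical reconstruction: arrival year = origin + X,
# with X ~ LogNormal(mu, sigma^2), so that
#   mean = origin + exp(mu + sigma^2 / 2)
#   mode = origin + exp(mu - sigma^2)
origin = 2011          # assumed origin year
mean_offset = 2028 - origin   # 17 years
mode_offset = 2025 - origin   # 14 years

# Two equations in two unknowns:
#   mu + sigma^2/2 = ln(mean_offset)
#   mu - sigma^2   = ln(mode_offset)
sigma2 = (math.log(mean_offset) - math.log(mode_offset)) * 2.0 / 3.0
mu = math.log(mode_offset) + sigma2

# The median of a log-normal is exp(mu), so the implied median year:
median_year = origin + math.exp(mu)
print(f"sigma = {math.sqrt(sigma2):.3f}, implied median year = {median_year:.1f}")
```

Under that assumed origin, the implied median lands between the mode and the mean, as it must for a log-normal (mode < median < mean).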


15 Responses to Goodbye 2011, hello 2012

  1. Carl Shulman says:

    Could you specify the “basic” capabilities a little to help distinguish them from the current state-of-the-art and make the predictions more testable?

    • Shane Legg says:

      It’s not so much that this system will be able to do something specific that current specialised systems cannot. For example, such a system will likely be unable to play table tennis, even though people have already built computers that can do this to some degree.

      The important difference is that it will be able to learn to do many things, rather than needing people to program it to be able to do each task. In other words, it’s not so much about what it can do, but about how it gets there. The resulting system would be much more integrated, adaptable and extendable. As such, a list of specific capabilities would not only be very difficult to draw up, it would also somewhat miss the point.

      The acid test of this agent I’m predicting is: if a smart person with a background in AI had the workings of the system explained to them over an hour or two, and then got to see it in action and interact with it, they would be highly likely to conclude that full human level AGI wasn’t more than a decade away.

    • Shane Legg says:

      Let me add a small meta comment: this might be my last annual prediction.

      One reason is that it’s kind of a lose-lose proposition. If my predictions are wrong then I have a cost now from people who think I’m crazy, plus more of the same in the future when I’m wrong. If my predictions are right, then I still have the same cost now, and in the future I doubt anybody will care to give me much credit. Most likely the ardent skeptics will have somewhat forgotten just how skeptical they were. The main upside, I think, is that it might encourage people like yourself working on safety to consider whether they should devote more time to relatively near term practical approaches to AGI safety. The impression I get is that in the last 5 years this has happened to some extent. Hopefully my predictions have helped this along at least a tiny bit.

      There is also a second reason why I’m most likely going to stop: I think the time has come now not for predicting and debating, but for doing. Predictions and arguments are useful food for thought, but in the end only two things actually matter: formal mathematical proofs (not merely the mathematically phrased arguments I keep on seeing), and thorough empirical demonstrations. Once the assumptions in the proof or the range of testing conditions are properly understood, there isn’t much left to argue about.

      • gwern says:

        As Hanson says, in practice making predictions does not pay the bills, since people aren’t really interested in the truth or accurate forecasts. If it makes you feel any better: if you are right, I at least will be impressed.

        • Shane Legg says:

          One complication with this prediction is that it might be impossible to falsify. For example, if a major military managed to build such a system, and it thus became clear to them that human level AGI and beyond wasn’t far off, there would be a strong incentive for them to keep this knowledge secret.

  2. Kevembuangga says:

    Hmmm… Yeah… Maybe…
    There are already some partial successes, but I am not too fond of this kind of piecemeal approach, because it does not bootstrap our knowledge.
    We will end up with an unmanageable “Frankenstein” AI if we don’t master the big picture of the how and why of intelligence, and just throw together gimmicks that “work”.
    The Singularitarians are right on this point about unfriendly AI, and this is not the way to go.

    • Shane Legg says:

      I think it’s unlikely that there is one clean super algorithm to rule them all. And if there is such an algorithm, I suspect it won’t be the first solution to AGI that we find; rather, an AGI will find it.

      What I think we will end up with is an integrated system that consists of multiple components that work together in a complementary way. I think this might be best explained by analogy. Imagine that you didn’t really know how to make a car. One approach might be to think as follows: Ok, so we need to be able to drive in straight lines, so we need a driving-in-a-straight-line module; plus we need to be able to go around corners, so we need a module that handles that case too; and then there are hills, so we will need a subsystem designed for going up and down hills… and so on. This approach isn’t going to work for car building, and it won’t work for AGI either.

      An alternative is how cars actually work: you need pistons, and a carburettor, and a fuel tank, and seats, and wheels, and gears and a clutch… and so on. It’s still a complex system with many parts that have functionally specific roles, but these roles are typically complementary and integral to the overall system. I think that this is how the brain is designed, for the most part, rather than having specific modules for, say, language.

  3. sp says:

    > I expect to see an impressive proto-AGI within the next 8 years. By this I mean a system with basic vision, basic sound processing, basic movement control, and basic language abilities, with all of these things being essentially learnt rather than preprogrammed.

    Could you elaborate on which developments in AI research make you think that a proto-AGI may appear so soon? Is it some new ideas (refs to published work or just names of researchers would be helpful)? Or is it some old ideas + progress in CPU speed? Basically, what is the critical difference between today and 8 years ago?


    • Shane Legg says:

      Many things, but in three areas:

      1) The simplest and least interesting is computer performance. Seeing the same rate of improvement again over the next 8 years probably isn’t strictly necessary, but it will clearly make it more likely that an impressive proto-AGI will be developed.

      2) Machine learning methods that I expect to be relevant to making a proto-AGI have made a lot of progress in the last 8 years, especially the last 3 years.

      3) Neuroscience is making a lot of progress. Some important parts of the brain’s design are now fairly well understood, and I think we’re making reasonable progress on a number of other important elements of the brain’s design. This is providing us with many AGI design hints.

      • sp says:

        Thanks for the answer. Computer performance, machine learning, neuroscience advances, it all makes sense. But could you be a little more specific about the machine learning methods? It seems you imply significance of recent progress. If it is not a secret, could you name some of these new methods? Thanks again!

        • Shane Legg says:

          As I’m currently working with collaborators in many of these areas, I prefer to talk about it after the research is published.

        • Ivo says:

          In the last 6 years, big progress has been made in unsupervised feature learning. You can see some of the successes in Andrew Ng’s talk.

          In the future, we will see progress in: sequence prediction, integration with reinforcement learning, scaling to scenes with multiple objects. And new problems will be discovered on the way.

          • sp says:

            Ivo, thanks for the link. Yes, unsupervised learning is essential for AGI, and some progress is happening all the time in various branches of machine learning. But I would not take the results presented in the talk as an indication of the breakthrough leading to a proto-AGI in 8 years. The ideas of sparse coding / independent component analysis date back to the 90s. And a lot is still missing. For example, how to deal with invariances (geometric, photometric, etc.) is poorly understood, especially in the unsupervised learning framework.

            From what Shane said (since he did not name a single ML approach), I assume that his forecast is (mostly) based on recent progress within the group of his collaborators. I look forward to seeing those results, when they get published.

  4. Really interesting website. Please post any possible amendments to the date now that Google has turned its attention to this area in earnest.
