Goodbye 2010

Well, well, another year is drawing to a close. That means it’s once again time to review what has happened and where things are going.

It’s been a very eventful year for me, both personally and on the work front. I keep my personal life off this blog, and as for work… um, significant things are happening but I’m not ready to talk about them yet 🙂 Thus, I’ll just stick to my general predictions this time around.

First of all, my set of predictions for the teenies. We’re only one year in, so it’s not surprising that I’m still pretty comfortable with the predictions I’ve made. The only tweak I’ll make is that over the last year I’ve become slightly more confident that we’ll have a decent understanding of how cortex works before the end of the decade.

My longest-running prediction, which I’ve held since 1999, is the time until roughly human-level AGI. It’s been consistent since then, though last year I decided to clarify things a bit and put down an actual distribution and some parameters. Basically, I gave it a log-normal distribution with a mean of 2028 and a mode of 2025. Over the last year computer power has increased as expected, and so it looks like we’re still on target to have supercomputers with 10^18 FLOPS around 2018. In terms of neuroscience and machine learning, I think things are progressing well, maybe a little faster than I’d expected. I was toying with the idea of moving the prediction very slightly closer, but decided to play it safe and keep it unmoved at 2028. With many people thinking I’m too optimistic, showing restraint is perhaps wise 🙂 I can always move my prediction nearer in a year or two.
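For a sense of where the 2018 figure comes from, here’s a quick back-of-the-envelope sketch. It’s mine, not a calculation from the post: the late-2010 baseline (Tianhe-1A at roughly 2.57 × 10^15 FLOPS on Linpack) is a real data point, but the annual growth factor is an assumed round number, so treat the output as illustrative.

```python
# Back-of-the-envelope (illustrative, not from the post): starting from
# the late-2010 TOP500 leader and assuming top supercomputer performance
# roughly doubles each year, when is 1e18 FLOPS reached?
import math

baseline_flops = 2.57e15   # Tianhe-1A, November 2010 (Linpack)
target_flops = 1e18
annual_growth = 2.0        # assumption: ~2x per year

years = math.log(target_flops / baseline_flops) / math.log(annual_growth)
print(f"~{years:.1f} years, i.e. around {2010 + years:.0f}")  # ~8.6 years -> ~2019
```

A slightly faster assumed growth rate lands on 2018 exactly; the point is only that the target sits within the historical trend.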

One thing I screwed up last year was the 90% credibility region. Going by the log-normal CDF that David McFadzean computed for my predicted mean and mode (see the bottom of this page), the upper end should be a bit higher, at 2045, i.e. where the CDF reaches 0.95. It seems that I got the lower end right, however, as the CDF is about 0.05 at 2018. With 5% in each tail, that gives the 90% interval.
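Here’s a minimal sketch (mine, not David McFadzean’s actual calculation) of how the tail years can be recovered from the stated mean and mode. One loud assumption: a log-normal needs a zero point, and the post doesn’t say which origin year was used, so 2000 below is a guess and the resulting tails are only illustrative.

```python
# Recover log-normal parameters from the stated mean (2028) and mode (2025),
# then read off the 5th/95th percentile years.
# Assumed: the distribution is over years elapsed since 2000.
import math
from statistics import NormalDist

ORIGIN = 2000                               # assumed zero point
mean, mode = 2028 - ORIGIN, 2025 - ORIGIN   # 28 and 25 years

# For X ~ LogNormal(mu, sigma^2):
#   mean = exp(mu + sigma^2/2),  mode = exp(mu - sigma^2),
# hence mean/mode = exp(1.5 * sigma^2).
sigma = math.sqrt(math.log(mean / mode) / 1.5)
mu = math.log(mean) - sigma**2 / 2

z95 = NormalDist().inv_cdf(0.95)            # ~1.645
lo = ORIGIN + math.exp(mu - z95 * sigma)    # 5th percentile year
hi = ORIGIN + math.exp(mu + z95 * sigma)    # 95th percentile year
print(f"90% interval: {lo:.0f} to {hi:.0f}")  # ~2017 to ~2042 with this origin
```

Shifting the assumed origin year moves both tails by a few years, which is presumably where the 2018 and 2045 figures in the fitted CDF come from.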


7 Responses to Goodbye 2010

  1. Sean Strange says:

    I have a question about your AGI prediction: how exactly did you arrive at it? You cite statistics, but in what way are they meaningful? It’s one thing to project a well-defined quantity like computing power into the future, but it seems to me that trying to project something as nebulous and poorly understood as “human level AGI” is not really even a scientific enterprise.

    • Shane Legg says:

      Well, I’ve been doing machine learning and AI research for about 17 years, and now study neuroscience. Each year I look at what I think we need (which changes as I learn more), how far along we are on these things, and then try to estimate how much further we have to go.

      What statistics are you referring to? I don’t really cite anything in this post other than current computer power. For that I look at the kinds of computational bottlenecks that I think an AGI will face, at least based on how I think the first ones will likely work with a roughly brain-like architecture, and then try to project how much computer power will be needed to develop systems at the required scale. By the way, it’s not really supercomputer power that matters, but the amount of power that a typical researcher has access to, which is perhaps 4 orders of magnitude less (i.e. around 10^14 FLOPS). Nevertheless, supercomputer power is the easiest statistic to track.

      As for whether “human level AGI” is meaningful: I agree that this is a problem, and indeed much of my research on the universal intelligence measure and the algorithmic intelligence quotient has been about trying to make machine intelligence more precise and measurable. However, while my efforts may help for research purposes, I think that the metric that will count the most in the public eye will be people’s intuitive evaluation of the intelligence of an artificial agent: that moment when they interact with an AGI prototype and go “whhoaa, ok, so that’s actually fairly intelligent!” So one way to characterise my prediction would be that a significant proportion of the population will be saying this by the late 2020s or early 2030s.

      That might still seem too imprecise for you. However, I think that by the time we understand how to build an AGI to this level, we will have already solved most of the key problems and will be able to scale the machine up significantly. Clearly, superhuman machine intelligence will then be only a few years away.
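      For context, the universal intelligence measure referred to here is defined in the Legg & Hutter paper (linked in comment 4 below) as a simplicity-weighted sum of an agent’s expected rewards over all computable environments, roughly:

      ```latex
      % Universal intelligence of agent \pi (Legg & Hutter):
      % expected reward in each computable environment \mu,
      % weighted by the simplicity weight 2^{-K(\mu)}.
      \Upsilon(\pi) := \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi
      ```

      Here E is the class of computable environments, K(μ) is the Kolmogorov complexity of environment μ, and V_μ^π is the expected total reward of agent π in μ.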

  2. Aron says:

    When does some nuance show up in this criterion of human-level intelligence? Why don’t we list and rank the order in which we expect certain capabilities to be surpassed?

    Beating Ken Jennings on Jeopardy!
    Writing an entertaining work of fiction.
    Developing a theory worthy of a Nobel.
    Driving a car autonomously coast-to-coast.
    Eradicating the ugly bags of mostly water.
    etc.

    • Shane Legg says:

      A well-constructed list of these things would be a good idea, in my opinion. They would need to be fairly precisely defined to avoid people bending them over time to fit their predictions.

  3. Kevembuangga says:

    Might a well-argued criticism of AGI change your predictions a bit?

    • Shane Legg says:

      No.

      The post you link to has no bearing on my predictions.

      AIXI is to real AGI as Turing machines are to real computers. The point of these theoretical constructions is to act as succinct mathematical models that can be theoretically studied and investigated. They aren’t supposed to be designs for how to build effective and efficient systems. As a lot of people who don’t come from a math background get confused about this point, I made it the topic of my AGI 2010 conference talk.

      So yeah, I wouldn’t try to build an AGI by computing the incomputable just as I wouldn’t try to build the next generation of supercomputers by using a Turing machine with an infinitely long paper tape.

  4. I’d recommend to Sean and others like him the excellent paper by Shane and Marcus Hutter: http://www.vetta.org/documents/UniversalIntelligence.pdf

    And I’d suggest checking out the program of the AGI course on my blog, to become aware that some pretty well-understood views on intelligence already exist.

    As for Sean’s comment: in my opinion it’s the opposite. AGI, i.e. “human-level intelligence”, is pretty well defined: see what your baby (or you) can do at a given age, given a comparable sensory modality.

    That is, comparing human capabilities at a certain “level” with the capabilities of an AGI agent, ideally with comparable sensory inputs.

    There’s a robot project running, called IMCLeVER, with a nice architecture aimed at this goal: http://artificial-mind.blogspot.com/2011/03/im-clever-and-icub-eu-projects.html

    However, yes, for *narrow* AI there is a problem: what’s the general IQ of a chess program, or of a statistical machine translation program, etc.? The answer is that it is not human-level intelligent at all; it’s not comparable…

    Regarding predictions for AGI’s arrival, I’m more optimistic. 🙂 I think we now know the direction and what has to be achieved; it’s not like the 70s or 80s, when only people like Ray Solomonoff were aware. And calculations of the brain’s computing power are a bit fuzzy: the brain is not a computer, and it might not be the most efficient thinking machine anyway, so a computer may manage with less raw horsepower.

    To me, now is a time comparable to the mid-30s to early 40s for computers. Very few were aware of how to do it; they built the theoretical and practical foundations with no communication between them, and a few years later came an explosion.

    So I believe/wish it could be sooner than 2028 or 2025…
