Sutton on human level AI

Prof. Rich Sutton, probably the most famous person in the field of reinforcement learning, gave a talk today at the Gatsby Unit.  I was expecting a standard introduction to reinforcement learning to begin with, but it wasn’t to be.  Instead he kicked off with 20 minutes about the singularity.

Audience: So when do you expect human level AI?

Rich: Roughly 2030.

Whether or not you agree, views like this seem to be becoming more common in academia.


20 Responses to Sutton on human level AI

  1. Carl Shulman says:

    For every person like Rich, how many say “late this century” or “not this century”?

    • toby says:

      Or the opposite: how many say earlier than 2030?

      • Shane Legg says:

        I haven’t done a systematic survey or anything… but based on my experience the only people talking about pre-2025 are people who have companies that claim to be building an AGI now, which is to say, very few people. I can’t recall any academics saying this to me.

        • Will Newsome says:

          The Blue Brain and probably-upcoming Human Brain Project seem to be claiming something like human-level intelligence by 2025, as far as I can tell. What do AGI-minded neuroscientists generally think of those projects and timelines, or projects like them? How do such projects contribute to progress towards AGI? Aside from technical progress, is the publicity generated by such projects good or bad for the field (ignoring AI risks)?

          Relatedly, what is the common perception among AGI researchers of the societal/economic impacts of AGI once it’s developed? Kurzweil-style singularity, Hansonian economic transition, something else entirely, or do most AGI folk mostly not think about it?

          • Shane Legg says:

            That’s a lot of questions!

            Academic neuroscientists that I’ve spoken to, and that’s a fair number now, don’t think much of the Blue Brain project. They sometimes think it will be valuable in terms of collecting and cataloguing information about the neocortex, but they don’t think the project will manage to understand how the cortex works: there are too many unknowns in the model, and even if, by chance, they got the model right, it would be very hard to know that they had.

            Almost all neuroscientists seem to think that working brain models will not exist by 2025, or even 2035 for that matter. Whatever the date is, most consider it too far away to bother thinking much about.

            Such projects probably help to get more kids interested in the topic.

            Most AI/Neuroscience academics don’t think about AGI, and among those that do, most don’t think very hard about what the likely implications are. At least in my experience.

        • Tim Tyler says:

          > based on my experience the only people talking about pre-2025 are people who have companies that claim to be building an AGI now

          Eyeballing our respective graphs, we both put around 40% probability on some fairly spectacular braininess before 2025.

    • Shane Legg says:

      Very hard to say. For example, the people who think it’s a very long time off probably don’t bother talking about this. It could also be that people aren’t changing their minds; rather, it’s more acceptable to talk about their beliefs now.

      All I can say is that, in my experience, the view that the singularity is near (i.e. under 50 years away) seems to be becoming more publicly acceptable.

      • toby says:

        The people I’ve noticed who believe it’s a long way off are the 55-60 year old programmers who are mad because they weren’t smart enough to do AI.

        • Shane Legg says:

          I’ve also heard the opposite theory: that people tend to predict that it will occur towards the end of their own lifetime.

          • Michael Vassar says:

            That’s just not empirically true as far as I can tell. Vinge and Kurzweil, for instance, are of similar ages but give very different time estimates. Eliezer is much younger and gives time estimates intermediate between theirs.

  2. etrain says:

    Any video/transcript? I’d be fascinated to see this one.

    I saw a very similar talk given by Richard Granger (a world leader in ANN development) in 2007 at Dartmouth. He spent 20 minutes going over the state of the art in computational neuroscience to an informed audience. At the end of the lecture, he asked himself the question: “When will the artificial brain surpass the human brain?” To paraphrase his answer to a room of 21-year-olds: “Well within your lifetime.”

    • Shane Legg says:

      Interesting.

      No, it wasn’t recorded. Sejnowski is also pretty bullish about understanding the brain quite soon. These people are certainly a minority, but they do exist, their numbers appear to be growing, and some of them are very senior and well respected scientists.

  3. Sergiu says:

    What were Sutton’s arguments for this event happening on this timeline?

    When it comes to predicting such things, I think being a well-respected scientist doesn’t help with accuracy. It might even hinder it, as AI people wish for human level AI to happen as soon as possible so that their life’s work will have a nice meaning, which introduces a positive bias.

    • Shane Legg says:

      His singularity arguments were pretty standard ones; indeed, he was quoting Vinge, Moravec and so on.

      What I think is new is the growing social acceptance of the singularity idea in academic circles. It’s still pretty fringe, but it’s becoming less so. I think this is interesting purely as a social observation, and it will certainly aid those wanting to pursue singularity-related work within the academic system if there are known and respected professors who are open to these ideas.

      • Sergiu says:

        Good point, thanks! It is probably as you mentioned above: people feeling safer discussing their beliefs in public.
        I still find these talks a bit disturbing, as the job of academics doing science should be science, not predicting the future. And we have a bad precedent from predicting such things not more than 50 years ago. I hope the AI community will not look foolish again in 30 years.

      • Toby says:

        Well, Ray Kurzweil was on Jimmy Kimmel *and* The Daily Show last week. That accounts for a lot of people being exposed to the Singularity idea, even if most aren’t academics.

  4. Alpha Omega says:

    If Mr. Sutton is correct, I guess you have about a 20 year window of opportunity during which you can take over the world. You’d better get cracking on building a human level AI before someone else does, because they’ll almost certainly be more evil than you!

  5. Aron says:

    I made a bet recently with a friend that 10% or more of the cars in NYC will be robotically driven by 2031. I’ll be curious to see how that plays out. But here’s my question: does the singularity come before robocars in NYC or after?

    • Kevembuangga says:

      LOL, if your bet is successful the Singularity comes after robocars in NYC, because… just because, NOTHING can be predicted after the Singularity!
      (by its very definition, do you remember/grok that?)

      • Aron says:

        I’m glad you threw that LOL in there because I don’t think anyone could give that definition with a straight face.
