Funding safe AGI

From time to time people contact me wanting to know what I think about whether they should donate money to SIAI.  My usual answer is something like, “I am not involved with what happens inside the organisation, so I don’t have any inside knowledge, just what I, and presumably you, have read online.  Based on this, my feeling is that, in absolute terms, nobody seems to know how to deal with these issues.  However, in relative terms, SIAI currently appears to be the best hope that we have.”  In response to such a question the other day I ended up elaborating on some of my thoughts about how safe AGI might be funded and the role that SIAI, or similar, might best play.  The remainder of this post is an edited version of that email.

My guess is that it will play out like this: SIAI’s contribution will be to raise awareness of the dangers of powerful AGI over the next decade or two.  As AGI progresses, their message will be taken more seriously. Then at some point powerful teams will start to race towards building the first real AGI.  The degree to which these groups will have been influenced by SIAI thinking will vary.  Due to greed, wishful thinking, ignorance and what have you, safety will in general come second to progress.  A short time later the post-human period will begin.  Where that goes will depend to some extent on fundamental properties of highly intelligent systems, and to some extent on these systems’ specific initial conditions.  Given our limited understanding, this currently feels like a roll of the dice to me.

Although SIAI raising awareness is helpful, I see it as playing a supporting role rather than a central one.  Some global problems require mass action and this can be achieved through mass awareness driving policy changes. Other problems, such as AGI development, will be driven by small focused groups of highly skilled people vying to be first.  Working out which ideas and teams will win such a race is impossible: even experienced VCs mostly pick duds.  The best one can do is to back a range of promising teams in the knowledge that only one needs to succeed in order to more than recoup the losses on those that failed.
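
To make that portfolio logic concrete, here is a minimal sketch of the arithmetic.  The 20x payoff figure echoes the VC discussion in the comments below; the portfolio size and the one-in-ten success rate are purely hypothetical numbers chosen for illustration, not figures from the post.

```python
# A minimal sketch (hypothetical numbers) of venture-style portfolio
# arithmetic: back many teams, expect most to fail, and rely on a
# single large success to more than recoup the losses.

def portfolio_outcome(n_teams: int, stake_per_team: float,
                      n_successes: int, payoff_multiple: float):
    """Total invested vs. total returned when n_successes of the
    n_teams succeed, each success paying payoff_multiple times its
    stake and each failure paying nothing."""
    invested = n_teams * stake_per_team
    returned = n_successes * payoff_multiple * stake_per_team
    return invested, returned

# Hypothetical: 10 teams, 1 unit of funding each, one 20x winner.
invested, returned = portfolio_outcome(10, 1.0, 1, 20.0)
print(f"invested {invested}, returned {returned}")
# invested 10.0, returned 20.0
```

With these assumptions a single winner pays for the whole portfolio twice over; the specific numbers don’t matter, only that the strategy needs just one success.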

The impression I get from the outside is that SIAI views AGI design and construction as so inherently dangerous that only a centrally coordinated design effort towards a provably correct system has any hope of producing something that is safe.  My view is that betting on one horse, and a highly constrained horse at that, spells almost certain failure.  A better approach would be to act as a parent organisation, a kind of AGI VC company, that backs a number of promising teams.  Teams that fail to make progress get dropped and new teams with new ideas are picked up.  General ideas of AGI safety are also developed in the background until such time as one of the teams starts to make serious progress.  At that point the focus would shift to making the emerging AGI design as safe as possible.

I realise that the safety margins on such a system will most likely fall short of what some consider to be necessary.  It’s not the “engineered for maximal safety from the outset” approach that I believe core SIAI people favour.  I also agree that this fact will create significant risks.  My point is that trying to build an (almost) ideally safe AGI also entails a great deal of risk, albeit for a different reason: you’re unlikely to be first, or even close to first. If I had to roll the dice, which bet would I prefer?  To hope for an ideal AGI design to be finished on time, but most likely see some other random design take the honours and suffer the consequences?  Or to bet on a number of AGI teams, where if any of them makes serious progress there is an organisation behind them trying to work out how to make the emerging design as safe as possible?  I can’t quantify these risks, but my gut feeling is to go for the second option.

There is also a funding angle to what I’m suggesting.  Donating money feels a bit like a sure loss, even when the cause is a good one.  Naturally the money achieves something, but to you personally it is simply gone, never to be seen again.  Investing feels different: you have an ongoing stake in something, a little bit of ownership.  If things go well you can make a big profit, and even if they don’t you can probably get at least some of your money back.  As a result, it is often easier to get somebody to invest $1000 in a project than it is to get them to donate $100.  I suspect that there are many people who aren’t donating to SIAI, or aren’t donating much, who would nevertheless be interested in investing in safe AGI development.  Given the difficulty of picking winners, a managed portfolio of promising small AGI startup teams seems like the best approach to me, where the parent organisation has safe AGI as a core concern.


15 Responses to Funding safe AGI

  1. Kevembuangga says:

    Sigh…
    Like all things “Singularitarian” the whole topic of AI friendliness is utterly idiotic.
    Friendliness cannot be ENFORCED upon any third party (of any kind, human, animal, robotic, ET aliens, whatever), friendliness only comes from shared values and peer recognition.
    As a demonstration, please try to bring the Taliban, or jerks of that sort, to friendliness.

    • jsalvati says:

      You seem to be somewhat confused (though less confused than many others). The whole idea of Friendly AGI is that “Friendliness cannot be ENFORCED upon any third party” and can only come from shared values, and therefore an important part of AI research must be to find out exactly what the human values are that we want the AI to have (this sounds much simpler than it is).

  2. Pingback: Accelerating Future » Shane Legg on Funding Safe AI

  3. michael vassar says:

    Strategically, I agree with you about SIAI’s main role being the promotion of Friendliness among other AGI projects. I think it may be realistic to expect that our case can be made clearly enough that most competently run AGI projects will eventually be as safety-conscious as we think is necessary. The trend towards greater safety consciousness among younger AGI developers is very strong. With luck, intelligence enhancement will accelerate this trend still further.

    As a modestly successful entrepreneur (www.sirgroovy.com), I disagree with you about the superiority of an investment model for raising funds. Venture and angel funding, as opposed to FFFF (friends, family and fools funding), is almost exclusively limited to projects that either grossly misrepresent science (e.g. blacklight technologies) or promise a high probability of real results in a reasonable time-frame. Behavioral economics and my own business experience both suggest that humans are not well suited to distinguishing probabilities between 0.1% and 10%, while any worthwhile AGI venture will have to be at the top end of this range. Ben Goertzel also has a very good record of raising funds, but this is rare in AGI. He also has a very good record of identifying AI talent that was later recognized elsewhere by those with much greater resources.

    The very best VCs don’t pick mostly duds. Hummer Winblad, for instance, gets a slight majority of its seed stage software start-ups to acquisition or IPO. Of course, they are very selective and only fund about 6 companies per year, as well as helping their companies a fair bit.

    Economics dictates that most VCs pick mostly duds: if VC were in general extremely profitable, that would draw money in until most VCs were either unskilled clods or overfunded, with no way to use most of their money effectively.

    Kevembuangga is amusing. He calls SIAI idiotic and then immediately makes our chief assertion as if it’s his own idea.

    • Shane Legg says:

      Hi Michael,

      Regarding VCs, I haven’t dealt with them personally, but in videos online where well-known tech VC people talk about the business I’ve heard them say that they look for something like 20x returns because the clear majority of companies they invest in don’t make it through to profitability. I guess that picking the winning AGI team or teams more than a year in advance will be at least as hard, and probably harder. Indeed, even if a safety-oriented group such as SIAI supported 10 different promising AGI efforts, my feeling is that there would still be a significant chance of some other group getting there first. Once real AGI starts to look like a serious near-term prospect to many people, there could even be thousands of groups trying from all over the world.

      As for the source of investment funds, I’m thinking that it would come more from relatively well-off people who are interested in and passionate about this kind of thing.

  4. asdf says:

    I think individual teams can and will be safe in their endeavors, probably indeed encouraged by organizations like SIAI. I remain worried about government-sponsored, in-the-dark projects headed by political figures who don’t understand the risks properly, rather than scientists who do understand the grave dangers here.

  5. michael vassar says:

    OK Shane, but I have a fair amount of experience with VCs, and repetition of that cliche is a meme, not an anticipation-controlling belief. Typical VCs do badly (for the economic reasons I explained), but they don’t anticipate any given company being likely to fail.

    • Shane Legg says:

      The guys I saw in this hour-long panel discussion about VCs (several years ago… no idea what the link is now) were wealthy and owned successful VC firms. They were not doing badly on average. Most of their early-stage investments failed, but the successes had such massive returns that they made plenty of money. If this is a cliche meme, it seems odd to me that a group of VC owners would all be pushing it on a panel discussion.

  6. michael vassar says:

    Regarding AGI, my guess is that if it is developed via brain emulation or very brain-inspired approaches there will be dozens of serious efforts, each very large and very expensive. This might also be the case with evolutionary approaches, though I doubt it. If AGI comes from some other approach I don’t expect it to come from many different sources; more like two or three groups of a dozen people. We should probably discuss our different models of scientific progress before or after the Summit.

    • Shane Legg says:

      I think the “gradient of progress” will play a big role here. If we go from very few people thinking powerful AGI is a near-term possibility (i.e. like now) to having a working powerful AGI a couple of years later, then there will be very few teams. But if most people come to believe it’s a real near-term possibility, and it then takes another 10 years to happen, with a number of increasingly impressive steps taken along the way, then there could be a large number of groups working on it by the end.

  7. michael vassar says:

    Re: VCs, I suspect that you just don’t know smart non-nerds and thus assume smart people have the motivations that smart nerds would have for the same behaviors. It is a cliche meme. VERY cliche.

    As noted, we need to discuss history of science and technology.

    • Shane Legg says:

      You’ve lost me regarding smart people and nerds and motivations for behaviours. My comments on VCs are not based on anybody I know, just talks I’ve watched online where VCs themselves describe their business.

  8. Pingback: Bayesian Investor Blog » Blog Archive » Assorted Links

  9. Pingback: MIRI’s February 2014 Newsletter | Machine Intelligence Research Institute
