From time to time people contact me wanting to know what I think about whether they should donate money to SIAI. My usual answer is something like, “I am not involved with what happens inside the organisation, so I don’t have any inside knowledge, just what I, and presumably you, have read online. Based on this, my feeling is that, in absolute terms, nobody seems to know how to deal with these issues. In relative terms, however, SIAI currently appears to be the best hope that we have.” In response to such a question the other day, I ended up elaborating further on some of my thoughts about how safe AGI might be funded and the role that SIAI, or a similar organisation, might best play. The remainder of this post is an edited version of that email.
My guess is that it will play out like this: SIAI’s contribution will be to raise awareness of the dangers of powerful AGI over the next decade or two. As AGI progresses, their message will be taken more seriously. Then at some point powerful teams will start to race towards building the first real AGI. The degree to which these groups will have been influenced by SIAI’s thinking will vary. Due to greed, wishful thinking, ignorance and what have you, safety will in general come second to progress. A short time later the post-human period will begin. Where that goes will depend to some extent on fundamental properties of highly intelligent systems, and to some extent on these systems’ specific initial conditions. Given our limited understanding, this currently feels like a roll of the dice to me.
Although SIAI raising awareness is helpful, I see it as playing a supporting role rather than a central one. Some global problems require mass action, and this can be achieved through mass awareness driving policy changes. Other problems, such as AGI development, will be driven by small, focused groups of highly skilled people vying to be first. Working out which ideas and teams will win such a race is impossible: even experienced VCs mostly pick duds. The best one can do is to back a range of promising teams, in the knowledge that only one needs to succeed in order to more than recoup the losses on those that fail.
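The portfolio logic here is just expected-value arithmetic, and can be made concrete with a toy calculation. All of the numbers below are hypothetical, chosen only to show the shape of the argument, not to estimate anything about real AGI teams:

```python
# Toy sketch of the VC-style portfolio argument (all numbers hypothetical):
# back 10 teams at 1 unit each; each independently has a 5% chance of a
# 100x payoff and a 95% chance of returning nothing.

n_teams = 10
cost_per_team = 1.0
p_success = 0.05
payoff = 100.0

total_cost = n_teams * cost_per_team
# Probability that at least one team succeeds.
p_at_least_one = 1 - (1 - p_success) ** n_teams
# Expected total return across the portfolio.
expected_return = n_teams * p_success * payoff

print(f"total cost:              {total_cost}")          # 10.0
print(f"P(at least one success): {p_at_least_one:.2f}")  # 0.40
print(f"expected return:         {expected_return}")     # 50.0
```

Under these made-up numbers no single bet is likely to pay off, yet the portfolio as a whole has an expected return well above its cost, which is the sense in which one success "more than recoups" the failures.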
The impression I get from the outside is that SIAI views AGI design and construction as so inherently dangerous that only a centrally coordinated design effort towards a provably correct system has any hope of producing something safe. My view is that betting on one horse, and a highly constrained horse at that, spells almost certain failure. A better approach would be to act as a parent organisation, a kind of AGI VC firm, that backs a number of promising teams. Teams that fail to make progress get dropped and new teams with new ideas are picked up. General ideas about AGI safety are also developed in the background until one of the teams starts to make serious progress, at which point the focus would shift to making the emerging AGI design as safe as possible.
I realise that the safety margins on such a system will most likely fall short of what some consider necessary. It’s not the “engineered for maximal safety from the outset” approach that I believe core SIAI people favour, and I agree that this creates significant risks. My point is that trying to build an (almost) ideally safe AGI also entails a great deal of risk, albeit for a different reason: you’re unlikely to be first, or even close to first. If I had to roll the dice, which bet would I prefer? To hope that an ideal AGI design is finished in time, but most likely see some other random design take the honours, and suffer the consequences? Or to bet on a number of AGI teams, where if any of them makes serious progress there is an organisation behind them trying to work out how to make the emerging design as safe as possible? I can’t quantify these risks, but my gut feeling is to go for the second option.
There is also a funding angle to what I’m suggesting. Donating money feels a bit like a sure loss, even when the cause is a good one. Naturally the money has an effect, but to you personally it is just gone, never to be seen again. Investing feels different: you have an ongoing stake in something, a little bit of ownership. If things go well you can make a big profit, and even if they don’t you can probably get at least some of your money back. As a result, it is often easier to get somebody to invest $1000 in a project than to get them to donate $100. I suspect that there are many people who aren’t donating to SIAI, or aren’t donating much, who would nevertheless be interested in investing in safe AGI development. Given the difficulty of picking winners, a managed portfolio of promising small AGI startup teams seems like the best approach to me, with the parent organisation holding safe AGI as a core concern.