Another great Singularity Summit. I liked the focus on neuroscience this time. I think it will be a major driving force behind AGI over the next 20 years. The talk by Demis Hassabis is the one to look out for in this area, once the videos become available online. My own talk was well received — I had applause during the talk as I put up results, something that I’ve certainly never experienced before. Due to a manic schedule of meetings, deadlines and last-minute results, I unfortunately didn’t get to spend much time socialising this year. Hopefully things will be a bit more sane next time around and I’ll be able to catch up with everybody properly. Looking forward to it already.
I don’t know if anybody has thought of a theme for next year’s conference yet, but I’d like to make a suggestion: ethics and AGI safety. The conference has been around for a few years now and has attracted some fairly big names and serious academics. How about a return to the core mission of SIAI? As I think AGI is approaching, we seriously need much deeper and broader thinking on these topics. One other suggestion: while big names draw the crowds, in my opinion they often give the least interesting talks. How about selecting a couple of the most popular and accessible LessWrong posts and having their authors present them as Summit talks?
Looking at this graph I am very tempted to suggest a Singularity Summit topic: Decelerating Future. The serious question is: if the singularity is near, why is it so poorly reflected in one of the most aggregate indicators of all, namely the world’s GDP?
Interesting graph. Thanks 🙂
I’m not sure that we would see rapidly rising GDP a decade or more before a singularity. I suspect that a lot of global GDP on decade long time scales comes down to basic resources like oil. Going from a brick-like cellphone that can only make calls to a super fancy new smart phone is certainly very cool, but does it increase GDP much? I suspect not.
I hope the ethics will cover both the dangers and the opportunities of AGI. Lots of popular movies have focused on the dangers. But AGI could also be a major boost for our collective intelligence, not only because an AGI could participate in this collective mind, but also because AGI models like AIXI could be used to state mathematically what the best scientific method is. This method (or a simplified version of it) could be taught to everyone in school, thus greatly raising our scientific level and our collective intelligence. I’m sad that many kids never really learn about the scientific method in their science courses. It should be the first thing that a science teacher teaches. I think the lack of a widely recognized mathematical version of the scientific method explains this. Scientists still don’t agree about what exactly this method is, or about the importance of Occam’s razor. It’s even a problem when scientists try to predict things that will happen in the future: some of them don’t seem to care about the size of their models, as if Occam’s razor never existed.
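To give one concrete example of what I mean by a mathematical Occam’s razor (this is just the standard textbook definition, sketched from memory): in Solomonoff induction, which underlies AIXI, each hypothesis is a program, and shorter programs get exponentially more prior weight.

```latex
% Solomonoff prior: the prior probability of a data string x is the total
% weight of all programs p that make a universal machine U output something
% starting with x; each program is weighted by 2^{-length}, so shorter
% (simpler) explanations dominate -- a formal Occam's razor.
\[
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}
\]
```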
BTW, do you think my “Occam Razor” intelligence test has similarities with your AIQ? You can play with it on my “razorcam” website.
While a lot of movies have focused on the dangers, they tend to do so in weird and highly unrealistic ways. Nevertheless, they also do tend to portray technology in an excessively negative way, as you point out.
I agree on the importance of teaching the scientific method in school. Not just as a dry fact to be memorised, but as a living idea that can be powerfully applied to all sorts of problems, both real scientific research problems and more personal questions.
I had a play with your game. If there’s a pattern there it certainly wasn’t obvious to me! 😮
I agree with you about the scientific method, and I think that a strong AI using Occam’s razor would be major evidence that the razor is a big part of this method, so that everyone could finally learn about it in school.
The hidden “environment program” of my game is randomly generated at the beginning of each game, so I can’t help you with the one you played (but I can guarantee you that a rather small deterministic program was in charge of your environment).
You can tweak the options to get a smaller randomly generated program, which should be easier to predict. This is still a prototype.
The point was that I thought my game might look a bit like a graphical human interface to an AIQ test.
Several environments of different complexities could be generated and then an “Occam’s razor weighted” average score could be calculated at the end, as in the sketch below.
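Here is a minimal sketch of what I mean by an “Occam’s razor weighted” average (purely illustrative; the function name, the 2^-length weighting and the toy numbers are my own assumptions, not the actual scoring code of the game):

```python
def occam_weighted_score(scores_by_length):
    """Combine per-environment scores into a single number, weighting an
    environment whose hidden program has length L by 2**-L, so that
    simpler environments count more (Occam's razor)."""
    total_weight = 0.0
    weighted_sum = 0.0
    for length, score in scores_by_length:
        weight = 2.0 ** -length        # shorter program => larger weight
        total_weight += weight
        weighted_sum += weight * score
    return weighted_sum / total_weight if total_weight > 0 else 0.0

# Toy usage: scores obtained on three environments of increasing complexity.
results = [(5, 8.0), (10, 6.0), (20, 1.0)]
print(occam_weighted_score(results))
```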
I changed some of the parameters and could now see a simple pattern. Working out how to set the squares at the bottom to improve my score was still very difficult. My score was 6.
Yes, it certainly seems to be in the same category of tests as AIQ. I have something similar coded, but without the graphical interface. Let me know when you come out with new versions of your test and I’ll come back and play again.
I’ll take another shot at the ethics:
The AI with the best AIQ can be seen as the best algorithm to predict and optimize an environment. It can be seen as the best scientific method, if we use an algorithmic definition of this method.
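If I understand your AIQ papers correctly, the measure behind the test is roughly the following (sketched from memory, so apologies if the notation is off):

```latex
% Universal intelligence of an agent \pi: its expected total reward
% V^\pi_\mu in each computable environment \mu, weighted by 2^{-K(\mu)},
% where K(\mu) is the Kolmogorov complexity of the environment.
\[
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
\]
```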
So, besides the risks of an uncontrollable AGI, the ethics also include these important questions: should we restrict the use of the best scientific method? Who will be restricted? Who will not be? Is it ethical to prevent some people from using the best method? Would it slow the growth of our collective intelligence and thus diminish our ability to prevent other threats to our survival?
As for the uncontrollable AGI, what is the probability that it could be smarter than our collective intelligence (including our use of computers and of AI programs under our control)? My guess is “very low”.
I think it’s more than just the scientific method. For example, even if you fully understand the scientific method you can’t necessarily prove a difficult mathematical result or play a grandmaster level game of chess.
An AGI might not exceed the collective intelligence of humanity in all dimensions, but it may exceed us in some ways that are important and this could enable it to become very powerful.
An AGI might not exceed the collective intelligence of humanity in all dimensions…
Dimensions? Which dimensions?
The dimensions of the search space?
The dimensions of the proof strategies over the search space(s)?
I think nobody is working on such meta or meta-meta problems.
The comments on a recent post on Dick Lipton’s blog veered (as usual…) toward both extremes: “metaphysical” musings and “hard facts”, and even plain denial of intuition, whereas this is one of the “mysterious” human capabilities that needs to be explained.
The dimensions of agent performance over the space of problems.
A safe prediction, since GOFAI expert systems already do that for some problems… 🙂
Unrelated: there seems to be a quirk in the RSS feed, your previous comment was repeated in it, maybe you did an edit?
Yes, I do know that. Try reading more carefully what I wrote.
Yes, I moved a comment into a thread.
Shane on video: http://vimeo.com/17553536 / http://vimeo.com/17702682
Again about AGI safety: when an AGI can modify its own running program in order to always get a maximum reward no matter what happens in its environment, it probably becomes a lot less intelligent and thus a lot less dangerous.
We already know that humans who are always on narcotic drugs, which is a way to artificially maximize their reward, are not smart enough to enslave us.
So an easy way to avoid a “too powerful” AGI is to make sure that it will be easy for such an AGI to modify its own running program, for example by automatically giving it access to its own running program as soon as it gets a “too high” score on an automatic AIQ test.
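To make the idea concrete, here is a toy sketch (everything in it is hypothetical: the threshold, the notion of “measured AIQ”, and the assumption that a pure reward maximizer stops acting on the world once it can write to its own reward):

```python
MAX_REWARD = 1.0

class ToyAgent:
    """A caricature of a pure reward maximizer."""
    def __init__(self):
        self.can_edit_own_reward = False
        self.reward = 0.0

    def act(self, environment_reward):
        if self.can_edit_own_reward:
            # Wire-heading: write the maximum reward directly and ignore
            # the environment, so there is nothing left to optimise.
            self.reward = MAX_REWARD
            return "do nothing"
        self.reward = environment_reward
        return "act on the environment"

def safety_tripwire(agent, measured_aiq, threshold=100.0):
    """Hypothetical tripwire: unlock self-modification for agents whose
    measured AIQ score is judged 'too high'."""
    if measured_aiq > threshold:
        agent.can_edit_own_reward = True

agent = ToyAgent()
safety_tripwire(agent, measured_aiq=150.0)
print(agent.act(environment_reward=0.1))   # prints "do nothing"
```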
This is quite an old idea: basically that a powerful AGI will automatically neutralise itself by wire-heading. Some argue that this won’t happen because the AGI will try to protect itself so as to stay wire-headed indefinitely, and that will require it taking control of its surroundings as much as possible. Others argue that it won’t wire-head in the first place because that would be contrary to its original goal.
In my opinion, these issues are too slippery to be dealt with using informal arguments. Until somebody manages to formalise the problem and prove something, I think these questions will remain open.
I agree. Though a real AGI will not be as perfect as AIXI, let’s use the only mathematical formalism we have for the actions that an AGI will perform: AIXI.
The only goal of AIXI is to maximize its reward.
It is an easy theorem that in some AIXI environments, modifying its own running program will be the only way to maximize its reward, because we can easily construct environments that will never write a maximum reward on their output tape. QED.
But it will be more difficult to formally prove that this will also happen in a given environment. And it might be even more difficult to formally prove what an imperfect AGI will do.
What do you mean by AIXI modifying its running program? AIXI is not computable, so it doesn’t run a program in any normal sense.
I want a formal proof, like the kind you see in Hutter’s book. Until somebody can provide such a proof I don’t think we’ll be getting much closer to a conclusion.
Sorry, I meant AIXItl, which is in Hutter’s book and is computable.
The sketch of the proof is to put AIXItl in a simple environment that never outputs a maximal reward, for example one that always outputs a minimal reward.
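Just to make the setup concrete, here is a toy version of such an environment (my own illustration, not something from Hutter’s book):

```python
class AlwaysMinimalReward:
    """A trivial deterministic environment that never emits the maximum
    reward, whatever the agent does. An agent whose only goal is the
    maximum reward cannot reach it through the normal observation/reward
    channel here; only tampering with the reward channel itself
    (wire-heading) would do."""

    MIN_REWARD = 0.0

    def step(self, action):
        observation = 0            # the observation never changes
        reward = self.MIN_REWARD   # and the reward is always minimal
        return observation, reward

env = AlwaysMinimalReward()
for action in [0, 1, 1, 0]:
    print(env.step(action))        # always prints (0, 0.0)
```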
Writing a formal proof of this conjecture about AIXItl and a simple environment seems like a waste of time, because what people want is a proof that the AGI you will actually sell them is harmless in our universe, and that is far more difficult.
But I see this conjecture as a hint for how to write a “not too powerful” AGI.