Today is my birthday and as such I’ve decided to demand presents! Lots of presents! Gimme gimme gimme!
Now, I don’t have much use for material things, so they don’t interest me. (OK, so a lifelong stipend to allow me to work on what I think really needs to be done would be useful…) Besides, from a purely practical perspective, I don’t want more heavy things to have to cart with me to London next year. No, I want something more valuable and enlightening (and light). I want your wisdom.
Here’s the deal: Give me a suggestion, a pearl of wisdom, or just something you think I really should consider doing, or changing, or whatever. It can be as specific or general as you like. It could be something I’ve never heard before, or something you’ve told me a million times but I always seem to ignore. But just one thing! My part of the deal is that over the next two months while I’m in New Zealand I will do my best to put my prior beliefs aside and seriously consider each of your gifts. That’s the deal.
I’ve already sent a similar request to a few friends and have received some gifts, but I thought I’d also throw it out to a wider audience.
A wonderful story about EgoGoogling:
http://ming.tv/flemming2.php/__show_article/_a000010-001142.htm
Happy Birthday! 😉
Happy Birthday!
Although we haven’t met, your request for gifts was compelling enough for me to want to contribute something.
I don’t have any wisdom of my own that I feel is suitable for giving to a stranger, so I’ll do the next best thing and share a piece of I. J. Good’s that I particularly enjoyed:
“In our theories, we rightly search for unification, but real life is both complicated and short, and we make no mockery of honest ad hockery.”
Here’s a little gem of a quote I unexpectedly found in a song I’ve been listening to for at least a year:
“We can take this huge universe, and put it inside a very tiny head. We fold it.”
I’ve been reading Overcoming Bias (including the posts on the Map/Territory metaphor) WHILE listening to this song WITHOUT knowing what exactly the quote said! I’m not a native English speaker, so it’s sometimes hard for me to decipher spoken words correctly — imagine my surprise when I finally googled the phrase!
Oh, and if you’re into electronic music, you might like the track as well.
“Shpongle — Around the World in a Tea Daze.”
Happy birthday!
Well, if you ask for important stuff…
If you’re able to explain to me how Existence can exist rather than not exist, without any circular definition, that would make my day. Month. Life… whatever.
Descartes already tried, but by the end he was muttering quite a bit (he was already rather old when I was a kid), so I couldn’t hear his last thoughts.
Anyway, happy birthday Shane!
I must say: thank you for your Machine Super Intelligence thesis! It allowed me to understand the implications of Solomonoff induction.
Ask if you need help with anything. You are great.
Think deeper on Friendly AI. We haven’t finished the conversation at The Magnitude of His Own Folly, and I still can’t quite grasp your perspective on the problem. It’s too important a question to freeze the discussion over problems of presentation.
Shane, I know about you only what I read on your blog and a few comments on other blogs. I can however think of one small thing to offer as a response to your request. The SL4 mailing list used to have a higher quality than it does now. It also used to hold posters to a higher standard than it does now. Specifically, there used to be a role called the List Sniper who was authorized to ban posters from the list (and to take less extreme actions to control the quality of posts). My present to you — the only one I can think of in the 15 minutes since I read your request for presents — is to ask you to consider anew whether a Mailing List Sniper is a good idea or not.
Mark: Thanks for the quote.
I think there is a place for both. Much AI research is limited in its application due to being too ad hoc, while something like universal AI is limited by being too impractical. Most of us want something practical and yet very general; however, these goals seem to work against each other.
Vladimir Golovin: Thanks for the quote — it expresses a lot of the essence of intelligence in an elegant way. I like it.
Even for a native English speaker such as myself, working out song lyrics can be really difficult. I often have to google them.
Laurent: Hehe… yeah, that’s a hard one.
Shane: As you point out, the tension between the general and the ad hoc is real and tricky to navigate.
I think at some level this tension is due to the nature of the problems being solved. Fred Brooks notes in his “No Silver Bullet” that there is an important distinction to be made between “accidental complexity” and “essential complexity” of a problem (at least when it comes to programming).
It seems to me that at a very high level AGI has got rid of all the possible accidental complexity of the AI problem. What needs to be solved in a general AI problem can be stated very simply in terms of an optimisation. However, for any particular, practical problem there is essential complexity that cannot be overcome — representational choices, data processing, noise, etc. (I suppose this could be nicely formulated in terms of algorithmic information theory.)
I suspect this essential complexity may be the “complicated” that Good speaks of in his quote, and sometimes the only way through it is with some good general principles driving some honest ad hockery.
Ivo: thanks! I really put a lot of time and effort into trying to explain things as clearly as possible without leaving anything too important out. I’m glad to hear that you have benefited from my efforts.
Vladimir Nesov: I figured there was no chance of changing your mind, so I thought I may as well move on. But since you’ve requested it, I’ll write a blog post to sum up my position shortly.