Looking over my predictions for the teenies from a year ago, they already look pretty lame. Take 1/3 off the USA’s PPP GDP and you already get China; the latest Sony portable device has a 4-core processor; Intel’s latest set of CPUs are once again pretty awesome; schemes to let you pay for stuff with your phone are already getting under way (via both on-screen bar codes and near field communication); and a graphics card review I read the other day noted that the most graphically demanding games, on high resolution monitors with all the graphical bells and whistles switched on, now run very well on the latest “mid range” graphics cards.
At the time that I made my teenies predictions I thought they seemed a bit like predicting the obvious. But I’m now starting to wonder whether many of my predictions could have been more tightly assigned to the following 3-4 years, rather than the next decade.
One thing I’ve noted in the past is that it’s usually easier to predict fundamental things like FLOPS per dollar than it is to predict how these technological fundamentals will translate into applications. That might be true, but knowing that your computer of five years hence will have X bytes of storage and perform Y computations per second is a bit abstract for most purposes. What will be the new toys, the new applications, the new businesses? These are the things that impact people.
If predicting specific applications is a bit much to ask for (and if I could I might not want to tell you!), perhaps the next best is to predict the general nature of applications during a period of time. What you might call the “technological theme” of a period.
1980 to about 1995 was the period of the PC. Starting with hobbyists and niche applications and spreading to take over a large chunk of the office. The IBM PC marked the point at which this went mainstream. The defining characteristic was that the communication was typically local, if the machines were networked at all.
1995 to about 2010 was the period of the internet. First emails and basic web pages, search, then ordering online, online banking, music, video, etc. Netscape marked the point at which this went mainstream. The defining characteristic was that the communication was now global but the interface with the world was usually pretty traditional: keyboard, mouse, monitor.
So what’s the next theme? Mobile internet might be an answer, but I think that it’s more general than that. As great as the internet is, most of the important stuff still occurs in that other place called reality. Maybe it’s a new house with a swimming pool, throwing a party with friends, or coming down with a serious illness. I think the next theme will be for technology to interface more effectively with the world; being mobile is only one aspect of that. If I had to pin the start of this going mainstream on one thing, I’d say it was the iPhone, as that’s when the internet started to show up in the day to day moments of people’s lives as they’re out and about doing things.
Once the location, state and function of many everyday objects starts to spread onto the internet, all sorts of creative efficiencies become possible. Need to pay for the coffee? Just press a button on your phone. Not sure where your car is? Ask your phone to show you the way. Need a cab or a pizza or… just select what you want on some menus on your phone. Prices, special deals, time you’ll need to wait — it will all be there. Need to keep a close eye on your health? Get a small sensor implanted that monitors your blood insulin, oxygenation, pressure, cholesterol, heart rate and so on and wirelessly updates this information to your phone. Should a problem arise, your phone can let medics know where you are and what the problem seems to be.
I don’t expect this to be a sudden change, but rather a gradual absorption of goods, services and various everyday objects into an all pervasive information network. I think this will be a hot area until about 2025. Yeah, it’s going to take a while, not so much for many of these things to become possible, but rather for them to become cheap enough to be economic.
What’s my pick for the theme that comes after that? Well, once you have so much of the economy automated and hooked up together, with vast amounts of information about anything and everything swirling around, the key leverage point becomes how well you can intelligently process all this in order to control and coordinate things. Thus my pick for the theme from 2025 to 2040 is machine intelligence.
Speaking of machine intelligence, “Measuring universal intelligence: Towards an anytime intelligence test” by Hernández-Orallo has just been published in the high impact journal “Artificial Intelligence”.
Any difference from your AIQ?
Some similarities, but also many differences. I’ll have a technical report out before too long and you’ll be able to see the details then.
I agree with almost all of this, though I’d note that actually much of the functionality you mention already exists today. It’s in embryonic, early adopter stage technology, but it’s there. (All examples are on Android, since that’s what I know and use.) You can view menus and order food for home delivery without making a call through SnapFinger, get a nice black cab that you can track in real time on a map through Uber and lock/unlock your rental car and honk the horn to be able to find it through ZipCar.
I don’t know of any device that syncs up with smartphones for health monitoring yet, but devices like the FitBit exist for tracking steps taken, distance walked, calories burned, etc. (though nothing is implanted yet; I think that will be a significant barrier to widespread adoption, one that will only be overcome if the benefits are truly massive).
I didn’t know about all of these services (you clearly study this area a lot!), but I’m not surprised by them either — they’re clearly possible today and somewhat obvious. My point is that they will go from being ideas/trials/early adopter to being mainstream in a big way. The same is true for all the other periods I listed: desktop computers existed long before the IBM PC, people were using the internet long before Netscape, and mobile internet was around long before the iPhone.
I think people like to believe that the next big thing is going to be a radical new idea that almost nobody expected. In reality, I think it’s almost always an idea that’s been around for a while and is working already, but hasn’t yet reached the level of maturity, ubiquity, ease and cost that’s needed for it to really become a hit. This is less exciting as a story, but I think it’s much closer to how reality works. On the plus side, if this is true it also means that picking the general areas that are going to be big in the near future is not actually that hard.
I also think that while the obvious applications of integrating the internet with physical reality (cabs, pizzas, etc.) are already under way, the more creative ideas will only start to appear once the area is more mature.
“Thus my pick for the theme from 2025 to 2040 is machine intelligence.”
Maybe, but WHICH kind of intelligence?
Logic-based, godlike, infallible AGI is impossible.
So says Monica Anderson, and with good reason: the world is already “bizarre”, even before any kind of Singularity.
I think there will be many kinds. From fancy machine learning algorithms designed to do particular kinds of prediction and control, to fledgling AGIs. Logic will be a part of some of these systems, indeed logic is a part of our own thought, but I don’t think that purely logical systems will get very far.
A neural algorithm that tries to build an as-small-as-possible recurrent neural network that fits previous observations is a valid approximation of AIXI, because RNNs are Turing complete.
So there is no need to always oppose the logic behind AIXI and the logic behind neural algorithms.
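To make the Turing-completeness point concrete (this is only a hand-wired toy, not a learned network and certainly not an AIXI approximation), here is a minimal recurrent threshold net whose single state bit tracks the running parity of its input stream. XOR is not computable by one linear threshold unit, so each time step uses a two-unit hidden layer; the recurrence is what lets the state carry across inputs:

```python
def step(v):
    # Heaviside threshold unit: fires iff its input is positive
    return 1.0 if v > 0 else 0.0

def parity_rnn(bits):
    """Hand-wired recurrent net: the state h holds the running
    XOR (parity) of all bits seen so far."""
    h = 0.0  # initial state: parity of the empty stream
    for x in bits:
        a = step(h + x - 0.5)  # a = OR(h, x)
        b = step(h + x - 1.5)  # b = AND(h, x)
        h = step(a - b - 0.5)  # XOR(h, x) = OR(h, x) and not AND(h, x)
    return int(h)

print(parity_rnn([1, 0, 1, 1]))  # parity of three 1s -> 1
```

The point is only that a fixed recurrent wiring can carry computation through time; it says nothing about how such a wiring would be found by learning.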
Not a convincing argument.
It doesn’t matter that RNNs are Turing complete if the “critical step” to intelligence happens before computation proper is needed.
Computation only occurs over an already encoded model of reality; that is, the bits shuffled around by the computation(s) have a meaning which relates to some ontology of concepts and objects in/of the model.
It is the buildup of this ontology which is the problematic step, and it is glossed over by current AI research.
This does NOT refer to some would-be “mystical properties” of the brain or neurons, but to the feature extraction phase of any problem-solving method: what are the relevant features you have to sift out of the raw mass of data inputs to solve the problem at hand?
E.g. before you set out to solve a chess problem you have to “know” what the pieces and the chessboard are and what the rules are.
In chess this is a given which comes with the problem statement; in any AGI problem, aside possibly from the goal, the relevant concepts are NOT part of the problem statement (“score best in this environment”) and have to be created.
Of course this is ALSO a matter of computation(s), but WHICH kind of computations?
Best summarized by Nick Szabo:
The most important relevant distinction between the evolved brain and designed computers is abstraction layers.
Being “Turing complete” buys you no edge in this game…
Yes, I understand that in our brains, where neurons are recurrent and thus Turing complete, there are several layers which are increasingly abstract. And each layer can be roughly divided into subnetworks that work in parallel. Each subnetwork of the second layer tries to abstract the results of several subnetworks of the first layer. And so on for the next layers.
The AIXI approximation I’m working on tries to mimic our brain: I use increasingly abstract layers where each brain subnetwork is replaced by a subAI that abstracts its inputs when it outputs a short program that can generate an approximation of those inputs.
Like the subnetworks of the brain, each subAI of each layer can work in parallel.
I’m not sure if I understand your point.
Are you saying that logic and neural networks must be acceptable ways to do AGI because they are (at least suitable versions are) Turing complete?
Yes, in theory they can do it. For example, you could compute the swing of a pendulum using logic. First you’d use logical statements to build adders, then multiplication circuits, then floating point routines, and then finally you’d implement the physics equations. Or you could use SciPy and solve the problem in a few lines of code, more or less just by writing down the equation.
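For what it’s worth, the SciPy route really is just writing the equation down. A minimal sketch, assuming the standard simple-pendulum equation theta'' = -(g/L) sin(theta) and SciPy’s solve_ivp integrator (the parameter values are arbitrary example choices):

```python
import numpy as np
from scipy.integrate import solve_ivp

g, L = 9.81, 1.0  # gravity (m/s^2) and pendulum length (m): arbitrary example values

def pendulum(t, y):
    # y = [angle, angular velocity]; the return line IS the physics equation
    theta, omega = y
    return [omega, -(g / L) * np.sin(theta)]

# Release from 0.1 rad at rest and integrate for 10 seconds
sol = solve_ivp(pendulum, (0.0, 10.0), [0.1, 0.0], max_step=0.01)
print(sol.y[0][-1])  # angle at t = 10 s
```

No adders or floating point routines in sight: all the logical machinery is buried inside the numerical library, and the only thing left to state is the equation itself.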
So what I’m really claiming is that the conceptual model that (classical) logic uses is too far away from what needs to be done.
Logic is an ambiguous word. It seems that when you used it, you meant the following: rules like “A is true implies B is true” and “B is true implies C is true” are sometimes used to model an environment, but may not be enough to build an efficient AGI. I used the word to mean that the logic found in math theorems like AIXI is not incompatible with building a small recurrent ANN to model an environment.
What about the loss of vocal communication once it is possible to communicate directly with a machine (and therefore other people). Who would want to use their voice when you can think to another person or machine? Any idea when this will be possible?
That would require a very advanced brain interface, and it might not be possible at all without implanting something in your brain. So, other than limited examples in people with severe medical problems, I wouldn’t expect to see this for a long time.
Information technologists seem to mistake niftier gadgets and faster computers for fundamental technological advancement. From where I sit very little about our civilization has changed fundamentally in many decades outside of info-tech — in fact in many ways things seem to be going backwards. I’m talking about things like the energy sources that power all your gadgets, the propulsion systems that allow you to get around, and the colonization of space. Where are the great advances in these areas? Without fundamental breakthroughs, we are basically just shuffling information around ever faster, but are no closer to becoming a cosmic species, which from my perspective should be the number one long-term goal of our technological civilization!
The Cosmist says “Without fundamental breakthroughs…”
What is a fundamental breakthrough? Arguably, in science, there is nothing more fundamental than the scientific method itself. Inductive inference, as a math formalization of how to make the best prediction, and AIXI, as a math formalization of what is the best thing to do when you want to maximize a reward, are arguably two of the most fundamental scientific breakthroughs.
As are the recent detailed models of our brain’s neocortex, which show us a way to efficiently compute predictions.
They are so fundamental that they won’t change your life this year, but they may change it in the decades to come.
That sounds good and I hope you’re right, but I’m not going to hold my breath waiting for robot scientists to invent fusion power or space elevators! I have a very simple, perhaps primitive measure of the technological level of a civilization: the amount of *physical* stuff it can manipulate — i.e. its energy usage. Right now, by that measure, we are on a plateau which is largely powered by non-renewable fossil fuels, and if we don’t find an alternative to that soon the level will start to decline rapidly. This is the elephant in the room that information technologists seem to ignore; without energy all their googleplexes and robot swarms are useless heaps of metal and silicon!
“we are on a plateau which is largely powered by non-renewable fossil fuels, and if we don’t find an alternative to that soon the level will start to decline rapidly”
The answer is in your sentence. Renewable energies already exist, and the reason they are not more common is that fossil fuels are more competitive today. This is changing because fossil fuel extraction is becoming more expensive and is increasingly taxed. Renewable energies are also getting more efficient and thus cheaper. So it is easy to predict that renewable energy is our future.
The main obstacle is political: most scientists of the IPCC have been promoting renewable energies for decades, but they don’t make binding decisions. I like working on the teaching of the scientific method, and I hope that it will be one of the first things we teach in school. Then international scientific decisions will become more binding.