Tuesday, 26 January 2010

The next big thing in Artificial Intelligence?

One of the few things that have consistently bugged me for many years is our inability to create really smart artificial intelligence. Just have Google translate a page for you and you'll see how poor natural language understanding really is. You don't even need to go that far: we don't even have a decent personalized software agent that will present us with the most interesting feeds and learn to filter out what we don't care about. Image recognition, speech synthesis, even a robot to clean your house, all seem incredibly difficult to achieve. Why?

Well, these types of problems simply cannot be solved with traditional software methods. The number of calculations necessary to simulate even the most elementary of neural networks is ridiculous. We could possibly build an excellent translation system with very complex algorithms, but we'd have to run it in the cloud to get results within an acceptable time frame. Computing power may be cheap, but you still need to go through all the steps. One could argue that even a human brain is not that efficient at translation, so let's try a much simpler example.
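To give a feel for what 'going through all the steps' means, here is a toy sketch of my own (nothing rigorous) of a fully connected layer update: every neuron has to loop over every one of its inputs, one multiply-add at a time.

```python
# My own toy sketch of why software simulation of a network is costly:
# each neuron's output requires a loop over all of its inputs.
import math
import random

def step(weights, activations):
    """One update of a fully connected layer: O(neurons * inputs) multiply-adds."""
    new_activations = []
    for neuron_weights in weights:                 # one pass per neuron...
        total = sum(w * a for w, a in zip(neuron_weights, activations))
        new_activations.append(math.tanh(total))   # ...then a squashing nonlinearity
    return new_activations

# Even a toy network of 1,000 neurons costs a million multiply-adds per step.
n = 1000
weights = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
activations = [random.random() for _ in range(n)]
activations = step(weights, activations)
```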

In order to accurately calculate the trajectories of all planets in our solar system, you need lots of complex code, executing a large number of operations. However, nature seems to do it all instantly. No matter how many planets you add, there seems to be no cost whatsoever in calculating where each planet will be in the next time interval. How is that possible?
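For the curious, here is roughly what that code looks like. It's my own toy illustration, not a real simulator: a crude Euler integrator where every time step has to visit every pair of bodies, so the cost grows quadratically with the number of planets.

```python
# A rough sketch of the bookkeeping a digital n-body simulation has to do:
# every time step touches every pair of bodies (O(n^2) work).
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def advance(bodies, dt):
    """One Euler step over all bodies."""
    for b in bodies:
        ax = ay = 0.0
        for other in bodies:
            if other is b:
                continue
            dx, dy = other["x"] - b["x"], other["y"] - b["y"]
            r2 = dx * dx + dy * dy
            r = r2 ** 0.5
            a = G * other["m"] / r2          # acceleration toward 'other'
            ax += a * dx / r
            ay += a * dy / r
        b["vx"] += ax * dt
        b["vy"] += ay * dt
    for b in bodies:
        b["x"] += b["vx"] * dt
        b["y"] += b["vy"] * dt

# Two toy bodies: a 'sun' and an Earth-like 'planet'.
bodies = [
    {"m": 2e30, "x": 0.0,    "y": 0.0, "vx": 0.0, "vy": 0.0},
    {"m": 6e24, "x": 1.5e11, "y": 0.0, "vx": 0.0, "vy": 29_800.0},
]
for _ in range(1000):
    advance(bodies, 60.0)  # one-minute steps
```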

In nature, the laws are obeyed immediately. Probabilities for all possible states seem to be calculated instantly and the bodies just 'know' where they are supposed to go. If we could simulate such an algorithm, it would always take the same time to execute, no matter how many planets we added. Amazing, eh?

Well, we can get pretty close, if instead of software, we use hardware. Analog circuits have already been built to solve various differential equations. The leap from these humble beginnings to artificial networks of interconnected transistors is not that difficult. Or is it?
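To make the contrast concrete, here is a little sketch of my own, assuming a plain RC low-pass stage: the physical circuit 'solves' dV/dt = (V_in - V)/(RC) continuously and for free, while a digital simulation of the very same behaviour has to grind through explicit time steps.

```python
# Digital simulation of what an RC stage does continuously and 'for free'.
import math

R, C = 10_000.0, 1e-6      # 10 kOhm, 1 uF -> time constant RC = 10 ms
V_in, V = 5.0, 0.0         # 5 V step input, capacitor initially empty
dt = 1e-5                  # 10-microsecond steps

for _ in range(5000):      # simulate 50 ms, one tiny step at a time
    V += (V_in - V) / (R * C) * dt

print(f"Capacitor voltage after 50 ms: {V:.3f} V "
      f"(analytic: {5.0 * (1 - math.exp(-5)):.3f} V)")
```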

Our brain has billions of neurons, each with thousands of connections to other neurons. Its scale is tremendous. Just imagine trying to lay out an integrated circuit with the same characteristics. Our current processors may seem complex, but they are essentially composed of repeating patterns. Compared to our brains, they are trivial. How would one go about building a circuit as complex as our brain?
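Some back-of-envelope numbers (rough public estimates, nothing exact) show just how wide the gap is:

```python
# Rough orders of magnitude, not precise figures.
neurons_in_brain    = 9e10   # ~90 billion neurons
synapses_per_neuron = 1e4    # on the order of thousands of connections each
transistors_in_cpu  = 1e9    # a high-end CPU circa 2010

brain_connections = neurons_in_brain * synapses_per_neuron
print(f"Brain: ~{brain_connections:.0e} connections")
print(f"Ratio to a 2010 CPU: ~{brain_connections / transistors_in_cpu:.0e}x")
```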

Well, first you need to develop the technique to fabricate a billion components (say transistors, for now) and interconnect them with each other. That will probably not be easy, but fabricating in three dimensions is already possible. Then, you need a way to train this network to do what you want it to do. Now, that's hard! In essence, you need a circuit that can modify the strength of the interconnections between the transistors that compose it, via a feedback loop. Since we don't really understand the structure of our brain, we would probably also need to use genetic algorithms to 'evolve' the circuits. The idea is to present a number of such circuits with a problem, select the best performers and 'breed' them to get the next generation of candidates. After a large number of iterations, you end up with a hardware neural net adapted to solve that particular problem.
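In software terms, that evolutionary loop looks something like the sketch below. Everything here (the tiny 2-2-1 'circuit', the XOR toy task, the breeding scheme) is my own illustrative choice, not a description of any real hardware.

```python
# My own sketch of the evolve-a-circuit loop: each 'circuit' is a vector of
# connection strengths, fitness is how well it solves a toy task (XOR), and
# the best performers are bred (crossover + mutation) into the next generation.
import math
import random

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def circuit_output(w, x1, x2):
    """Tiny 2-2-1 network; w holds its 9 connection strengths and biases."""
    h1 = math.tanh(w[0] * x1 + w[1] * x2 + w[2])
    h2 = math.tanh(w[3] * x1 + w[4] * x2 + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

def fitness(w):
    """Negative squared error over the toy task: higher is better."""
    return -sum((circuit_output(w, *inp) - target) ** 2 for inp, target in XOR)

def breed(a, b):
    """Uniform crossover plus a little mutation."""
    child = [random.choice(pair) for pair in zip(a, b)]
    return [g + random.gauss(0, 0.2) for g in child]

population = [[random.uniform(-2, 2) for _ in range(9)] for _ in range(100)]
for generation in range(300):
    population.sort(key=fitness, reverse=True)
    parents = population[:20]                        # keep the best performers
    population = parents + [
        breed(random.choice(parents), random.choice(parents)) for _ in range(80)
    ]

best = max(population, key=fitness)
print("fitness of best circuit:", fitness(best))
```

In hardware, the 'genome' would be the physical connection strengths rather than a Python list, but the select-and-breed loop is the same idea.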

These circuits will not be general problem solvers like our brains. But even our brain is composed of various interacting parts. The more we understand their functions, the more we can mimic their operation. When we accomplish this, interconnecting the various modules will bring us very close to our holy grail.

Now, there is certainly some research related to these ideas, but to my knowledge, most AI research is still done on theory and software. The appeal of hardware solutions is that they take advantage of the universe's uncanny ability to simply 'know' what needs to happen next. All the inputs to a particular artificial neuron will 'magically' be summed, and its output will be calculated instantly.
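That 'magic' is really just Kirchhoff's current law: input voltages driving resistors into a common node add up as currents, so the weighted sum an artificial neuron needs comes straight from the physics. A tiny illustration, with made-up values:

```python
# What an analog summing node 'computes' in one shot:
inputs_volts = [0.8, -0.3, 0.5]          # outputs of upstream neurons
conductances = [1e-3, 2e-3, 0.5e-3]      # 1/R of each connection: the 'weights'

current_into_node = sum(v * g for v, g in zip(inputs_volts, conductances))
print(f"Summed current: {current_into_node * 1e3:.3f} mA")
```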

In one paper I read, there was a lower bound on the response time you can get from such circuits, since you have to wait for them to 'settle' before you read the answer. I don't know enough to say whether that limit can be overcome, but I expect that with such a technique we could get artificial brains much faster than with our current methods.

Then again, maybe I'm just an ignorant fool.
