Saturday, February 6, 2010

Science fiction at its worst

I absolutely love sci-fi and I recently finished watching all the old episodes of Star Trek: The Next Generation. I then moved on to Star Trek: Voyager and I'm now halfway through the third season. There are quite a few things I don't like about most of the episodes, lack of imagination being the major issue. However, nothing ticks me off like blatant disregard for the audience's intellect.

Voyager has quite a few episodes where the decisions taken by the captain and her officers go beyond the melodramatic, bordering on the idiotic. Logical fallacies and plot holes are common, but the last episode I watched exceeded all boundaries and had me cursing at the morons who wrote this junk.

For those who know nothing of Voyager, it is a ship stranded on the other side of the galaxy, trying to make it home. "Future's End" is a two-episode saga summarized as follows: Voyager encounters a small vessel with one human, who travelled back in time from the 29th century to destroy them, in order to avert a catastrophe in his own era. A battle ensues and both ships are thrown back in time. The timeship ends up on Earth in 1967 and Voyager in 1996. The crew find the timeship's captain and learn that someone else had found his vessel and exploited its technology to kick-start the computer age on Earth. This clever dude tries to use the ship to travel to the 29th century and steal more technology, but he doesn't know how to fly it, so Voyager has to stop him to avert the 29th-century catastrophe. They do stop him and are stranded in 1996, but then the unthinkable happens.

The same time traveller emerges from a time rift and tells them that 29th-century humans monitor time and discovered that Voyager was in an age where it shouldn't be. He has never met them, since he "never experienced that timeline". He instructs them to return to their own time, but refuses to help them get home faster. What he says is unbelievable: "We can't intervene. Time travelling prime directive".

OK, now let me get this straight.
- The same people who do not hesitate to send someone back in time to destroy a ship have a prime directive that does not allow them to move that ship to another sector.
- Time-travel paradoxes are supposedly understood as causality loops (A leads to B, which leads to C, which leads back to A). BUT it is apparently possible to change C, whereupon another "timeline" emerges. Well, in that other timeline the original meeting should never have occurred, and the crew should have found themselves back in their original time and place, remembering nothing of the meeting.

You can't have your cake and eat it too. In sci-fi, the reader or audience expects you to present your own assumptions. The more far-fetched, the better, because we want the journey to a different reality. But YOU CANNOT ignore your own assumptions. Either go with the alternate-universe assumption (changing the past splits the universe in two), or stick with the fatalistic causality-loop assumption. You cannot depend on the audience forgetting everything you have told them.

Critical thinking is becoming a rare asset. TV depends on its eradication to increase profits. The example I just described is blatant, but we are constantly being conditioned to accept more and more melodrama, unsupported claims and propaganda. The trend should have every thinker out there worried. The least we can do is speak out against these idiots whenever they insult our intelligence.

Tuesday, January 26, 2010

The next big thing in Artificial Intelligence?

One of the few things that has consistently bugged me for many years is our inability to create really smart artificial intelligence. Just have Google translate a page for you and you'll see how bad natural language understanding really is. You don't even need to go that far. We don't even have a decent personalized software agent that will present us with the most interesting feeds and learn to filter out what we don't care about. Image recognition, speech synthesis, even a robot to clean your house all seem incredibly difficult to achieve. Why?

Well, these types of problems simply cannot be solved with traditional software methods. The number of calculations necessary to simulate even the most elementary of neural networks is ridiculous. We could possibly build an excellent translation system with very complex algorithms, but we'd have to run it in the cloud to get results within an acceptable time frame. Computing power may be cheap, but you still need to go through all the steps. One could argue that even a human brain is not that efficient at translation, so let's try a much simpler example.
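To get a feel for the numbers, here is a rough back-of-the-envelope sketch in Python (the layer sizes are made up purely for illustration):

```python
# Rough cost estimate: multiply-accumulate operations in a single
# forward pass of a small, fully connected network.
layer_sizes = [1000, 1000, 1000, 10]  # hypothetical network

macs = sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))
print(f"~{macs:,} multiply-accumulates per input")  # ~2,010,000

# Scaling toward a brain (~10^11 neurons with ~10^3 synapses each)
# puts a single pass at roughly 10^14 operations -- before any
# learning happens at all.
```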

In order to accurately calculate the trajectories of all the planets in our solar system, you need lots of complex code executing a large number of operations. Nature, however, seems to do it all instantly. No matter how many planets you add, there seems to be no cost whatsoever in calculating where each planet will be in the next time interval. How is that possible?
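To see where the digital cost comes from, here is a minimal sketch (a toy Euler integrator, not a serious simulator): every body has to be compared against every other body, so each planet you add makes the inner loop visibly longer.

```python
import itertools

G = 6.674e-11  # gravitational constant

def step(bodies, dt):
    """One naive integration step: O(n^2) pairwise force updates."""
    # bodies: list of dicts with mass "m", position "pos", velocity "vel"
    for a, b in itertools.combinations(bodies, 2):
        d = [qb - qa for qa, qb in zip(a["pos"], b["pos"])]  # a -> b
        r2 = sum(c * c for c in d)
        r = r2 ** 0.5
        f = G * a["m"] * b["m"] / r2  # Newtonian gravity
        for i in range(3):
            a["vel"][i] += f * d[i] / r / a["m"] * dt
            b["vel"][i] -= f * d[i] / r / b["m"] * dt
    for body in bodies:
        for i in range(3):
            body["pos"][i] += body["vel"][i] * dt
```

Ten bodies mean 45 pairwise force calculations per step; a hundred bodies mean 4,950. Nature charges nothing either way.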

In nature, the laws are obeyed immediately. Probabilities for all possible states seem to be calculated instantly, and the bodies just 'know' where they are supposed to go. If we could simulate such an algorithm, it would always take the same time to execute, no matter how many planets we added. Amazing, eh?

Well, we can get pretty close if, instead of software, we use hardware. Analog circuits have already been built to solve various differential equations. The leap from these humble beginnings to artificial networks of interconnected transistors is not that difficult. Or is it?
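The contrast is easy to demonstrate. A digital computer has to grind out a differential equation in discrete steps, as in the toy sketch below; an analog computer built from a couple of op-amp integrators and a summer would settle onto the same solution continuously, with no stepping at all. (The constants here are invented for illustration.)

```python
# Digitally solving the damped oscillator x'' = -k*x - c*x'
# by brute-force Euler stepping.
k, c = 4.0, 0.3      # illustrative spring and damping constants
x, v = 1.0, 0.0      # initial position and velocity
dt = 0.001

for _ in range(10_000):       # 10 seconds of simulated time
    a = -k * x - c * v        # re-evaluate the 'law' at every step
    v += a * dt
    x += v * dt

print(f"x after 10s ≈ {x:.4f}")
```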

Our brain has billions of neurons, each with thousands of connections to other neurons. Its scale is tremendous. Just imagine trying to lay out an integrated circuit with the same characteristics. Our current processors may seem complex, but they are essentially composed of repeating patterns. Compared to our brains, they are trivial. How would one go about building a circuit as complex as our brain?

Well, first you need to develop the technique to lay out a billion components (say, transistors for now) and interconnect them with each other. That will probably not be easy, but fabricating in three dimensions is already possible. Then you need a way to train this network to do what you want it to do. Now, that's hard! In essence, you need a circuit that can modify the strength of the interconnections between the transistors that compose it, via a feedback loop. Since we don't really understand the structure of our brain, we would probably also need to use genetic algorithms to 'evolve' the circuits. The idea is to present a number of such circuits with a problem, select the best performers, and 'breed' them to get the next generation of candidates. After a large number of iterations, you end up with a hardware neural net adapted to the particular problem.
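In software form, the evolutionary loop I have in mind looks roughly like this (everything here -- the fitness function, population size, mutation rate -- is a placeholder, not a recipe):

```python
import random

def evolve(fitness, n_weights=16, pop_size=50, generations=200):
    """Toy genetic algorithm: evolve the connection strengths of a
    hypothetical circuit, scored by a caller-supplied fitness()."""
    pop = [[random.uniform(-1, 1) for _ in range(n_weights)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # 1. Present the problem: score every candidate circuit.
        ranked = sorted(pop, key=fitness, reverse=True)
        # 2. Select the best performers.
        parents = ranked[: pop_size // 5]
        # 3. Breed them: crossover plus occasional mutation.
        pop = []
        while len(pop) < pop_size:
            a, b = random.sample(parents, 2)
            child = [random.choice(pair) for pair in zip(a, b)]
            if random.random() < 0.3:
                child[random.randrange(n_weights)] += random.gauss(0, 0.1)
            pop.append(child)
    return max(pop, key=fitness)

# Stand-in fitness: reward weight vectors that sum to 1. A real run
# would instead measure the candidate circuit's error on the task.
best = evolve(lambda w: -abs(sum(w) - 1))
print(sum(best))  # should land close to 1
```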

These circuits will not be general problem solvers like our brains. But even our brain is composed of various interacting parts. The more we understand their functions, the better we can mimic their operation. When we accomplish this, interconnecting the various modules will bring us very close to our holy grail.

Now, there is certainly some research related to these ideas, but to my knowledge most work in AI is still done in theory and software. The amazing thing about hardware solutions is that they take advantage of the universe's uncanny ability to simply 'know' what needs to happen next. All the inputs to a particular artificial neuron will 'magically' be summed, and its output will be calculated instantly.

In one paper I read, there was a lower bound on the response time you can get from such circuits, since you have to wait for them to 'rest' before you read the answer. I don't know enough to say whether that limit can be overcome, but I expect that with such a technique we can get artificial brains much faster than with our current methods.

Then again, maybe I'm just an ignorant fool.

Smart processes or smart people?

Found an interesting series of posts called "Enterprise 2.0: it's not about people it's about process". You can read Part 1 and Part 2. The author talks about Web 2.0 tools and how they can be used in the enterprise, focusing on their possible integration into processes. As I commented there, I come from a completely different environment, where ridiculously tight budget, time and resource constraints force us to innovate constantly. I have seen how carefully designing a few key processes, while leaving the rest up to educated, intelligent agents, tends to give the best results.

Given proper motivation and guidance, people will amaze you with their resourcefulness. They will find ways to solve problems that no architect or business analyst could ever envision, reusing the tools at their disposal and creating new sources of information, processes etc. The question is how to empower the employees without totally losing control.

Well, you need to closely monitor what they do and how efficiently and correctly they do it, and determine whether there's something you can do to help them do it better. That's the missing step in all the projects I've worked on. Managers rarely know exactly how things are done, or how IT could help them do it better. But if you get a technical person to sit next to a call-center agent, the problems and the answers become immediately apparent.

Email, unstructured task assignment, document libraries and ad-hoc questions to more experienced agents are some of the tools you'll see them use. Blogs, tweets and whatever else may be useful will gladly be adopted as well, as long as it helps them do their job. No one needs to tell them to do it. If they know a tool exists and are given time to learn it, they'll find ways to use it. But it is OUR job to see what they do and determine whether a particular activity is worth automating, integrating or replacing.

I totally disagree that complex, predictable processes provide the solution. Just try dealing with Oracle's technical support and you'll realize that even the best processes cannot replace a knowledgeable agent empowered with the proper tools. In my experience, the cost of a 'perfect' process far outweighs the benefits.

I used to make the mistake of considering chaos a bad thing, the opposite of order. In fact, it is chaos that creates order. Complex systems in nature were not designed; they emerged. As humans, we have an inherent need to plan and organize, to make sense of the world we live in. But the world is far from deterministic, and the larger the project, the more likely you are to feel the consequences. As more and more projects fail, and as the drive to reduce costs and increase efficiency intensifies, I expect we will see experiments in carefully controlled 'chaotic' processes.

The first large system I ever designed was a 'ticketing' system. It was supposed to simply log agent-customer interactions in the telco I worked for. A key feature was the ability to assign a ticket to another person or 'group' within the company. Well, within a year, the tool was being used to control an inconceivable number of processes, many of which did not directly involve customers. The users created virtual groups to categorize tickets (misusing the already available ticket type, which was always left at its default value). Some very interesting things happened in the company. People suddenly had a means of proving that they were doing their job and that the bottleneck was a different department. Tickets were 'hot potatoes' that nobody wanted in their inbox. If an issue took too long to resolve, the ticket's history would tell you what went wrong. In essence, the ticketing system had become a replacement for email and helped identify several processes that were later implemented in our legacy system.
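The essential mechanics were almost embarrassingly simple. A sketch of the core data model might look like this (the names are hypothetical, not the actual system's schema):

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class TicketEvent:
    """One entry in a ticket's history: who touched it, when and why."""
    timestamp: datetime
    actor: str
    note: str

@dataclass
class Ticket:
    customer: str     # often blank in the internal uses that emerged
    ticket_type: str  # users ignored this field and invented groups
    assignee: str     # a person *or* a virtual group
    history: list = field(default_factory=list)

    def reassign(self, actor, new_assignee, note=""):
        # The 'hot potato' move: every hand-off is recorded forever,
        # which is exactly what made bottlenecks provable.
        self.history.append(TicketEvent(datetime.now(), actor, note))
        self.assignee = new_assignee
```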

Our system gave agents an unthinkable level of freedom. They could essentially compose and provision products that did not exist and that our billing system did not know how to bill. Insane, right? Well, with a team of two developers, that was the best CRM we could provide. But guess what? The company has since introduced Siebel CRM, keeping much of the original ticketing system's functionality, and orders for corporate products (VPNs etc.) are still being entered in our legacy CRM. Why? Because they are so complex that configuring them in Siebel is very time-consuming.

Were the processes that emerged the most efficient ones? Probably not. Did we get errors? Certainly. Did we get the job done with minimal resources? Definitely.

I anticipate that this drive to replace smart people with smart processes will soon reach its limit. Unskilled agents cannot deal with unexpected circumstances, and no matter how smart the processes get, it is simply impossible to predict every possibility. Maybe it is time to start thinking about providing smart tools that handle business events one at a time. Maybe there are only so many processes that are repetitive and predictable enough to be fully automated. Maybe we should plan for ad-hoc, manual intervention, instead of considering it necessary only for exceptions. What if exceptions end up being the rule, and the 'happy path' is in fact the exception?