The first idea most people take on board about computers is that they are electronic brains. They aren't. As intelligent entities, computers come a poor second to the garden slug.
Even IBM's "Deep Blue," the massive array of data-processing power which - temporarily - beat chess champion Garry Kasparov, would be useless at any task not governed by strict rules. The real world doesn't work like a chessboard, which is why governments and computer firms have channeled billions into research on computers that can cope with messy and imprecise information, like handwriting or quantities such as "many."
In the 1960s and '70s, researchers hoped their work would lead to cognitive processors, capable of independent "thought." Today, most are looking for more modest goals: computers with a newborn baby's ability to recognize individual faces, or a slug's ability to find a cabbage leaf.
The fruits of this research are beginning to find their way into everyday lives. Camcorders which edit out camera-shake and computers which monitor credit-card limits both rely on the artificial intelligence toolkit. Soon, so will desktop PCs.
At the CeBIT computer show in Germany this month, Siemens Nixdorf Information Systems will launch what it says is the first plug-in board for PCs to contain a "neural computer." This is an array of processors linked in a pattern similar to that of neurons in the human brain. A neural computer can't "think" - but it can recognize patterns that would defeat ordinary computers and has limited abilities to learn from what it sees.
One application is matching visual images. At CeBIT, Siemens Nixdorf will demonstrate a security device which recognizes people's faces: an ATM which compares the customer's face with a stored image.
British Telecom is working on similar technology to match images taken by in-store security cameras with pictures of known shoplifters.
Neural computers are helping doctors to analyze X-rays and credit-card companies to spot unusual patterns of spending.
The systems "learn" a card-user's spending pattern, building a picture against which to match each new transaction.
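The matching step described above can be illustrated with a toy sketch - not any card company's actual system, and the figures are invented: "learn" a card-holder's typical spending from past transactions, then flag any new amount that falls far outside that picture.

```python
from statistics import mean, stdev

def build_profile(past_amounts):
    # "Learn" the spending pattern as a typical amount plus its spread.
    return mean(past_amounts), stdev(past_amounts)

def looks_unusual(amount, profile, threshold=3.0):
    # Flag a transaction more than `threshold` spreads from the average.
    avg, spread = profile
    return abs(amount - avg) > threshold * spread

history = [32.0, 18.5, 45.0, 27.0, 39.5, 22.0]  # invented past purchases
profile = build_profile(history)
# A routine purchase fits the learned pattern; a huge one does not.
```

Real systems weigh many more variables (merchant, location, time of day), but the principle is the same: each new transaction is scored against a picture built from the old ones.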
Dave Mcelyea, of American Express, says there is little new about neural networks. "What look like new ideas are really old ideas being exploited by the ability to do massive amounts of computing."
Enthusiasts say neural computers are good at making sense out of imprecise data. Ordinary digital computers work in a world of "either/or"; neural computers can handle less precise data by acting on the decisions made by many different parts of the network.
One wrong keystroke wouldn't wreck an entire program run by a neural network as it might a program run by a conventional computer. Programmers "train" a neural network to carry out different tasks by adjusting the strength of the connections between processors.
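That training idea can be sketched in a few lines of Python: a single artificial "neuron" whose connection strengths are nudged after every mistake until it classifies its examples correctly. This is a toy illustration, not Siemens' hardware; the task (learning logical AND) and all the numbers are invented.

```python
import random

random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(2)]
bias = random.uniform(-1, 1)

def predict(inputs):
    # The neuron "fires" (outputs 1) if its weighted inputs exceed zero.
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

# Training data: two inputs and the desired output (logical AND).
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

# "Training" means strengthening or weakening each connection in
# proportion to the error, over repeated passes through the examples.
for _ in range(20):
    for inputs, target in examples:
        error = target - predict(inputs)
        for i, x in enumerate(inputs):
            weights[i] += 0.1 * error * x
        bias += 0.1 * error
```

After training, the network gets all four cases right - and nothing was "programmed" in the conventional sense; the behavior lives in the adjusted connection strengths.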
Computer-makers and software firms began producing prototypes of neural products in the late 1980s. So far, American and Japanese firms have done the most, working on ideas like the VCR which learns what you like - it will identify and record items that match your taste.
Siemens' machine, Synapse 1, started life as an attempt to develop automatic controllers for driverless trains. Engineers developed a signal processor containing more than 600,000 transistors.
But if neural networks are still a bit too much like science fiction for the home computer, another piece of the artificial intelligence toolkit is rapidly becoming part of everyday life. This is the charmingly named "fuzzy logic." It's a mathematical trick for handling imprecise information.
If you're teaching someone to drive, you don't say, "Hit the brakes 72 feet from the stop sign." You say, "Slow down now."
"That's an example of fuzzy logic," says Kenneth Karnofsky, of MathWorks, a U.S.-based firm which markets a PC program, Matlab, for developing fuzzy systems.
Fuzzy logic was invented in the mid-1960s by Lotfi Zadeh at the University of California at Berkeley. It handles information by ascribing a probability to whether something belongs in one set or another.
Take the definition "tall." A conventional computer might put everyone over 5 foot 8 inches into the category "tall," with everyone else "not tall." A fuzzy program will take into account the "membership value" of each individual in the set. Thus someone who's 5 foot 9 might have a 0.1 membership of the category "tall," while a six-footer might have 0.9.
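A membership function like the one in that example takes only a few lines of code. The exact shape of the curve is an assumption here - a straight ramp between two heights chosen so that the values come out close to the article's 0.1 and 0.9:

```python
def tall_membership(height_inches):
    """Degree of membership in the fuzzy set "tall" (0.0 to 1.0).

    Illustrative ramp: 0 below 68.5 inches, 1 above 72.5 inches,
    rising linearly in between. The endpoints are assumed, not
    taken from any standard.
    """
    if height_inches <= 68.5:
        return 0.0
    if height_inches >= 72.5:
        return 1.0
    return (height_inches - 68.5) / 4.0

# 5 foot 9 (69 in) gets a low membership; a six-footer (72 in) a high one.
```

A conventional program would draw a hard line at one height; the fuzzy version replaces that line with a slope, so "tall" becomes a matter of degree.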
This ability to create a mathematical value from imprecise terms such as "hot" or "heavy" turns out to be useful for controlling processes in the real world.
For example, a thermostat controlled by fuzzy logic has a degree of anticipation: it will turn the heating down before a room gets too hot, rather than switching it off the moment the temperature exceeds a pre-set point.
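A toy version of such a thermostat shows where the anticipation comes from. The rule names, membership shapes and power levels below are all assumptions for illustration; the point is that the output blends several fuzzy rules instead of snapping between full-on and off:

```python
def heater_power(temp_c, target_c=21.0):
    """Toy fuzzy thermostat: returns heating power from 0.0 to 1.0."""
    error = target_c - temp_c  # positive when the room is too cold

    def ramp(x, lo, hi):
        # Membership rising linearly from 0 at lo to 1 at hi.
        if x <= lo:
            return 0.0
        if x >= hi:
            return 1.0
        return (x - lo) / (hi - lo)

    # Three fuzzy rules: "cold -> high power", "comfortable -> low
    # power", "warm -> off". A room can partly belong to two at once.
    cold = ramp(error, 1.0, 3.0)
    warm = ramp(-error, 0.0, 2.0)
    comfortable = max(0.0, 1.0 - cold - warm)

    # Defuzzify: weighted average of each rule's suggested power.
    total = cold + comfortable + warm
    return (cold * 1.0 + comfortable * 0.4 + warm * 0.0) / total
```

Because the "warm" rule starts to kick in as soon as the room passes the target, the power eases down smoothly rather than cutting out at a threshold - the anticipation the paragraph describes.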
Although invented in the U.S., fuzzy logic first burst into public awareness in Japan. At the end of the '80s, consumer electronics firms looking for a new gimmick unveiled a battery of fuzzy-logic devices, including vacuum cleaners, camcorders and rice cookers.
The vacuum cleaner senses the amount of dust it's sucking up and adjusts power automatically. The technique has business uses, too: IBM launched a system for detecting fraudulent insurance claims which relied on fuzzy logic.
"First there was hype, then skepticism," Karnofsky says. "We're at the stage where people are starting to have a realistic expectation of what fuzzy logic can do."
On the desktop, fuzzy logic is starting to help people searching on the Internet, where minor typing errors can cause major headaches: everyone who has logged in knows the frustration of screwing up because of a missing backslash or a misplaced space.
In general, says Karnofsky, fuzzy logic is better than neural computing at handling problems with smaller numbers of variables. But the future is likely to see a blend of the two technologies in smart domestic and industrial appliances.
The thinking machine, however, is still a long way off. Which - for chess players, at least - is probably just as well.
(Distributed by Scripps Howard News Service.)