Computers are beginning to learn from their mistakes, and that shift is expected to carry the digital world into a new era in 2014, according to today’s New York Times print edition. A long-held vision of artificial intelligence is becoming real.
The first commercial version of the new kind of computer chip is scheduled to be released in 2014. Not only can it automate tasks that now require painstaking programming — for example, moving a robot’s arm smoothly and efficiently — but it can also sidestep and even tolerate errors, potentially making the term ‘computer crash’ obsolete.
All of this points toward the technology that would come when systems are self-aware: systems that perceive their environments and take actions to maximize their chances of success. The new computing approach, already in use by some large technology companies, is based on the biological nervous system, specifically on how neurons react to stimuli and connect with other neurons to interpret information. It allows computers to absorb new information while carrying out a task and to adjust what they do based on the changing signals.
A new generation of artificial intelligence systems will perform some functions that humans do with ease: see, speak, listen, navigate, manipulate and control. That could have enormous consequences for tasks like facial and speech recognition, navigation and planning; the fast-developing age of biometrics already spans facial, iris and palm recognition, as well as voice characteristics.
‘We’re moving from engineering computing systems to something that has many of the characteristics of biological computing,’ said Larry Smarr, an astrophysicist who directs the California Institute for Telecommunications and Information Technology, one of many research centers devoted to developing these new kinds of computer circuits.
Instead of merely being programmed to execute a series of steps, machines can now apply learning algorithms. Last year, Google researchers got a machine-learning algorithm, known as a neural network, to perform an identification task without supervision: the network taught itself to recognize cats.
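The cat result came from a vastly larger network, but the core idea, a system finding structure in data nobody has labeled, can be sketched with a single neuron-like unit. The update below uses Oja’s rule, a classic biologically inspired learning rule; it is an illustrative stand-in, not the method Google used.

```python
import random

def oja_update(w, x, lr=0.05):
    # Output is the weighted sum of the unit's inputs
    y = sum(wi * xi for wi, xi in zip(w, x))
    # Hebbian strengthening (lr * y * xi) plus a decay term (lr * y^2 * wi)
    # that keeps the weights from growing without bound
    return [wi + lr * (y * xi - y * y * wi) for wi, xi in zip(w, x)]

random.seed(0)
# Unlabeled 2-D points whose variation lies mostly along the first axis
data = [(random.gauss(0, 1.0), random.gauss(0, 0.1)) for _ in range(2000)]

w = [0.5, 0.5]
for x in data:
    w = oja_update(w, x)

# With no labels ever provided, the unit's weights align with the
# data's dominant direction
print(w)
```

No teacher tells the unit what to look for; the statistics of the input stream alone shape the connection weights, which is the sense in which such systems learn “without supervision.”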
The new approach, used in both hardware and software, is being driven by the explosion of scientific knowledge about the brain. Kwabena Boahen, a computer scientist who leads Stanford’s Brains in Silicon research program, said that is also its limitation, as scientists are far from fully understanding how brains function.
The design of computers has been dictated by ideas originated by the mathematician John von Neumann about 65 years ago. Microprocessors perform operations at lightning speed, following instructions programmed using long strings of 1s and 0s, and they keep the information they work on in memory. Data is held in short-term memory while the computer carries out the programmed action, and the result is then moved to its main memory.
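That fetch-compute-store rhythm can be sketched with a toy machine; the instruction names and memory layout here are illustrative, not any real instruction set.

```python
# A toy von Neumann-style machine: every operand comes from memory,
# and every result must be written back to memory before the next
# instruction can use it.

def run(program, memory):
    for op, dst, a, b in program:           # fetch the next instruction
        if op == "MUL":
            result = memory[a] * memory[b]  # compute in the "processor"
        elif op == "ADD":
            result = memory[a] + memory[b]
        memory[dst] = result                # store back to main memory
    return memory

mem = {"x": 3, "y": 4, "t": 0, "out": 0}
prog = [("MUL", "t", "x", "x"),    # t = x * x
        ("ADD", "out", "t", "y")]  # out = x*x + y
print(run(prog, mem)["out"])       # → 13
```

The constant shuttling of data between processor and memory is exactly what the neuromorphic designs described below try to avoid.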
The new electronic components can be connected by wires that mimic biological synapses. Because they are based on large groups of neuron-like elements, they are known as neuromorphic processors, a term credited to the California Institute of Technology physicist Carver Mead, who pioneered the concept in the late 1980s.
This is no longer simple programming: the connections between the circuits are weighted according to correlations in data that the processor has already learned. As data flows into the chip, those weights are altered, causing signals to fire and the neural network to change, in essence programming the next actions much the same way that information alters human thoughts and actions.
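The mechanism can be sketched in a few lines: a unit whose output depends on weighted connections, and whose weights shift as signals flow through it. A perceptron-style update is used here as a stand-in for the chips’ hardware learning rule, which the article does not spell out.

```python
def fire(weights, signal, threshold=0.0):
    # Weighted sum of incoming signals; the unit "goes on" past the threshold
    return sum(w * s for w, s in zip(weights, signal)) > threshold

def adjust(weights, signal, wanted, lr=0.1):
    # A mismatch between the output and the observed correlation nudges the
    # weights, in essence reprogramming the unit's next action
    if fire(weights, signal) != wanted:
        delta = lr if wanted else -lr
        weights = [w + delta * s for w, s in zip(weights, signal)]
    return weights

# Each signal carries a constant 1 as a bias input; the unit learns to
# respond only when both real inputs are active
patterns = [([1, 1, 1], True), ([1, 0, 1], False),
            ([0, 1, 1], False), ([0, 0, 1], False)]
w = [0.0, 0.0, 0.0]
for _ in range(20):
    for signal, wanted in patterns:
        w = adjust(w, signal, wanted)

print([fire(w, s) for s, _ in patterns])  # → [True, False, False, False]
```

No new program is ever loaded: the same circuit behaves differently after exposure to data, because the data itself rewrote the connection weights.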
‘Instead of bringing data to computation as we do today, we can now bring computation to data,’ said Dharmendra Modha, an I.B.M. computer scientist who leads the company’s cognitive computing research effort. ‘Sensors become the computer, and it opens up a new way to use computer chips that can be everywhere.’
Qualcomm has said that it will release a commercial version of a neuromorphic processor in 2014, which is expected to be used largely for further development. Many universities are now focused on this new style of computing as well. This fall the National Science Foundation financed a new research center based at the Massachusetts Institute of Technology, with Harvard and Cornell.
The largest class on campus this fall at Stanford was a graduate level machine-learning course covering both statistical and biological approaches, taught by the computer scientist Andrew Ng.
Terry Sejnowski, a computational neuroscientist at the Salk Institute, who pioneered early biologically inspired algorithms, says, ‘Everyone knows there is something big happening, and they’re trying to find out what it is.’