If the Japanese Fifth Generation project is to lead to any quantum breakthroughs in computer science, the likelihood is that neural networks will lie at the heart of development. The basic neural networks that have already been constructed in the US and Japan come much closer to imitating the learning process in humans than any earlier type of computer system, and, with much less publicity, as many firsts are being achieved around the world as in the much-touted superconducting technologies.

Animal nervous systems are marked by a very high degree of connectivity, with each nerve cell linked to hundreds or thousands of others, and by the presence of both excitatory and inhibitory impulses, and a neural network computer seeks to imitate this arrangement, creating a complex network of electronic neurons and synapses. AT&T and NEC have each described chips they have developed as building blocks for neural network computers (CI No 658), AT&T researchers simulating the connectivity of a nervous system by building a crossbar switch on a chip: the grid-like switch – actually an associative memory chip with each neuron representing one bit – enables all the signals in the circuit to interact with all the other signals. Amplifiers act as exciters and resistors function as inhibitors. The chip is programmed by forming resistors at the appropriate points with electron-beam lithography. By using resistors rather than transistors as the inhibitors, the circuit can be made much smaller, and AT&T claims it can get 256 neurons onto a single chip.

The idea is to program the computer with a basic framework which can be altered by experience as the machine begins to learn. A learning system must be able to generalise its experience to unencountered events, so that when the computer receives input it has not previously been taught, it has to work out how to respond by following a set of learning rules provided by the programmer.
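The behaviour of such an associative memory chip – present a corrupted bit pattern and let the interacting neurons settle back to the nearest stored one – can be sketched in software. The snippet below is a hypothetical Hopfield-style model, not AT&T's actual chip logic: the weight matrix stands in for the lithographed resistors, and each unit is a one-bit neuron taking the values +1 or -1.

```python
# Hypothetical associative memory in the Hopfield style: the weight matrix
# plays the role of the chip's crossbar of resistors, and each unit is a
# one-bit "neuron" (+1 or -1) that every other unit can influence.

def train(patterns):
    """Build the weight matrix with the Hebbian outer-product rule."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:  # no self-connections
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, probe, steps=10):
    """Repeatedly update each unit until the state settles."""
    state = list(probe)
    n = len(state)
    for _ in range(steps):
        for i in range(n):
            total = sum(w[i][j] * state[j] for j in range(n))
            state[i] = 1 if total >= 0 else -1
    return state

stored = [1, -1, 1, -1, 1, -1]
w = train([stored])
noisy = [1, 1, 1, -1, 1, -1]       # one bit flipped
print(recall(w, noisy) == stored)  # the network restores the stored pattern
```

The point of the analogy is that the "memory" is not held at any one address but distributed across all the connections at once, which is why a partial or damaged input can still retrieve the whole pattern.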
The learning occurs through the modification of the connections between processing units – analogous to the synapses between human nerve cells.
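That connection-strengthening idea can be made concrete with a few lines of code. The sketch below is a hypothetical single-unit learner using the classic perceptron rule – not drawn from any of the systems described in this article: each connection weight is nudged in proportion to the output error, the software analogue of reinforcing or weakening a synapse through experience.

```python
# Hypothetical single-unit learner: each connection weight is adjusted in
# proportion to the output error - the software analogue of strengthening
# or weakening a synapse as the system gains experience.

def train_unit(examples, rate=0.1, epochs=50):
    """examples: list of (inputs, target) pairs, with targets 0 or 1."""
    weights = [0.0] * len(examples[0][0])
    bias = 0.0
    for _ in range(epochs):
        for x, target in examples:
            out = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
            error = target - out  # the learning rule: move each weight
            weights = [w + rate * error * xi for w, xi in zip(weights, x)]
            bias += rate * error  # toward whatever reduces the error
    return weights, bias

# Teach the unit logical AND purely from examples
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train_unit(examples)
for x, target in examples:
    out = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
    print(x, "->", out)  # matches the target for every example
```

Nothing in the final weights was written in by hand; the behaviour emerges entirely from repeated small modifications of the connections.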
Read text aloud
A working example is the US NETtalk system, which learns to read text aloud. The system begins with no knowledge of word pronunciation, but gradually learns to pronounce written words through exposure to speech and to words in a dictionary, with the connections between neurons reinforced as it gets closer to the correct pronunciation. The system was developed by a biophysicist at Johns Hopkins University collaborating with a psychologist from Princeton.

Most work is being done with simple systems in which a single layer of parallel processing units serves as both input and output; while these permit only simpler models, the models are easier to build and understand. But at the University of Illinois at Urbana-Champaign, researchers are working on complex multi-layered systems in which input and output are separated, giving the computer the ability to respond in a variety of ways to instructions. These neural units respond selectively to the presence of stimuli in ways suggestive of self-satisfaction!

Neural networks have an almost unimaginable number of practical applications, of which complex pattern recognition – reading obscure details from fuzzy satellite photographs and enhancing them to a much greater extent, and much more quickly, than is possible now – is one of the most obvious. And they clearly hold out the promise that in a decade or two, a robot controlled by a neural network will, for example, be a safer and more reliable truck or train driver than a human could ever become.
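The point about separating input from output can be illustrated with the textbook case of exclusive-OR, a response pattern no single layer of threshold units can produce. The sketch below is hypothetical, with hand-set rather than learned weights, and is not taken from any of the systems above: a hidden layer of two units detects intermediate features, and the output unit combines them into a response neither layer could compute alone.

```python
# Hypothetical two-layer network computing exclusive-OR, a mapping that no
# single layer of threshold units can produce. The hidden layer detects
# "at least one input on" and "both inputs on"; the output unit subtracts.

def step(x):
    """Threshold unit: fires (1) only if its net input is positive."""
    return 1 if x > 0 else 0

def xor_net(a, b):
    h_or  = step(a + b - 0.5)        # hidden unit 1: either input on
    h_and = step(a + b - 1.5)        # hidden unit 2: both inputs on
    return step(h_or - h_and - 0.5)  # output: "either, but not both"

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```

With the hidden layer removed, no choice of weights on the two raw inputs could produce this behaviour, which is exactly why the multi-layered systems permit a richer variety of responses than the single-layer ones.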