Neural networks are beginning to crawl out of the lab and into commercial applications, and Beaverton, Oregon-based Adaptive Solutions Inc is hoping its new CNAPS System, the first system based on its Connected Network of Adaptive ProcessorS neurocomputing architecture, will give the process a big push forward. The company claims that its new neurocomputer will speed learning in neural networks 1,000-fold, so that even compared with a Cray 2 supercomputer, the CNAPS System executes industry-standard back propagation algorithms more than 100 times faster. The company sees the system being used in pattern recognition problems in optical character recognition, machine vision, speech recognition, robotics and process control, and financial forecasting. The CNAPS System consists of a CNAPS server, a neurocomputer for use on a Unix network designed to provide the speed required for both training and execution of real-world applications, and CodeNet, a software development environment. Mitsubishi Electric Corp and Sharp Corp are already using the architecture for Kanji character recognition. The N64000 chip is being fabricated by Inova Microelectronics Inc. The CNAPS server has 256 processor nodes operating in single instruction, multiple data mode with broadcast interconnection, and is designed to be linked via Ethernet to a Sun Microsystems Inc Sparcstation. The company claims that the CNAPS server will run in learning mode at more than 1,000m connection updates per second, so that it can train the NetTalk text-to-speech processing network in six seconds, compared with over four hours on a Sparc-based workstation. Peak performance in feed-forward execution mode is claimed to be 5,120m connections per second. The CodeNet suite includes the CNAPS Programming Language parallel assembler, the CNTool graphical interface and debugger, and a library of common neural network algorithms.
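The "connection updates per second" figure counts one weight change per connection per training step. A toy delta-rule sketch in Python (illustrative only; the network size, learning rate and function names here are assumptions, not Adaptive Solutions code) shows what is being measured:

```python
import random

# Toy single linear neuron: every weight is one "connection".
# One training step updates every connection once, so a machine's
# rate is (connections x steps) / elapsed seconds.
# All sizes here are illustrative, not CNAPS parameters.

def train_step(weights, inputs, target, lr=0.1):
    """One delta-rule update on a single linear neuron, a minimal
    stand-in for one back-propagation sweep on a one-layer net."""
    output = sum(w * x for w, x in zip(weights, inputs))
    error = target - output
    # Each weight (connection) receives exactly one update here.
    return [w + lr * error * x for w, x in zip(weights, inputs)]

random.seed(0)
n_inputs = 8
weights = [random.uniform(-0.5, 0.5) for _ in range(n_inputs)]
inputs = [1.0] * n_inputs
target = 4.0

steps = 100
for _ in range(steps):
    weights = train_step(weights, inputs, target)

# 100 steps over 8 connections = 800 connection updates in total;
# the CNAPS claim is more than 1,000m such updates every second.
output = sum(w * x for w, x in zip(weights, inputs))
print(round(output, 2))  # converges toward the target, 4.0
```

On this scale, training NetTalk in six seconds at 1,000m updates per second implies billions of such weight changes, which is what the claimed 1,000-fold speed-up over a workstation refers to.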
CNTool includes interactive and batch user interfaces and a C library Application Program Interface for access from an embedded application. The algorithm library includes Back Propagation, Learning Vector Quantisation, Self-Organised Mapping and Frequency Sensitive Competitive Learning. A CNAPS-C C compiler designed for the CNAPS architecture is also planned. The complete system will cost $55,000 from the fourth quarter; CNAPS-C will cost $950.
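Of the algorithms named in the library, Learning Vector Quantisation is the simplest to sketch. The following minimal LVQ1 training step in Python is illustrative of the published algorithm only; the function name and parameters are assumptions, not CodeNet's API:

```python
# Minimal LVQ1 step: find the prototype nearest the sample, then
# pull it toward the sample if its class label matches, or push it
# away if it does not. Illustrative sketch, not Adaptive Solutions code.
def lvq1_step(prototypes, labels, x, y, lr=0.2):
    # Squared Euclidean distance from each prototype to the sample.
    dists = [sum((p_i - x_i) ** 2 for p_i, x_i in zip(p, x))
             for p in prototypes]
    k = dists.index(min(dists))          # index of nearest prototype
    sign = 1.0 if labels[k] == y else -1.0
    prototypes[k] = [p_i + sign * lr * (x_i - p_i)
                     for p_i, x_i in zip(prototypes[k], x)]
    return prototypes

# Usage: prototype [1, 1] (class 1) is nearest to the class-1 sample,
# so it moves a fraction lr of the way toward it.
protos = lvq1_step([[0.0, 0.0], [1.0, 1.0]], [0, 1], [0.9, 0.9], 1)
print(protos[1])  # prototype 1 shifts from [1, 1] toward [0.9, 0.9]
```

The per-prototype distance computations are independent, which is the kind of data parallelism the 256-node SIMD broadcast architecture described above is built to exploit.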