Geoff Conrad examines the problems associated with marrying new computer architectures with old and new languages and making them a practical proposition for ordinary users.
Now that multiprocessor systems are falling in price and becoming more generally available, users are finding that it is not easy to write applications to take advantage of the parallel architecture. The idea of doubling processing power simply by doubling the number of processors is very attractive – especially for the salesman, who sells one extra processor for the first doubling, two for the second, four for the third, eight for the fourth, 16 for the fifth… and one or two companies like Tandem Computers, with the added cachet of fault tolerance, have been able to play the doubling-up game very effectively. But unless the multiprocessor is simply used to run different jobs simultaneously on each processor, true parallel processing must be used: the restructuring of the existing code of a single job to spread it over a number of co-operating processors.
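Spreading a single job over co-operating processors can be sketched as follows. This is an illustration, not code from the article: a summation is split into slices, each handled by one worker, and the partial results are combined. (Python threads serialise CPU-bound work under the interpreter lock, so here they only illustrate the decomposition a real multiprocessor would exploit in parallel.)

```python
# Sketch: one job (summing an array) restructured into slices, each
# handled by a co-operating worker.  The decomposition, not the speed,
# is the point.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(data, lo, hi):
    """One worker's share of the job: sum its slice only."""
    return sum(data[lo:hi])

def parallel_sum(data, nproc=4):
    n = len(data)
    chunk = (n + nproc - 1) // nproc          # ceil-divide into nproc slices
    bounds = [(p * chunk, min((p + 1) * chunk, n)) for p in range(nproc)]
    with ThreadPoolExecutor(max_workers=nproc) as pool:
        partials = pool.map(lambda b: partial_sum(data, *b), bounds)
    return sum(partials)                      # combine the partial results

print(parallel_sum(list(range(1000))))        # → 499500
```

Doubling `nproc` doubles the number of slices in flight, which is exactly the salesman's doubling-up game – and the structure shows why it only pays off when the slices really are independent.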
And just about everyone seems to be taking advantage of the benefits of multiple processors in terms of getting much higher theoretical performance from a given investment in technology, from DEC, wanting to make the VAX look man enough to stand up to IBM’s 3090, through cheap entry-level Crayettes, to full-blown supercomputers – even the finally announced ETA-10 seven to 10 gigaflopper from ETA Systems Inc (90% owned by Control Data Corp) has 10 processors in maximum configuration. And to take better advantage of these latest supercomputers, software developers are having to write increasingly intricate programs that are fine-tuned to details of the system hardware. So one of the most pressing questions associated with the powerful new processors is how to reduce application development costs, to make them a practical proposition for ordinary users. They may all run Unix so that there is no shortage of applications, but why should one buy a multi-million pound supercomputer if the same applications will run just as fast when spread around several much cheaper minis? Researchers at the Center for Supercomputing Research and Development at the University of Illinois have been working on the problem for the past 12 years and have come up with five crucial objectives that need to be tackled:
1) Programs: the ability to use old programs that use the sequential algorithms of old languages while at the same time running new programs using parallel algorithms in either old or new languages.
2) Languages: new languages need to be developed that allow the developer to express, in a well-structured form, algorithms amenable to parallel processing.
3) Compilers: compilers have to be developed that can be tailored to each machine to exploit effectively all the available architectural features, and to develop and compile programs in both old and new languages.
4) Algorithm libraries: libraries of standard, reusable application packages and routines using parallel algorithms for standard problems that can be easily incorporated in new applications.
5) Environments: a powerful, effective programming environment for using such software interactively, debugging programs and displaying results graphically in real time.
The first of the university’s five objectives would allow users to approach the new machines without having to rewrite their programs in a new style or a new language. The ability to use existing languages makes for an easy transition from an old machine to a new machine, and should reduce the trauma of the move enough to improve acceptance of the new machines. The university’s second and third objectives, taken together, would allow users to learn and exploit a new language, especially if the program development system could translate the old language to the new. Language evolution would occur as the user moved from familiar old programs to new high-performance programs that would be easier to understand.
New language features alone are not enough to make a user accept a new machine: the new languages should permit the user to make assertions about the program that allow faster execution, or preferably the development environment should query the user for such assertions. Packages and library routines are nothing new, but when they are integrated with program restructuring techniques they would play an important part in the new and powerful program development systems mentioned in the fifth objective.
Everyone has his own idea as to which language to use, and the fruitless debate has gone on for years and shows no sign of ever stopping, as most people continue to give vigorous and die-hard support to the language they grew up with or are most familiar with. As it takes a considerable expenditure of time and effort just to be able to make a comparison, any change is going to be slow in coming. At the University of Illinois they have obtained spectacular results restructuring Fortran code to run on multiple processors – a language with the widest range of scientific applications and one that is particularly difficult to restructure automatically. For this reason, and others, they are mounting a major effort to develop powerful restructuring tools to work with a wide range of languages, even those with explicit parallelism, as the latter feature is rarely exploited to the full except in specialist machines like Bolt Beranek & Newman’s Butterfly.
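The kind of transformation such a restructurer performs can be sketched as follows. This is an illustration of the idea, not the Illinois tools themselves: a Fortran-style loop whose iterations are independent can be re-expressed as parallel work, while one with a loop-carried dependence cannot – which is precisely where a user's assertion about the program earns its keep.

```python
# Sketch of the restructuring idea (not the Illinois restructurer):
# a sequential loop with no loop-carried dependence is divided among
# co-operating workers; the result must match the sequential original.
from concurrent.futures import ThreadPoolExecutor

def sequential(b, c):
    # Fortran-style loop: a(i) = b(i) * c(i).  No iteration reads a
    # value written by another iteration -- no loop-carried dependence.
    return [b[i] * c[i] for i in range(len(b))]

def restructured(b, c, nproc=4):
    # The restructured form: iterations divided among workers.
    n = len(b)
    chunk = (n + nproc - 1) // nproc
    def do_slice(p):
        lo, hi = p * chunk, min((p + 1) * chunk, n)
        return [b[i] * c[i] for i in range(lo, hi)]
    with ThreadPoolExecutor(max_workers=nproc) as pool:
        out = []
        for piece in pool.map(do_slice, range(nproc)):
            out.extend(piece)
    return out

# By contrast, a loop such as a(i) = a(i-1) + b(i) carries a dependence
# from each iteration to the next and must stay sequential unless the
# user, or a cleverer restructurer, asserts or proves a way round it.
```

The restructured version is only legal because the dependence test passes; checking `restructured(b, c) == sequential(b, c)` on sample data is the quickest sanity check of such a transformation.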