IBM UK Ltd has just released two papers in its new series, The Enterprise Server Papers, an examination of the role of advanced computer systems in the 1990s. IBM's aim is to provoke an objective debate among business strategists in response to the perceived confusion in the market, and to offer a view of the issues involved in implementing advanced computer systems. So, have we heard it all before? Or does the company have an interesting and substantive argument, and, if so, what does it plan to do about it? Back in October the company issued the results of a study commissioned from the Cranfield School of Management as the basis for a UKP1m advertising campaign to convince us that the mainframe wasn't dead, just suffering from an image crisis. Now, the first of the papers, The IBM Enterprise Systems – an Exceptional Server, seeks to dispute the view that the central role of the mainframe is being replaced. It makes the case for central systems and examines the role of the IBM Enterprise System in the world of client-server computing.
The paper, written by IBM systems consultant David Heap, argues that all large organisations will still need a small number of vital computers – Enterprise Servers – kept in a secure environment to handle business-critical operations and transactions, and explains why the ES/9000 family will take the position of mainstay of client-server systems. Rather than asking whether a company needs a mainframe, the paper argues that firms should be asking whether they need central systems within their organisations, what the ideal balance between distributed and central data and applications is, and how to choose the best hardware and software to run them. It covers the need for business-critical systems to be handled in secure environments; the need to exploit leading-edge technology in developing and supporting complex business-wide applications; and evolutionary and compatibility considerations – the integration of old and new technologies, for example. It argues, first, that the efficiency of current services should be considered alongside the benefits that can be obtained from new server applications; and secondly, that the different benefits each kind of system offers can affect the choice. For example, an MVS operating environment can generally handle a larger number of transactions and has more service hours available, whereas a personal computer may have low utilisation but can be justified by the productivity it offers the individual user.
The second of the papers, Towards the Exceptional Service Data Centre, written by IBM survey consultant David Restell, discusses the results of a survey of a representative selection of 145 UK data centres. It was carried out partly in response to requests from companies that wished to get an idea of how well they were actually running their data centres, and examines the economies of scale that can be achieved by consolidating existing technology and exploiting it to full advantage.
By Abigail Waraker
The data centre is seen as the ideal focus for the study because outside this area companies are run very differently. The main costs involved are the software, the hardware, and the staff required to service that technology. Graphs plotting relative processing power, as a measure of computing power, against the number of people involved in delivering that service found that some companies, for example, require 75 staff to manage 50 units of processing power, while in others the same number of staff can manage 300 units – even though in both cases the staff are all sufficiently busy. Or, looking at the figures another way, two companies may manage the same amount of hardware, but one might be using 60 people while another has 260 to do the job. If UKP40,000 is taken as a typical per-person planning cost, this makes a substantial difference to the cost of running the department. In some cases the reason lies in the age of the computer department. Historically, the older the centre, the more staff have been hired to maintain the facility, while newer centres have hired fewer staff for the same amounts of processing power. Managers still have a tendency to run data centres the way they ran when they themselves were technical staff. The report argues, however, that this need not be the case and that it is the way the centre is organised that creates the amount of work. There also seems to be a lack of awareness of what the core business tasks of the operation are: although managers are using all their staff and know that they need them, they are not aware of exactly why. Other influences include people carrying out tasks that could be done by machine, and the fact that companies may have a hierarchical structure in which the importance of a manager's position is determined by the number of staff he or she is in charge of.

The report then goes on to discuss distributed computing and the economies of scale that can be achieved. As distributed processing support staff are often spread over many locations and departmental budgets, it is less easy to calculate the cost of their service, while data centres can be more easily identified in a single budget. There is also, apparently, a strong desire among information technology management to consolidate installations to gain greater economies of scale, but this is not possible in distributed processing environments because processors with adequate total capacity are not available. Here, though, the trusty IBM ES/9000 can come to the rescue, because it does not suffer from these problems – its biggest model has up to 200 times the processing power of the smallest.
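The scale of the staffing differences the survey describes is easy to check. The following is a minimal sketch of that arithmetic, assuming the report's own UKP40,000 per-person planning figure and the 60-versus-260 staff example quoted above; the function name is ours, not the report's:

```python
# Sketch of the staffing-cost arithmetic from the Restell survey.
# UKP40,000 is the report's assumed per-person planning cost; the
# staff counts (60 vs 260 for comparable hardware) are its examples.
COST_PER_PERSON = 40_000  # UKP per head, per year


def annual_staff_cost(staff: int) -> int:
    """Annual staffing cost of a data centre, in UKP."""
    return staff * COST_PER_PERSON


lean_centre = annual_staff_cost(60)    # UKP2,400,000
heavy_centre = annual_staff_cost(260)  # UKP10,400,000
print(f"Lean centre:  UKP{lean_centre:,}")
print(f"Heavy centre: UKP{heavy_centre:,}")
print(f"Difference:   UKP{heavy_centre - lean_centre:,} a year")
```

On these assumptions the heavily staffed centre spends UKP8m a year more on people to run the same amount of hardware, which is the substance of the report's point about how much the organisation of the centre, rather than its workload, drives cost.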
Pure common sense
Evidence is also offered that the nearer a system is to full Enterprise Systems architecture, the more computing power can be managed per person, and that among sites that have installed both IBM and non-IBM systems, the IBM systems require fewer support staff. The problem with these discussion papers is that the arguments are not substantiated with enough figures, so it is difficult to determine what the real and relevant issues are. Lacking such information, the reports seem all too conveniently to suggest that computer departments are making false decisions in devolving their computer facilities away from one central system. The conclusion IBM has drawn from all this is that while there is nothing wrong with being small, having several small systems in one place is difficult to maintain, especially if each has a different set of procedures. As for what companies should do in response to such findings, the reply seems to be a rather bland suggestion that a firm question everything it is doing and ensure there is a valid reason for it, and that it exploit the technology in use to the full, making use of all its available features. Further papers in the series are due shortly on issues including the cost of computing and the Open MVS operating environment. Unless substantiated with further facts and arguments, which the next papers may well provide, isn't making full use of your technology a case of pure common sense?