On the table at last month’s OMG technical meeting was the Unified Modeling Language. Lem Bingley examines the putative industry standard for object application design.
Corner me over coffee at a conference and wait for the initial discussion to lapse toward uncomfortable silence, and you’ll probably find that I’ve started to ramble on about software quality. Software quality is an ideal topic to broach with people you’ve never met before, because it’s a fair bet that you’ll strike common ground. Anyone attempting to argue that today’s software is annoyingly bland and perfect would soon find themselves wearing a canvas jacket that buckles up at the back, rather than a little plastic name-badge.

Despite much evident belief that quality can be painted on at the last moment, most people, when pressed, agree that it’s better to design it in at the start. In object-oriented (OO) development, where it’s particularly hard to paint over the cracks at the testing stage, the principles of sound design have actually received a reasonable level of attention. And this attention, during what might loosely be termed OO’s infancy, led to the evolution of a host of design approaches and methods (sometimes called methodologies, in defiance of the fact that methodology means, or ought to mean, the study of methods). These methods, on the whole, tackled both the meta-model (the means by which you might express the problem or the software under consideration) and the process (how you should go about creating the meta-model, and how it should then be applied to create a real system).
Each of the various methods (and by 1994 some reckon there were as many as 50) was championed by one or more gurus. Often the most vocal proponents of a particular method would be none other than those standing to profit nicely through the sale of a book-of-the-method. Developers and systems analysts tackling OO projects were largely left to their own devices when it came to picking which one to use, and it’s probably fair to say that the methods promoted by the most attractive and readable books did better than the ones with the thick, ugly, expensive manuals. But book-jacket artwork is a poor trestle on which to build a system. What was sorely needed was a narrower range of choice – preferably one big, obvious choice – so that OO virgins would be encouraged to think: “Hmm, if I’m going to go all the way, I’d better find out about the world-famous and infallible DeZynItRite Method.” Or whatever.

This is where the Object Management Group (OMG), setter of big, obvious standards, might be expected to step in. And it dutifully formed an Analysis and Design Special Interest Group (SIG), charged with investigating methods. “The SIG met for a couple of years, published a thick report of OO methods, and was widely ignored,” recalls independent OO consultant Martin Fowler (who also happens to be an author of one of those books-of-the-method that I mentioned, Addison-Wesley’s UML Distilled: Applying the Standard Object Method). “Standardization seemed impossible; most gurus claimed it was undesirable,” he adds. Most book publishers would probably have agreed that it was undesirable too. But as Fowler notes, the profitable world of the method gurus was about to change forever.
In October 1994, a certain James Rumbaugh left the employ of General Electric to work for Rational Software. Rational’s chief scientist is Grady Booch, and Rumbaugh’s arrival meant that Rational was suddenly able to call upon the combined clout of two of the top-grossing guru/authors. The Booch method and Rumbaugh’s OMT (Object Modeling Technique) were not only both in the OO method top ten, but they also exhibited sufficient synergy for the two authors to contemplate unifying them into a single, third-generation method (Booch, in common with most of its competitors, had already undergone one major revision). They plumped for ‘The Unified Method’ (UM) as a suitably portentous title.

“That event sent methodologists into a flurry of panic,” Fowler says. “At one point Rational was saying it was going to achieve standardization the Microsoft way… Elsewhere methodologists formed an Anti-Booch Coalition. It was all very amusing.” The fuss was sufficient to encourage the OMG to reanimate the dormant A&D SIG. “Clearly, the OMG did not want Rational setting the standards without some outside influence,” Fowler notes.

After a year’s work, the first draft of the method was duly unveiled at OOPSLA (the OO Programming, Systems, Languages & Applications show, which in October 1995 was staged in Austin, Texas). The method was labeled UM 0.8. The 0.8 designation was deliberate, says Rational director of product marketing Adam Frankl. “It was to demonstrate that it was not a final draft; the goal was to elicit comment from the industry,” he adds, attempting to disperse the image of Redmond-inspired, lump-hammer standard-setting.

But at the same OOPSLA show, Rational revealed that it had another blow to deal the rest of the gurus – Ivar Jacobson was to join the UM party with Rational’s purchase of his company, Objectory. Jacobson, best known for inventing the Use-Case technique for modeling business processes, brought with him his OOSE (Object-Oriented Software Engineering) method. With three A-list authorities behind it (the ‘three amigos’, as some wag promptly dubbed them), it began to look like Rational really could put some Redmond-style arrogance into its standardization effort (whether it liked to admit it or not).

During 1996, the three worked together on UM, changing its name along the way to make it the Unified Modeling Language (UML). The change of name marked an important distinction. The amigos had apparently decided that the world needed not a unified end-to-end method – but simply a standard way of expressing the meta-model (in terms of notation, semantics and diagrams) without the accompanying process or procedure. A much smaller piece of the puzzle.
Rational’s Frankl sounds distinctly evasive as he explains why the sights were lowered in this way. “Let’s just say that this was a solvable problem,” he remarks. In turn I’ll remark that Rational has since launched its own process (dubbed Objectory in remembrance of Jacobson’s company) which seems to be quite closely tied to its Rational Rose line of tools.

Documents for versions 0.9 and 0.91 of what was now UML duly emerged, with Rational claiming that it was carrying out an open process and encouraging others to participate. “The first sponsor was Microsoft – it wanted to be sure that UML would support DCOM,” says Frankl. “Similarly, Oracle became interested in ensuring that UML mapped onto its technology. Hewlett-Packard also got involved very quickly. HP already had its own method – called Fusion – and wanted to adapt that process to work with UML notation. In addition to those companies, we had received about 1,000 comments at that stage. But basically, it was the three amigos personally hammering things out.”

Not everyone agreed that this process was working – especially with such big names pulling the spec toward their own proprietary ends. A coalition of disgruntled experts – mostly individuals, with Ed Yourdon probably the best known name on the roster – clubbed together in an attempt to form a rival power bloc. The resulting OPEN consortium (not to be confused with Unix champion the Open Group) has come up with its own end-to-end method, and its own UML-rivaling modeling language, called OML (the OPEN Modeling Language). In a recent white paper, the consortium divides methods into those based on extensions of data modeling (it cites OMT, UML, Coad and Yourdon as examples) and those based on behavioral or responsibility modeling (examples being RDD, BON, MOSES and OPEN). It basically seems to be claiming that the former group, and thus UML, can’t ever be truly suited to object modeling, because objects imply a fusion of data and process.
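The data-versus-responsibility split the consortium describes can be made concrete with a small sketch. The example below is purely illustrative (an invented Account class, not drawn from any of the methods named above): the first style keeps a passive record apart from the procedures that act on it, roughly the data-modeling tradition; the second fuses state and behavior into a single object – the fusion the OPEN camp argues a modeling language must treat as primary.

```python
from dataclasses import dataclass

# Data-modeling style: a passive record plus a free-standing procedure.
@dataclass
class AccountRecord:
    balance: float

def withdraw(record: AccountRecord, amount: float) -> None:
    if amount > record.balance:
        raise ValueError("insufficient funds")
    record.balance -= amount

# Object style: the same data with its behavior bound to it.
class Account:
    def __init__(self, balance: float) -> None:
        self._balance = balance

    def withdraw(self, amount: float) -> None:
        if amount > self._balance:
            raise ValueError("insufficient funds")
        self._balance -= amount

    @property
    def balance(self) -> float:
        return self._balance
```

Both fragments do the same job; the disagreement is over which of the two shapes a modeling notation should take as its starting point.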
Intriguingly, it appears that the relatively low-profile consortium ignored a big opportunity to garner widespread support. During the gestation of UML, the OMG SIG decided to take an active role, and issued an RFP (request for proposals) to cover OO design notation and process. In January 1997, five separate groups answered the RFP. Version 1.0 of UML was joined by submissions from IBM and ObjectTime, Platinum Technology, Taskon, Softeam, and Ptech. But no OPEN consortium.
“The OMG doesn’t pick standards from a bunch, so for the last year we’ve worked with the others to reach a compromise,” says Frankl. “IBM agreed to back UML, once we’d adopted some of the elements of its original proposal. This process has succeeded to the extent that [at the OMG’s Dublin summit] there’s been just a single submission – UML. The voting process will now last until December, and we expect UML 1.1 to be ratified in January.”

With OMG backing, OO consultant Fowler believes that UML will have little trouble carving out a dominant position. “Any other method will be a niche player, if it survives at all,” he reckons. “The OPEN consortium, if it fights on, will be a minor player.” Not everyone’s ecstatic about that vision of the future. If one disregards the philosophical/architectural objections of the OPEN consortium, most of the objections to UML centre on its scope. As you might expect, UML either does too much, or too little, depending on the disgruntled observer’s point of view.

From Steven Law’s perspective, UML doesn’t seem to cover enough ground. Law is managing director of Lincoln Software, and Lincoln’s Ipsys Engineer toolset tackles the development cycle from data modeling to code generation. Law says that Ipsys will support the core parts of UML from early 1998. “UML provides a way of modeling a problem – a way to collect the details of the problem and present them,” he comments. “But there’s no clear view of how you would link that diagram into a concrete application, accessing a real database. There has to be a solid understanding of how to link down to that level,” he argues. “Our priority is to provide a solid connection between the top-level analysis and the underlying code generation – so that links can be traced from particular pieces of code to particular parts of the model.” When you can no longer trace that link, you’re in trouble from a maintenance perspective, Law believes.
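The traceability Law describes can be sketched in a few lines. The scheme below is a hypothetical illustration – it is not how Ipsys Engineer works, and the model element names are invented: each generated function is tagged with the identifier of the model element it realizes, so the link can be followed from code back to model and from model back to code.

```python
# Registry mapping code artifacts (here, function names) to model elements.
TRACE = {}

def traces_to(element_id):
    """Record which model element a piece of code implements."""
    def decorate(fn):
        TRACE[fn.__name__] = element_id
        return fn
    return decorate

@traces_to("Class:Customer/Operation:creditLimit")
def credit_limit(customer_record):
    # A generated body would sit here; this stub just reads a field.
    return customer_record.get("limit", 0)

def code_for(element_id):
    """Reverse lookup: which functions realize a given model element?"""
    return [name for name, eid in TRACE.items() if eid == element_id]
```

Lose the registry (or never build it) and you are in exactly the maintenance trouble Law warns about: a diagram on one side, code on the other, and no way to connect them.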
Mark McGregor, VP of business strategy with modeling tool vendor Popkin, adopts something of an opposite stance, but still doesn’t seem to like UML that much. “We’ll support UML – because we have to. The market will demand it,” he states. “And as it stands, UML will be okay. But the signs are that it will expand. The problem will be that the constraints for a commercial system running against a relational database won’t be the same as for a real-time embedded system. So a natural concern is that UML will end up falling between two stools.”

The variety of different applications found in the real world was, of course, one of the drivers behind the diversity of different methods that grew up in the first place. But Frankl argues that all object applications – by virtue of their object-oriented nature – are actually more similar than one might suppose. “One of the promises of OO is that by unifying data and function you can maximize the applicability of components across various systems,” he says. There are, therefore, important similarities between OO developments, whatever their basic purpose. And Frankl adds that there will be extension mechanisms in UML to add function where it’s needed for specific tasks.

Fowler is similarly optimistic that UML can cover all bases. “For [users] the rise of the UML’s notation is good news, because the standardization will move the debate onto the interesting issues. It won’t mean that the innovation in object techniques will end. Instead, people will be encouraged to add new techniques as extensions to the UML. This will make it easier for people to take them up.”
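The extension mechanisms Frankl and Fowler refer to are UML’s stereotypes, tagged values and constraints, which let a modeler attach domain-specific meaning to standard elements without changing the core notation. A toy in-memory rendering (the element names and tags are invented for illustration) shows how a database-oriented shop and a real-time shop could each extend the same core:

```python
from dataclasses import dataclass, field

@dataclass
class ModelElement:
    name: str
    stereotype: str = ""                 # e.g. a domain-specific label
    tagged_values: dict = field(default_factory=dict)

# Two shops extending the same core element type for different domains:
table_class = ModelElement("Customer", stereotype="persistent",
                           tagged_values={"table": "CUSTOMERS"})
isr_class = ModelElement("MotorController", stereotype="realtime",
                         tagged_values={"deadline_ms": 5})
```

Both remain ordinary model elements to any UML tool; only tools that understand the stereotype need act on the extra meaning.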
But McGregor worries that this approach is just as dangerous as falling between stools. “UML seems to be trying to be all things to all men – it’s in danger of getting too big and unwieldy,” he reckons. “The OMG is currently examining a release, but we know that Rational and Oracle are in discussion, and they’re looking at data elements and process, aiming to include them in UML. Our concern is – are the books going to get so big that no-one’s going to want to wade through them?” Maybe UML will become a massively rich language, with specific subsets that address different parts of the market, McGregor concedes. “But will that actually be any better than having different methods? We’d like to see UML kept as simple as possible, to encourage as many people as possible to adopt it. If UML becomes too complex, and people avoid using it, then we’ve wasted a great opportunity to improve the integrity and quality of systems.” And as McGregor rightly observes: “Let’s not forget – improving software quality is the problem we’re all trying to solve here.”