Over the past few weeks, Dr Katy Ring, the editor of our sister paper Software Futures, has provoked debate by raising the question of whether the Object Management Group is answering the needs of the object market. Alan Pope last week stated the case on behalf of the Object Group (CI No 2,438) but Katy is firmly sticking to her guns.
I find it very interesting that, of all the responses to my article (some of them very heated), nobody apart from those ringing in support appears remotely interested in addressing the central argument of the piece, which concerns the marketing of the Common Object Request Broker Architecture. That silence itself might suggest that I am correct in thinking that the Object Group has some serious problems to address in communicating its message to the wider market. But enough of the personal; let's get on to the professional task in hand, the rebuttal of the rebuttal. First and foremost, my article was never intended to form part of that curious genre of information technology journalese gaining prevalence in the US, widely referred to as the object-oriented backlash, exemplified by the recent lead article in Byte (see also July's Wired) – and maybe the mighty wrath of the IT powers that be would be better directed at those writers and publications? I completely believe that the future of software development lies with object-oriented technology, and I also think the success or failure of CORBA is of crucial significance in the take-up of that technology. It is because I believe CORBA to be of such central importance that I wrote the article in the first place. I also believe, along with Alan Pope, that people need to be well-informed about this technology and its place in the big picture.
And so to specifics: Alan says that by defining the difference between static and dynamic binding as the resolution of external references by library look-up at compile time or run time, I am misleading people. He goes on to cite the fact that some dynamic library implementations do not attempt resolution until the reference is used. As I understand it, this is an optimisation option within the dynamic invocation method, and although I can see that, in the pedantic sense, this challenges my definition, I don't see that it really jeopardises it as a working definition for my readership (who, on the whole, are not at the bleeding edge of research; they simply have to buy and implement its results). However, for the greater good of clear, concise communication, perhaps somebody could volunteer a better definition? Alan Pope then goes on to say that it was careless of me to equate dynamic marshalling with the dynamic model, as dynamic marshalling is frequently desirable and done in a static invocation model. Well, yes and no: as I understand it, marshalling is done under both static and dynamic binding; the difference is that the static model has the marshalling in the stub code, where it is generated by the Interface Definition Language compiler and fixed for each invocation. For example, in the Orbix product from Dublin-based Iona Technologies Ltd, Dynamic Invocation Interface arguments are passed into a Named-Value list and their marshalling deferred until Request::invoke is called, while static arguments are marshalled directly. Alan then goes on to suggest that I have overstated the argument concerning the practicalities of implementing the Object Group's Dynamic Invocation Interface for Object Request Brokers built on a static model. It seems that we are all in agreement that dynamic implementations are typically slower in performance, as there is a large overhead involved in supporting the Dynamic Invocation Interface.
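The distinction can be sketched in a few lines of C++ (a toy model of our own, nothing to do with any actual Object Request Broker implementation): under static binding the call target is fixed when the code is compiled and linked, while under dynamic binding the target is found by name in a run-time table – and, as some dynamic library implementations do, that look-up can even be deferred until the reference is first used.

```cpp
#include <functional>
#include <map>
#include <stdexcept>
#include <string>

// Toy model only -- not CORBA. "Static binding": the call target is fixed
// at compile/link time. "Dynamic binding": the target is found by name in
// a table at run time.
namespace sketch {

inline int add_one(int x) { return x + 1; }   // resolved by the linker

// Run-time registry standing in for an interface repository.
inline std::map<std::string, std::function<int(int)>>& registry() {
    static std::map<std::string, std::function<int(int)>> r;
    return r;
}

inline int static_call(int x) {
    return add_one(x);                         // reference fixed at compile time
}

inline int dynamic_call(const std::string& op, int x) {
    auto it = registry().find(op);             // look-up deferred to run time
    if (it == registry().end())
        throw std::runtime_error("unknown operation: " + op);
    return it->second(x);
}

} // namespace sketch
```

Both calls compute the same result; the difference is purely in when the reference is resolved, which is the sense in which my working definition was intended.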
Steve Vinoski, in the July-August 1993 edition of the C++ Report, concluded that for an operation with no arguments and a void return type, the Dynamic Invocation Interface requires a minimum of two function calls, at least one of which may result in an RPC. There is also the overhead of the Dynamic Invocation Interface having to interpret the request, not to mention the bulky application code required to implement this series of steps. For most applications, especially those written in a compiled language like C++, it is far more efficient to make requests through static IDL stubs than through the Dynamic Invocation Interface. Now, this large overhead required to support the Dynamic Invocation Interface involves a lot of coding to handle run-time interface parsing and type checking. This is not simply a distasteful job, as Alan implies; some companies also find it a difficult one – particularly in terms of coding it in such a way that users do not create memory leaks.
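As a rough illustration of where that overhead comes from (again a toy sketch of our own, not the Object Group's actual interface), compare a dynamically constructed request, which the receiving side must parse and type-check at run time, with the single direct call a static stub compiles down to:

```cpp
#include <string>
#include <vector>

// Toy illustration only -- not the OMG Dynamic Invocation Interface. A
// dynamically built request carries its operation name and arguments as
// data, which an interpreter must examine at run time; the static-stub
// equivalent is one direct, fully compile-time-checked call.
namespace toy_dii {

struct Request {
    std::string op;
    std::vector<long> args;   // real DII arguments would be typed name-value pairs
    long result = 0;
};

inline Request create_request(const std::string& op) { return Request{op, {}, 0}; }
inline void add_arg(Request& r, long v) { r.args.push_back(v); }

// The interpretive step: parse the request, check the operation, dispatch.
inline bool invoke(Request& r) {
    if (r.op == "sum") {                 // run-time interface parsing
        r.result = 0;
        for (long v : r.args) r.result += v;
        return true;
    }
    return false;                        // unknown operation
}

// What a static stub amounts to: one direct call.
inline long sum_stub(long a, long b) { return a + b; }

} // namespace toy_dii
```

Even in this trivial model the dynamic path takes three calls plus an interpretive dispatch where the static path takes one, which is the shape of the overhead Vinoski describes.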
Type called an ‘any’
What is more, its difficulty can be language-dependent, a fact that even the Object Group's own CORBA specification can be inferred to hint at when it says: The nature of the Dynamic Invocation Interface may vary substantially from one programming language to another. And so we arrive at the argument surrounding IDL, C and C++. Alan puts forward his view as follows: the Object Group first specified a C mapping and is hard at work on a C++ mapping. C is a static language, as is C++. So, by definition, if one were to run the Dynamic Invocation Interface through an IDL compiler that generated a C mapping, the result would be a static language representation on which to implement. The only 'difficult' part of such an implementation is in dynamically identifying parameters by their type. But this is easy in IDL because it pre-declares a dynamic type called an 'any'.
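For readers unfamiliar with the idea, a dynamic type in a static language amounts to a value carried alongside a run-time type tag. The class below is a deliberately minimal sketch of our own, far simpler than the real CORBA 'any', which supports arbitrary IDL types via TypeCodes:

```cpp
#include <stdexcept>
#include <string>

// A minimal sketch (not the CORBA 'any'): a value plus a run-time type
// tag, giving dynamic typing inside a statically typed language.
namespace sketch_any {

enum class Kind { None, Long, String };

class Any {
    Kind kind_ = Kind::None;
    long l_ = 0;
    std::string s_;
public:
    void set(long v)               { kind_ = Kind::Long;   l_ = v; }
    void set(const std::string& v) { kind_ = Kind::String; s_ = v; }
    Kind kind() const { return kind_; }
    long as_long() const {
        if (kind_ != Kind::Long) throw std::runtime_error("not a long");
        return l_;
    }
    const std::string& as_string() const {
        if (kind_ != Kind::String) throw std::runtime_error("not a string");
        return s_;
    }
};

} // namespace sketch_any
```

Even this two-type version needs tag checks on every extraction; generalising it to every IDL type, as an Object Request Broker must, is where the complexity Iona describes comes from.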
OK, so we are agreed that C++ functions can have the same name but different parameters, and it is only a small step from there to deduce that calling C programs from C++ is easier than calling C++ modules from C. It could also be argued that C is therefore an easier language in which to implement late binding, and that if you go further down that path of reasoning, then mapping between C and C++ is difficult. Which brings us back to my original question, which was: why, oh why, was the first specified IDL mapping to C? And is this a sensible departure point for CORBA? Alan argues that the fact that the first specified IDL mapping is to C makes no difference because of the dynamic type 'any'. I beg to differ, and I am not alone: talking of the Orbix architecture, Iona explains its compliance with CORBA and comments: Although 'any' and 'TypeCode' only receive short descriptions in the CORBA specifications, their implementation is complex and comprises a major part of the run-time source code. Hewlett-Packard Co's Steve Vinoski goes further in his company's A Review of the IDL C++ Mapping Submissions presented to the Object Group, where he says: Hewlett-Packard is extremely disappointed that the 'any' mapping that we presented to the other submitters during the attempts to negotiate a single merged mapping has been completely ignored and left out of the final submissions. During these negotiations, Hewlett-Packard agreed to several changes to the 'any' mapping in order to accommodate the needs and wishes of some of the other submitters. Several submitters have since voiced the opinion that the Hewlett-Packard 'any' mapping is a far better approach than what is described in either of the final C++ mapping submissions. In a nutshell, the Hewlett-Packard 'any' mapping provides type safety. It prevents application developers from having to typecast void pointers and thus prevents run-time type errors.
The added cost of this safety is minimal; indeed, it is no more costly than what a careful user must do to prevent run-time typecasting errors with the 'any' mappings proposed by the final submissions. Elsewhere Vinoski clarifies Hewlett-Packard's position regarding the C-C++ compatibility issue, explaining: On the surface it appears that mapping object references to pointers in both C and C++ allows them to be freely exchanged between the two languages, but this is not the case. In fact, the ability directly to exchange C++ pointer-style object references with C would require non-trivial changes to the CORBA 1.1 C language mapping. The opinion is expressed that some would much rather create a workable C++ mapping than cripple it for the sake of dubious C interoperability.
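Hewlett-Packard's type-safety point can be made concrete with a small sketch of our own (the names and shapes here are illustrative, not any submitter's actual mapping): an 'any' whose value is extracted by casting a void pointer gives a mistaken caller no warning at all, whereas a checked extraction that compares the stored type first turns the mistake into a detectable failure rather than a silent mis-read.

```cpp
#include <cstring>

// Illustrative only -- not any CORBA submission's 'any' mapping.
namespace safety {

struct UnsafeAny {
    const char* type = nullptr;  // a bare type name the caller must honour
    void* value = nullptr;       // caller typecasts -- nothing catches a mistake
};

struct CheckedAny {
    const char* type = nullptr;
    void* value = nullptr;
    // Extraction succeeds only if the stored type matches the request;
    // a mismatch is reported instead of producing a misinterpreted value.
    bool extract(const char* want, void** out) const {
        if (type == nullptr || std::strcmp(type, want) != 0) return false;
        *out = value;
        return true;
    }
};

} // namespace safety
```

The checked form is, in miniature, what a type-safe 'any' mapping buys: the cost of one comparison per extraction, against the elimination of a whole class of run-time type errors.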
Common sense notion
Alan next brings us to my contention that the Object Group is attempting to arbitrate between what are beginning to look like increasingly incompatible Object Request Broker technologies built in C. In my defence I refer to the quote above and to Iona's contention that the C language binding in CORBA 1.1 is cumbersome and not particularly easy to use. Alan is correct where he says that the mapping dictates neither the language used to implement an Object Request Broker nor the language used to implement an Object Request Broker client. However, it seems a common sense notion to presume that if a library is provided in C, then its clients will also be written in C, and to date CORBA specifies C functions, not C++ functions. What is more, in its own specifications for CORBA the Object Group appears to recognise that a C interface is not intuitive in an object-oriented environment, where it says: The most natural mapping would be to model a call on an Object Request Broker object as the corresponding call in the particular language. However, this may not always be possible for languages where the type system or call mechanism is not powerful enough to handle Object Request Broker objects. In this case, multiple calls may be required. For example, in C, it is necessary to have a separate interface for dynamic construction of calls, since C does not permit discovery of new types at run-time. To suggest, as Alan does, that I do not understand the Object Group process of technology adoption has to be a joke, since I spent a great deal of time watching the process for CORBA 1.0 and can reference previous articles to that effect. I am well aware that submissions to the Object Group are based on existing implementations (though not necessarily commercially available implementations).
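The specification's point about C can be seen in miniature below (illustrative C-style code of our own, not the CORBA C mapping): because C cannot discover the type of an argument at run time, a dynamically constructed call has to carry an explicit, caller-supplied description of every argument, which is exactly why a separate interface for dynamic call construction is needed.

```cpp
#include <cstddef>

// Illustrative C-style code, not the CORBA C mapping. With no run-time
// type discovery, C callers must tag each argument themselves.
extern "C" {

enum arg_tag { ARG_LONG = 1, ARG_DOUBLE = 2 };

struct call_arg {
    int   tag;     /* caller-supplied description of the type */
    void* value;   /* C cannot recover the type from the pointer alone */
};

/* Interpret the argument list; returns -1 on an unexpected tag. */
long sum_longs(const struct call_arg* args, size_t n) {
    long total = 0;
    for (size_t i = 0; i < n; ++i) {
        if (args[i].tag != ARG_LONG) return -1;
        total += *(const long*)args[i].value;
    }
    return total;
}

} // extern "C"
```

In C++ the compiler could check and dispatch on the argument types itself; in C that burden falls on the caller and on interpretive code like the above, which is why the C mapping needs the separate dynamic-construction interface the specification describes.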
So while I take his point that if no one has an implementation of an IDL to Cobol mapping, then a Cobol mapping is not made a part of the specification, one of the main points of my article argued that we now do have C++ mappings but they do not operate very efficiently if they have to embrace the existing IDL C mapping. I also mentioned in my article that the weaknesses of CORBA 1.1 are being addressed in CORBA 2.0 but argued that it is taking the Object Management Group too long to get version 2.0 out of the door. And I will not accept the argument that there are no Object Request Broker implementations offering functionality such as security and event handling because there are – the DOME Distributed Object Management Environment from Object Oriented Technologies Ltd is one example that springs to mind.