Abstraction and interoperability (was RE: Straw-man Proposal for our mission statement)

> -----Original Message-----
> From: Yaron Y. Goland [mailto:ygoland@bea.com]
> Sent: Monday, May 19, 2003 6:50 PM
> To: Assaf Arkin; Jean-Jacques Dubray
> Cc: 'Burdett, David'; Daniel_Austin@grainger.com; 
> public-ws-chor@w3.org
> Subject: RE: Straw-man Proposal for our mission statement
> 
> 
> 
> The only way to 'abstract' away dependency on 
> something is to completely re-invent the thing being depended on and then 
> define how your re-invention maps to the original. This is an extremely 
> expensive process that causes significant harm to interoperability 
> and should only be undertaken when there is no other choice.

I think that's an important consideration and that there are definitely
tradeoffs. On the other hand, the cost of NON-abstraction is fragility --
things can break when "insignificant" changes are introduced.  For example,
XSLT very wisely IMHO builds on an abstraction of XML, the "XPath data
model" (a precursor to the Infoset) so that the user can ignore
"insignificant" syntactical differences in the input document (such as
<empty attr='val' /> vs <empty attr="val"></empty>, which are identical as
far as XPath and DOM and the Infoset are concerned).
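To make that concrete, here is a small sketch (using Python's standard-library minidom purely as a stand-in for any DOM parser; it is not part of the XSLT discussion above) showing that the two serializations parse to indistinguishable trees:

```python
# Two syntactically different serializations of the same element,
# exactly as in the example above.
from xml.dom.minidom import parseString

a = parseString("<empty attr='val' />").documentElement
b = parseString('<empty attr="val"></empty>').documentElement

# At the data-model level the documents are identical:
assert a.tagName == b.tagName == "empty"
assert a.getAttribute("attr") == b.getAttribute("attr") == "val"
assert list(a.childNodes) == list(b.childNodes) == []

# Re-serializing both yields the same canonical form.
assert a.toxml() == b.toxml()
```

An XSLT processor working over this data model simply cannot observe the difference between the two inputs, which is the point of building on the abstraction rather than the bytes.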

> The 'abstractions' 
> introduced between WSDL and SOAP have caused so much interoperability 
> pain that two different organizations had to be formed to sort out the 
> resulting mess. 

Could you elaborate on how the abstractions between WSDL and SOAP caused
interoperability problems?  And what organization besides WS-I had to be
formed to deal with them?

> What we need is a little less abstraction and a lot more 
> interoperability.

For what very little this observation may be worth, I think the worst
mistake we made when designing the DOM API was *not* abstracting away each
target programming language's conception of the fundamental data
structures. DOM invented its own types, which forced the user to do the
mapping between, for example, a DOM NodeList and the Java collection types.
In 20/20 hindsight, I think that defining a mapping from some abstract
NodeList interface to the Java List type in the language binding would have
been a better idea than asking implementers to invent (and users to deal
with) a DOM-specific class. DOM applications are *theoretically* interoperable,
but as a practical matter the user community and implementations are
fragmented across alternatives such as JDOM that *do* put the Java types
into the DOM core.
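As it happens, Python's standard-library DOM binding (not something discussed in this thread, so take it only as an illustration) does roughly what I am suggesting: its NodeList keeps the DOM-spec accessors but is implemented as a subclass of the native list type, so users get the language's own collection for free:

```python
# Python's minidom NodeList keeps the DOM-spec API (item(), length)
# but *is* a native list -- the binding maps the abstract interface
# onto the language's own collection type.
from xml.dom.minidom import parseString

doc = parseString("<root><x/><x/></root>")
nodes = doc.getElementsByTagName("x")

assert isinstance(nodes, list)     # native collection type
assert nodes.length == 2           # DOM-spec property still works
assert nodes.item(0) is nodes[0]   # DOM accessor and native indexing agree
```

A Java binding done the same way would have let a NodeList be consumed anywhere a java.util.List is expected, instead of forcing the adapter code onto every user.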
 
Others would solve the problem by giving up on abstraction and interoperability
altogether (e.g. JDOM is defined with classes rather than interfaces, and
makes no bones about being Java-specific rather than language-neutral).  I
think that's basically what BPEL is doing, e.g. by defining their own
concrete execution language rather than dealing with abstractions and
mappings and bindings, which definitely *are* harder to do properly and are
difficult for ordinary business programmers to grasp.  But for better or
worse, BPEL has grabbed the "easy" space, and we (if we choose to grapple
with it) are left with the harder problem of finding the conceptual
similarity in diverse implementation approaches and developing bindings. 

Finding the appropriate level of abstraction is probably the hardest but
most universal problem in software design. If nobody has cited this article
http://www.joelonsoftware.com/articles/LeakyAbstractions.html yet, I
recommend it.  I'm not sure what it implies for the problem at hand, but I
doubt that there is a simple answer.

Received on Monday, 19 May 2003 20:08:15 UTC