RE: Myth of loose coupling

Agreed, the natural conclusion of this is that the first principle of
loose coupling is "ubiquity". If something runs readily on all
platforms, legacy ones and my mother's typewriter too, loosely coupled
code forms (as in life forms) can survive and maybe thrive on top of
it. Without such a substrate, they are out of luck and die, as so many
good technologies and concepts did in the past. The second principle of
loose coupling is cost: if it costs too much, it ain't a loose coupling
solution.

So XML is here, why go back? (I don't think that you are suggesting
that; I think you are pointing to the fact that the concepts behind
XML, i.e. the metamodel of XML, are the real value, and I totally
agree.)

If only we had already agreed on a common information model, we would
also have a ubiquitous semantic namespace that we could all use to
share data, information and knowledge... (sigh).

The good thing about XML (and maybe the difference with other
technologies), but at the same time the bad thing, is that you can
seamlessly carry data, metadata (beyond the node names, e.g. Webber's
BizCodes), processing instructions, and bits of or complete process
definitions (I can send you a purchase order together with the
collaboration that I want you to execute to process this purchase order
with me)... XML has moved the notion of a method call to the semantic
level. No longer are targets, methods and arguments separated; XML has
unified the way we express them, of course at a cost: it forces us to
agree on semantics. Some people might say this cost is too high and
actually prevents us from achieving loose coupling. I am ready to
debate this objectively. I think the best approach would really be to
go to the metamodel level, agree on all the concepts that need to be in
place and their relationships, and then look back at the existing
technologies and map them onto it. Maybe we would conclude that the
best way is to use HTTP facilities, or maybe not; the important thing
is that it is much easier to agree at the metamodel level than at the
technology level.
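To make the "data + metadata + process in one message" idea concrete, here is a minimal sketch using only the Python standard library; all element names (Message, PurchaseOrder, Collaboration, Step) are invented for illustration and are not taken from any standard:

```python
import xml.etree.ElementTree as ET

# A hypothetical message carrying both a purchase order (the data) and
# the collaboration the sender wants the receiver to execute (the
# process). Element and attribute names are made up for this sketch.
message = ET.Element("Message")
po = ET.SubElement(message, "PurchaseOrder")
ET.SubElement(po, "Item", sku="ABC-123", qty="10")
collab = ET.SubElement(message, "Collaboration", name="ProcessPO")
ET.SubElement(collab, "Step", action="acknowledge")
ET.SubElement(collab, "Step", action="ship")

# The receiver gets one stream containing data, metadata, and the
# requested process, with no separately negotiated API.
wire = ET.tostring(message, encoding="unicode")
```

The point of the sketch is only that target, method and arguments travel unified in one document, rather than being split across an interface definition and a call.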

So personally, I see this debate over URL-encoded method calls as a bit
childish: get, post, update, whatever, REST, UNREST... But how many fax
machines would you need if your fax machine could understand what a
purchase order is, or an urgent message for the boss, or some spam, ...?

IMHO, the vision of web services is to enable just that: build a
metadata framework that enables me to process/route/transform every bit
of message that hits my organization (whether it is an internal or
external message). The challenge is of course to make it as simple as
possible, such that it also becomes a "ubiquitous" technology, without
creating a mess, or should I say a hell in this case. So there might be
a few verbs to this ubiquitous fax machine, because we need to agree on
a basic namespace to relate to the nature of an incoming message;
personally I don't have a strong opinion either way. However, the only
thing I know is that it had better enable loose coupling!! Or else it
will become another CORBA or DCOM.
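The "few verbs plus a basic namespace describing the nature of a message" idea can be sketched as a dispatcher keyed on the root element of whatever arrives; the element names and destinations below are hypothetical:

```python
import xml.etree.ElementTree as ET

# A minimal "ubiquitous fax machine": route any incoming XML message by
# the name of its root element (the "nature" of the message) rather
# than by a per-service API. All names and destinations are invented.
def route(message: str) -> str:
    nature = ET.fromstring(message).tag
    handlers = {
        "PurchaseOrder": "queue:orders",
        "UrgentMessage": "notify:boss",
    }
    return handlers.get(nature, "queue:spam")

destination = route("<PurchaseOrder><Item qty='10'/></PurchaseOrder>")
```

The dispatcher needs agreement only on the shared vocabulary of message natures, not on any interface to the systems behind it.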

When you say:
>> I want to
>> decouple the service definition from the actual implementation, so I
>> can gain considerable benefits.

Actually, we are already past that point: as the notion of a method
call blurs, so does the implementation of the method call. Web services
are not so much about hiding the implementation; they rather completely
change the notion of implementation. Web services are about specifying
an action (very close to the semantic level) and letting the
implementor of this action decide how it is going to deal with it. This
is why I am always very sad when the debate focuses on modeling "APIs".
APIs don't exist anymore in the web services space. If all one wants to
achieve is to add an XML layer to one's xxxCI, one is missing the point
(see my argument about pipeline architecture below too).

At this point, you must be wondering why this guy is talking so much
about fax machines. Well, to me the fax machine is the prototypical
example of loose coupling. It too relies on a small set of ubiquitous
technologies and protocols... 

JJ-

On another note you say (and this is another debate I think):
>> Am I forever destined to use IIOP or RMI? Or can I also consider
>> using XML for these scenarios?

>> One approach would be to say: XML is only good for loosely coupled
>> services, for everything else use IIOP/DCOM/DCE/whatever.

Look at the struggle it is today (2003) to come up with a valid
abstraction of what a "business object" is, and of the best way to
spread it over different tiers. I contend that the only thing we know
today is that it is not an "object", i.e. represented by a single
class. Business objects do have very specific properties that make XML
a good fit to represent the data they contain. Just as there is JDO/ADO
today, there will one day be an XDO, simply because XML is a superior
technology for representing datasets: it maps to ER far better than
object graphs do (!? think of a multilevel bill of materials),
aggregation of heterogeneous result sets is trivial, XML travels better
up to the presentation tier, XML has a far better metadata-driven
datatyping/validation framework, and business objects also need to
travel beyond the presentation layer... So my guess is that web
services are going to be a mainstream technology, and so will business
process technology; ultimately that will put so many constraints on the
architecture of business applications that the natural evolution will
be to use XML data streams all the way from and to the database. Only
then will we have a business object concept that is easy to implement
and use. Ultimately, the real paradigm shift is not between Get+URI and
Post+action; it is in the fact that data, method invocation and code
are unified under one umbrella, and that this becomes a "stream" that
is processed in a pipeline architecture.
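A minimal sketch of that pipeline idea, assuming nothing beyond the Python standard library: an XML stream passes through composable stages (validation, enrichment, presentation) on its way from the database toward the presentation tier. The stage names and the Dataset element are invented for illustration:

```python
from functools import reduce
import xml.etree.ElementTree as ET

# Each stage takes the document (the "stream") and passes it on.
def validate(doc):
    # reject anything that is not the dataset we expect
    assert doc.tag == "Dataset"
    return doc

def enrich(doc):
    # e.g. mark the dataset after aggregating heterogeneous results
    doc.set("enriched", "true")
    return doc

def to_presentation(doc):
    # final stage serializes for the presentation tier
    return ET.tostring(doc, encoding="unicode")

def pipeline(doc, stages):
    # thread the document through the stages, left to right
    return reduce(lambda d, stage: stage(d), stages, doc)

rendered = pipeline(ET.fromstring("<Dataset/>"),
                    [validate, enrich, to_presentation])
```

The design point is that every stage works on the same self-describing stream, so stages can be added, removed or reordered without renegotiating any API between them.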

>>-----Original Message-----
>>From: Assaf Arkin [mailto:arkin@intalio.com]
>>Sent: Wednesday, January 08, 2003 8:44 PM
>>To: Jean-Jacques Dubray; edwink@collaxa.com; 'David Orchard'; 'Mark
>>Baker'; 'Ugo Corda'; 'Champion, Mike'
>>Cc: www-ws-arch@w3.org
>>Subject: RE: Myth of loose coupling
>>
>>> There is one point that I did not see brought up in the discussion
>>> (I may have missed it too): "well formed" versus "valid" XML.
>>>
>>> HTTP enables loose coupling in the sense that it is ubiquitous:
>>> everybody has a system that can listen in. But from a pure technical
>>> perspective it is no less coupled than CORBA or DCOM.
>>>
>>> The real difference between traditional "brittle" technologies and
>>> XML technologies (XML, XSLT, XPath amongst others) is that one can
>>> now think of building systems where the content of a message does
>>> not need to be understood in its entirety in order to take action.
>>> This is very different from the object world, where you have to know
>>> a whole class or interface structure before one can interact with
>>> the data set.
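A small illustration of the partial-understanding point above, with an invented invoice structure: the receiver extracts only the one element it cares about and ignores everything it does not understand, with no prior knowledge of the full schema:

```python
import xml.etree.ElementTree as ET

# An incoming message whose overall structure the receiver does not
# know. Only the Total element matters to this particular consumer;
# all element names here are made up for the example.
msg = """<Invoice>
  <Unknown><Nested stuff="ignored"/></Unknown>
  <Total currency="USD">42.50</Total>
  <MoreUnknown/>
</Invoice>"""

# Find the Total wherever it appears, skipping everything else.
total = ET.fromstring(msg).find(".//Total")
amount = float(total.text)
```

An IIOP/CORBA consumer, by contrast, could not unmarshal this message at all without the complete interface definition.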
>>
>>I am not sure that the XML approach is technologically superior to the
>>IIOP/CDR approach. After all, you do have a uniform way to represent
>>the data (CDR) and you can propose tools that perform the same
>>functionality as with XML: transformation, extraction, even
>>human-readable representation.
>>
>>Where XML seems to excel over IIOP, DCOM, TxRPC, RMI, DCE, IDE and a
>>variety of other protocols/formats is in the network effect. You have
>>cheap (free), accessible technology to transform a document or extract
>>pieces of it. Even if it was invented for styling HTML pages, you can
>>use it to transform invoices. At the very least you can edit a
>>document using NotePad or vi.
>>
>>You can have the same set of services for IIOP, but they are not
>>readily accessible, they don't work for DCOM or IDE, and usually they
>>are task specific: you can look at IDLs and network packets, but you
>>cannot look at content or persisted data.
>>
>>I wouldn't say IIOP is brittle; I would just point out that the
>>network effect plays to the advantage of XML in giving an overall
>>better value proposition, with ready-made solutions to many of the
>>problems that with IIOP would actually require you to buy or develop
>>very specific solutions.
>>
>>
>>> Another aspect of XML technologies is that they enable developing
>>> systems that can serve up data in the format that the consumer of
>>> the information wants. This enables another degree of loose coupling
>>> by reducing the cost of implementing a server that can interact with
>>> a large variety of clients.
>>
>>You can build a service that creates data in the form of an object
>>model given in Java. You can then serve it in binary form (e.g. CDR)
>>or in textual form (e.g. XML). You can have multiple CDR
>>representations; you just need a toolkit that lets you support
>>multiple representations. You can have multiple XML representations;
>>again, you need a toolkit that lets you support multiple
>>representations. I don't know where you could get the former, but I
>>know where you can download multiple tools to do the latter.
>>
>>Because XML transformation tools are readily available, they all
>>support the same transformation languages, and there are hundreds of
>>books and articles telling you how to use them, transformation is
>>something we do on a daily basis. Not something we do with IIOP. So
>>you get XML deployed in more places, which results in more tools and
>>more information on how to use these tools, which further increases
>>the network effect.
>>
>>That's actually how we got to be here in the first place. It wasn't
>>the superiority of HTTP/HTML/XML over other solutions, but the
>>ubiquity of it all.
>>
>>
>>> (I know these two statements are trivial; however, they must be
>>> part of a discussion on loose coupling.)
>>>
>>> All these technologies are completely standard, available on all
>>> platforms. This is why you can get loose coupling if you want to (it
>>> has a cost of course). Too often I see the web services discussion
>>> entrenched in modeling an API. What is such a big deal about
>>> modeling an API? What are you going to get that you can't get by
>>> looking at a Java or C# class? All this is metadata.
>>>
>>> It is time to use XML for what it is really useful for: carrying
>>> complex data sets that are semantically accessible (as opposed to
>>> structurally accessible). This approach has tremendous benefits when
>>> connecting 100s to 1000s of entities together (how can you do it
>>> without loose coupling?).
>>
>>The question that one would ask is: if XML is so great, why can't I
>>use it everywhere?
>>
>>On the one hand I have these loosely coupled services, where I don't
>>want to work at the API level. I want to carry complex data sets that
>>are semantically accessible. Like purchase orders and invoices. I
>>want to decouple the service definition from the actual
>>implementation, so I can gain considerable benefits.
>>
>>On the other hand I have these tightly coupled services: two
>>components in the same system, two separate layers in the same stack
>>that are decoupled by some low-level interface. Am I forever destined
>>to use IIOP or RMI? Or can I also consider using XML for these
>>scenarios?
>>
>>One approach would be to say: XML is only good for loosely coupled
>>services; for everything else use IIOP/DCOM/DCE/whatever. Another
>>approach would be to say: services should only be loosely coupled.
>>The concept of distributed components does not exist outside the
>>domain of network interactions. Your services are either loosely
>>coupled, or they do in-process calls. Yet another approach would be
>>to say: go right ahead and use XML wherever you see fit, even if all
>>you are doing is out-of-process calls between two components that are
>>part of the same application (at least IIOP and DCOM let you do
>>that).
>>
>>I personally believe that XML would benefit more from the network
>>effect, and the network effect would greatly increase, if we could
>>use XML everywhere: between loosely coupled systems that exchange
>>data that is semantically accessible, and between components that are
>>tightly designed but separated by the boundaries of their containers.
>>In some cases I worry a lot about having the proper form of
>>abstraction; in other places I worry a lot about not over-engineering
>>the solution with too much abstraction. In both cases I use XML for
>>its obvious benefits, without passing judgment that one approach is
>>radically different from the other.
>>
>>arkin
>>
>>>
>>> I wrote a couple of papers back in 1999 on the subject, in case
>>> anyone is interested to read:
>>>
>>> An eXtensible Object Model for Business-to-Business eCommerce
>>> Systems (short)
>>> http://jeffsutherland.com/oopsla99/Dubray/dubray.html
>>>
>>> Business Object Modeling: An XML-based approach
>>> http://www.odx.it/doc/businessobjectmodeling.pdf
>>>
>>> Cheers,
>>>
>>> Jean-Jacques Dubray____________________
>>> Chief Architect
>>> Eigner  Precision Lifecycle Management
>>> 200 Fifth Avenue
>>> Waltham, MA 02451
>>> 781-472-6317
>>> jjd@eigner.com
>>> www.eigner.com
>>>
>>>
>>>
>>> >>-----Original Message-----
>>> >>From: www-ws-arch-request@w3.org [mailto:www-ws-arch-request@w3.org]
>>> >>On Behalf Of Edwin Khodabakchian
>>> >>Sent: Monday, January 06, 2003 11:22 PM
>>> >>To: 'David Orchard'; 'Assaf Arkin'; 'Mark Baker'; 'Ugo Corda';
>>> >>'Champion, Mike'
>>> >>Cc: www-ws-arch@w3.org
>>> >>Subject: RE: Myth of loose coupling
>>> >>
>>> >>
>>> >>Dave,
>>> >>
>>> >>> There are kind of 2 different definitions of loose coupling:
>>> >>> 1) Changes to the interface do not affect software.
>>> >>> 2) Changes to the software do not necessarily affect the
>>> >>> interface.
>>> >>>
>>> >>> I'm focusing on #1, not #2.
>>> >>
>>> >>Then you are correct.
>>> >>
>>> >>> I also 100% agree with having "coarse-grained" or
>>> >>> "document-oriented" web services.  But I don't think this has
>>> >>> much to do with loose coupling.
>>> >>
>>> >>I agree that coarse-grained has much to do with
>>> >>performance/reliability/efficiency/latency. My definition of
>>> >>coarse-grained though is more about combining multiple operations
>>> >>into one call and passing in as much data as possible:
>>> >>a.foo( A ); a.goo( B ) -> a.foogoo( A union B ).
>>> >>
>>> >>My point was slightly different here:
>>> >>Imagine that Verisign is publishing a MerchandRegistration service
>>> >>to allow merchants to register to their online payment service.
>>> >>Registration is slightly different for merchants that want to
>>> >>offer Visa only compared to merchants that want to offer Visa +
>>> >>Mastercard. There are differences in the XML form that needs to
>>> >>be submitted and the back-end process Verisign goes through.
>>> >>
>>> >>DESIGN OPTION A
>>> >>As a developer, I can decide to design the application as one
>>> >>service with 2 operations:
>>> >>Operation #1: processVisaRegistration( xmlVisaOnlyForm )
>>> >>Operation #2: processVisaAndMasterCard( xmlVisaAndMasterCard )
>>> >>This is what most developers using today's web services toolkits
>>> >>would be encouraged to do.
>>> >>
>>> >>DESIGN OPTION B
>>> >>As a developer, I create 2 Web Queue resources with a generic
>>> >>interface:
>>> >>WebQueue #1: https://www.verisign.com/payflow/visaOnly
>>> >>WebQueue #2: https://www.verisign.com/payflow/visaAndMasterCard
>>> >>GET on those resources returns the meta information regarding the
>>> >>XML data required by each queue. POSTing the correct XML document
>>> >>initiates an asynchronous process.
>>> >>
>>> >>Let's imagine that Intuit is using the Verisign service for
>>> >>visaOnly and eBay is using Verisign for visaAndMasterCard.
>>> >>
>>> >>Let's imagine that a new regulation comes along, the
>>> >>visaAndMasterCard XML data structure changes, and you need to
>>> >>deploy a new version of the service. [Note: given that both
>>> >>services are asynchronous, you cannot simply overwrite the old
>>> >>version. You need side-by-side versioning for a period of time.]
>>> >>
>>> >>In design A, this simple change impacts both Intuit and eBay. In
>>> >>design B only eBay is impacted.
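Design option B above can be sketched as two resources behind a generic GET/POST interface; the paths, schema names and return values below are illustrative only:

```python
# Each registration form is its own resource. GET returns the meta
# information (here, just a schema name); POST enqueues the document
# for asynchronous processing. All names are invented for this sketch.
queues = {
    "/payflow/visaOnly": {"schema": "visaOnly-v1.xsd", "pending": []},
    "/payflow/visaAndMasterCard": {"schema": "visaAndMC-v1.xsd", "pending": []},
}

def get(path):
    # GET: describe the XML data this queue requires
    return queues[path]["schema"]

def post(path, xml_doc):
    # POST: initiate an asynchronous process for this document
    queues[path]["pending"].append(xml_doc)
    return "202 Accepted"

# A new regulation changes only one queue's schema; the other resource
# (and its clients, e.g. a visaOnly user) is untouched.
queues["/payflow/visaAndMasterCard"]["schema"] = "visaAndMC-v2.xsd"
```

Because the interface is generic, versioning one resource never forces a change on clients of the other, which is exactly the side-by-side versioning argument made above.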
>>> >>
>>> >>This could be a design pattern or best practice that developers
>>> >>could adopt. But please note that if Web services were forced to
>>> >>be good web citizens and expose a generic interface, then
>>> >>developers would be constrained (good) to design applications
>>> >>using B. I believe that it would also help a lot of developers in
>>> >>decomposing complex interactions into resources.
>>> >>
>>> >>Finally, as described in this example and Mike's previous expense
>>> >>report use case, I think that a generic interface (REST) and an
>>> >>extensible envelope and metadata (SOAP/WSDL) could be orthogonal
>>> >>rather than conflicting: Web services could be built on top of
>>> >>RESTful resources and still provide all the benefits SOAP Features
>>> >>and XML Schema provide. No?
>>> >>
>>> >>Edwin
>>>
>>>

Received on Thursday, 9 January 2003 08:41:21 UTC