
Application or Infrastructure? (was FW: LC Comments: Web Method Feature)

From: Champion, Mike <Mike.Champion@SoftwareAG-USA.com>
Date: Mon, 8 Jul 2002 18:03:59 -0600
Message-ID: <9A4FC925410C024792B85198DF1E97E403800EDB@usmsg03.sagus.com>
To: "'xml-dist-app@w3.org'" <xml-dist-app@w3.org>

> -----Original Message-----
> From: Paul Prescod [mailto:paul@prescod.net]
> Sent: Monday, July 08, 2002 6:53 PM
> To: gtn@rbii.com; 'xml-dist-app@w3.org'
> Subject: Re: FW: LC Comments: Web Method Feature

> We are not trying to stop people from
> solving problems. We are trying to encourage them to solve
> them *in the most interoperable way*.

Understood. Still, one sometimes gets the impression that REST advocates
see interoperability as the most important criterion, one that must
trump all others.

Here's a real-world example: some geographically dispersed
organization needs to regularly exchange huge files between far-flung
sites. They *could* do this over the Web, but there is enough
intrinsic unreliability in the lower levels of the network that HTTP
or FTP requires a lot of operator intervention or inefficient retries
to get the job done. So they use a proprietary message queuing system
that takes care of things more efficiently in terms of both human
time and network bandwidth. Between organizations, I can fully agree
that interoperability almost always trumps efficiency, but this is a
hard sell for EAI, B2B, etc. installations on a less open network.
After all, given all the work these folks do to make sure that only
authorized people and trusted programs can communicate over their systems,
it's not all that much more trouble to ensure that compatible software is
installed at both ends.

More generally, if one treats HTTP or FTP as the highest-level protocol,
then one is saying that the application code (or human operator) is supposed
to take care of the details of confirmations and retries. It's nice that
proper use of REST principles guarantees that GETs are safe and PUTs are
idempotent, but that's still something the application layer has to deal
with. The appeal of SOAP (or proprietary) messaging is that this grunt work
can be shoved down into the *infrastructure*.
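
To make that concrete, here is a minimal sketch (modern Python, with a
made-up URL and function name) of the grunt work that lands on the
application when plain HTTP is the top of the stack; it leans on PUT's
idempotence to make blind retries safe:

    import time
    import urllib.request
    import urllib.error

    def put_with_retries(url, payload, attempts=5, backoff=2.0):
        # PUT is idempotent, so repeating it after an ambiguous failure
        # is safe -- but the application still has to do the repeating.
        for attempt in range(attempts):
            req = urllib.request.Request(url, data=payload, method="PUT")
            try:
                with urllib.request.urlopen(req, timeout=30) as resp:
                    if resp.status in (200, 201, 204):
                        return resp.status         # delivery confirmed
            except (urllib.error.URLError, OSError):
                pass                               # transient failure
            time.sleep(backoff * 2 ** attempt)     # exponential backoff
        raise RuntimeError("gave up after %d attempts" % attempts)

A message queuing system buries exactly this loop (plus persistence
across restarts) below the application.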

Also, maybe I'm missing something: if MQSeries or some similar system
supports read, write, update, and delete of arbitrary chunks of data,
why isn't this RESTful? If an application uses some SOAP headers to
ask for a reliable transport (which could be guaranteed by hardware,
HTTPR, a proprietary protocol, or whatever) rather than insisting that
reliability be the responsibility of the application, why is this a
Bad Thing? OK, it's not widely interoperable with systems that don't
understand those headers, but what if the user doesn't care?
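
For the sake of argument, here is a sketch of what such a request might
look like; the reliability header, its namespace, and the
delivery-assurance vocabulary are invented for illustration (only the
envelope namespace, from the June 2002 SOAP 1.2 drafts, is real):

    import urllib.request

    # Hypothetical header: no published spec defines rel:Reliable.
    ENVELOPE = """<?xml version="1.0"?>
    <env:Envelope xmlns:env="http://www.w3.org/2002/06/soap-envelope">
      <env:Header>
        <rel:Reliable xmlns:rel="http://example.org/ns/reliability"
                      env:mustUnderstand="true">
          <rel:DeliveryAssurance>exactly-once</rel:DeliveryAssurance>
        </rel:Reliable>
      </env:Header>
      <env:Body>
        <!-- application payload -->
      </env:Body>
    </env:Envelope>"""

    req = urllib.request.Request(
        "http://example.org/endpoint",             # hypothetical endpoint
        data=ENVELOPE.encode("utf-8"),
        headers={"Content-Type": "application/soap+xml; charset=utf-8"},
        method="POST")
    # urllib.request.urlopen(req) would send it; a receiver that does
    # not understand a mustUnderstand header is obliged to fault --
    # which is precisely the interoperability cost conceded above.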

Could we establish a modus vivendi: "Use REST principles when you care
about interoperability over the Web; use whatever works when you don't"?
Or: "Use REST principles to get scalability on an unknown infrastructure;
use the capabilities you already have if you paid the big bucks to get
scalability"? If so, SOAP offers more flexibility than REST needs, but no
more than some of these other use cases require.

Received on Monday, 8 July 2002 20:04:32 UTC
