
Re: Application or Infrastructure? (was FW: LC Comments: Web Method Feature)

From: Paul Prescod <paul@prescod.net>
Date: Tue, 09 Jul 2002 06:23:20 -0700
Message-ID: <3D2AE3C8.5033D66F@prescod.net>
To: "Champion, Mike" <Mike.Champion@SoftwareAG-USA.com>, "'xml-dist-app@w3.org'" <xml-dist-app@w3.org>

"Champion, Mike" wrote:
> > We are not trying to stop people from
> > solving problems. We are trying to encourage them to solve
> > them *in the most interoperable way*.
> Understood.  Still, one sometimes gets the impression that REST advocates
> see interoperability as the most important criterion, one that must
> trump all others.

In W3C working groups.

> Here's a real world example:  Some geographically dispersed
> organization needs to regularly exchange huge files between far-flung
> sites.  They *could* do this over the web, but there is enough
> intrinsic unreliability in the lower levels of the networks that HTTP
> or FTP require a lot of operator intervention or inefficient retries
> to get the job done.  So, they use a proprietary message queuing system
> that takes care of things more efficiently in terms of both human
> time and network bandwidth. 

Wonderful. I'm glad that they have a solution to their problem. But I
don't think it has anything to do with xml-dist-app@w3.org. They are
using a proprietary protocol running on proprietary software and solving
their problem without our help. I don't see how a standards body *could*
help them if their software already does what they need and
interoperability is not a problem.

> ....
> More generally, if one treats HTTP or FTP as the highest-level protocol,
> then one is saying that the application code (or human operator) is supposed
> to take care of the details of confirmations and retries.  

Retries are by definition handled by software. Whether they are handled
by *application* software depends upon how your software toolkit divides
the world into application and toolkit. If I were implementing a system
that repeatedly used reliability extensions to HTTP, I would wrap those
up in a library the second time I used them.
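
To make that concrete, here is a minimal sketch in Python of the kind
of wrapper I mean. Everything in it is illustrative: the function name
and the retry policy are invented here, not taken from any toolkit or
specification.

    import time
    import urllib.request
    import urllib.error

    def reliable_put(url, data, retries=3, backoff=2.0):
        # PUT is idempotent, so blind retries are safe: replaying the
        # same request cannot produce a different final state than the
        # first success would have.
        for attempt in range(retries):
            try:
                req = urllib.request.Request(url, data=data, method="PUT")
                with urllib.request.urlopen(req) as resp:
                    return resp.status
            except urllib.error.URLError:
                if attempt == retries - 1:
                    raise
                time.sleep(backoff * (attempt + 1))

Once the retry loop lives in a function like that, the application code
above it never sees a retry again. Whether you then call that layer
"application" or "infrastructure" is exactly the toolkit-boundary
question.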

> ... It's nice that
> proper use of REST principles guarantees that GETs are safe and PUTs
> idempotent,
> but that's still something that the application layer has to deal with. The
> appeal of SOAP (or proprietary) messaging is that this grunt work can be
> shoved down into the *infrastructure*.

I do not see anything in SOAP that automates this to any greater extent
than HTTP. To get reliable delivery in SOAP you must define a mandatory
extension header and both ends of the software must implement it. To get
reliable delivery in HTTP, you can do the same thing (or do it at the
application level, whichever is most convenient).
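
A rough sketch of what "the same thing" looks like on each side. The
header names (AckRequested, X-Ack-Requested) and the namespace URI are
hypothetical, invented here purely for illustration; only the SOAP 1.1
envelope namespace and mustUnderstand attribute are real.

    # SOAP: a mandatory extension header. mustUnderstand="1" forces a
    # receiver that does not implement the extension to fault rather
    # than silently ignore it.
    SOAP_ENVELOPE = """\
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Header>
        <r:AckRequested xmlns:r="http://example.org/reliability"
                        soap:mustUnderstand="1"/>
      </soap:Header>
      <soap:Body>...</soap:Body>
    </soap:Envelope>
    """

    # HTTP: the same agreement expressed as an extension header that
    # both ends have agreed to implement.
    import http.client

    conn = http.client.HTTPConnection("example.org")
    conn.request("PUT", "/inbox/msg-42", body=b"...",
                 headers={"X-Ack-Requested": "true"})
    print(conn.getresponse().status)

In both cases the reliability contract is one header plus two
implementations. Neither protocol gives it to you for free.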

Furthermore, if we standardize the HTTP header then it can be used on
the ordinary web as well as the "services" web.

> Also, maybe I'm missing something: If MQ Series or some similar system
> supports read, write, update, and delete of arbitrary chunks of data,
> why isn't this RESTful?  

It may well be RESTful, but that's an issue for rest-discuss. IMHO, it
is only interesting to xml-dist-app when it becomes an interoperability
problem.

Come discuss XML and REST web services at:
  Open Source Conference: July 22-26, 2002, conferences.oreillynet.com
  Extreme Markup: Aug 4-9, 2002,  www.extrememarkup.com/extreme/