RE: Visibility (was Re: Introducing the Service Oriented Architectural style, and its constraints and properties.

> -----Original Message-----
> From: Mark Baker [mailto:distobj@acm.org]
> Sent: Saturday, March 01, 2003 6:43 PM
> To: Champion, Mike
> Cc: www-ws-arch@w3.org
> Subject: Re: Visibility (was Re: Introducing the Service Oriented
> Architectural style, and its constraints and properties.
> 



> But an interaction is invisible if its semantics are other than those
> expected by the component when examining the message for the action (in
> HTTP, the method in the request line).  A SOAP message that says
> "getStockQuote" has "getStockQuote" semantics, and is therefore
> invisible to SMTP, HTTP, FTP, etc. intermediaries that have no prior
> knowledge of stock quotes.

And GET http://www.quotes-be-us.com/ibm is opaque to JMS implementations.
That's why SOAP exists: to allow SOAP intermediaries to work the same way
across protocol bindings.  They tunnel over HTTP, FTP, SMTP, JMS
implementations, MQ, and other proprietary stuff with equal oblivion.
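
To make that point concrete, here's a rough sketch (Python, SOAP 1.1
syntax; the routing header, namespaces, and endpoint are made up for
illustration): everything an intermediary needs to act on rides inside the
envelope itself, not in anything HTTP-, SMTP-, or JMS-specific.

# A sketch only: a SOAP 1.1 envelope whose header block is addressed to
# an intermediary via the actor attribute.  The "r:path" header and the
# quote namespace are hypothetical.
import xml.etree.ElementTree as ET

SOAP_ENV = "http://schemas.xmlsoap.org/soap/envelope/"

envelope = """<?xml version="1.0"?>
<env:Envelope xmlns:env="%s">
  <env:Header>
    <r:path xmlns:r="http://example.org/routing"
            env:actor="http://example.org/next-hop"
            env:mustUnderstand="1">
      <r:to>http://www.quotes-be-us.com/ibm</r:to>
    </r:path>
  </env:Header>
  <env:Body>
    <q:getStockQuote xmlns:q="http://example.org/quotes">
      <q:symbol>IBM</q:symbol>
    </q:getStockQuote>
  </env:Body>
</env:Envelope>""" % SOAP_ENV

# An intermediary can pick out the header blocks addressed to it without
# caring what protocol the envelope arrived on.
root = ET.fromstring(envelope)
for header_block in root.find("{%s}Header" % SOAP_ENV):
    print(header_block.tag, header_block.get("{%s}actor" % SOAP_ENV))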

> 
> The important point for the purpose of this discussion being, that in
> not all cases are the semantics of the message maintained.  And that's
> assuming that there's even a semantically similar method with which to
> do any bridging (i.e. HTTP POST <-> SMTP DATA); if there isn't, then in
> no case are the semantics the same, and indeed, most multi-protocol
> bridges or routers should fault in that case.

Huh?  The SOAP paradigm gives a single framework -- XML, headers,
intermediaries -- that provides a place to put semantically meaningful
information in the content of a message, where it is equally meaningful
irrespective of the transport layers it exploits, tunnels over, or whatever.
The use case is precisely that it is hard to map semantically important
information from one protocol-specific form to another, so putting the
semantics in the content rather than relying on "bridging" the protocols
makes a lot of sense.
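
Again, just as a sketch (the hosts, addresses, and payload below are
hypothetical), the same envelope can be framed for HTTP or for SMTP
without either transport needing to know what "getStockQuote" means:

# A sketch: one SOAP envelope framed for two different transports.
# Hosts, addresses, and the payload are hypothetical.
from email.mime.text import MIMEText

envelope = ('<env:Envelope xmlns:env='
            '"http://schemas.xmlsoap.org/soap/envelope/">'
            '<env:Body><q:getStockQuote xmlns:q="http://example.org/quotes">'
            '<q:symbol>IBM</q:symbol></q:getStockQuote></env:Body>'
            '</env:Envelope>')

# HTTP binding: the envelope is the entity body of a POST.
http_request = ("POST /quoteService HTTP/1.1\r\n"
                "Host: www.quotes-be-us.com\r\n"
                "Content-Type: text/xml; charset=utf-8\r\n"
                'SOAPAction: ""\r\n'
                "Content-Length: %d\r\n\r\n%s" % (len(envelope), envelope))

# SMTP binding: the very same envelope as the body of a mail message.
mail = MIMEText(envelope, "xml")
mail["To"] = "quotes@quotes-be-us.com"
mail["From"] = "client@example.org"
mail["Subject"] = "getStockQuote request"

print(http_request)
print(mail.as_string())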

Let's take this down a few levels of abstraction and just talk about
personal opinions on best practice.  I (personally, not speaking for
employer or WSA WG, blah blah blah) find very little to disagree with in
Paul Prescod's analysis in two articles on XML.com a year or so ago about
the Google API, the value of UDDI, etc.  If one is "on the Web" and dealing
with abstract "resources" such as "the set of all Web pages that contain
the words 'madness', 'king', and 'george'" or "the interface definitions of
the web services at http://www.example.com/impeachMadKingGeorge", then it
makes a lot of sense to GET and PUT representations of these resources much
as Fielding or Prescod recommend.  SOAP doesn't add much besides complexity
and IDE-friendliness to Google, AFAIK.
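
To put the "on the Web" case in the same terms, the whole client-side
interaction is just a GET of a representation; a sketch, reusing the
hypothetical quote URI from above:

# A sketch: the RESTful version of the interaction is just a GET on a
# URI that names the resource.  The URI is the hypothetical one from
# earlier in this thread.
from urllib.request import urlopen

with urlopen("http://www.quotes-be-us.com/ibm") as resp:
    representation = resp.read()
    print(resp.headers.get("Content-Type"), len(representation))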

On the other hand, there are plenty of "services" that are not necessarily
"on the web" but involve actual atoms out there in the real world rather
than just electrons moving from one computer to another. There are plenty of
times that you have to go across multiple protocols or invoke proprietary
code that knows nothing of URIs, representations, and HTTP.  There are
plenty of projects out there that have security, reliability, routing, etc.
requirements that overwhelm HTTP's capabilities.  Prescod, Fielding,
et al. don't really address these kinds of cases, or at least not in a way I
find persuasive.  Maybe (as Walden in particular has argued on this list)
one *could* build RESTful, idempotent interfaces to this complex, demanding,
and legacy stuff, and maybe y'all will demonstrate this and convince us all.
But, again AFAIK, such demonstrations remain hypothetical.  Real people are
finding the SOAP paradigm very powerful for bridging applications, systems,
and protocols in an efficient and useful way.  

So, REST has its uses, SOA has its uses, SOAP and WSDL have their uses.  WSA
WG is trying to clarify the architectural issues, e.g. David Orchard's work
to describe the SOA architectural style in a way that meshes with Fielding's
description of the REST architectural style.  We hope that this will result
in some defensible best practice prescriptions UNDER SPECIFIC CONDITIONS,
based on both an understanding of what works today and what should work
tomorrow if the theoretical framework is correct.  It will almost certainly
NOT result in a conclusion that either SOAP or SOA or REST or anything else
is the salve for all pain.  Those who try to argue that one of these *does*
solve the world's problems better / faster / cheaper are going to have an
uphill battle to establish credibility at this point.

So, back to "visibility."  I think we have a pretty good outline of the
costs, benefits, and tradeoffs pertaining to visibility in different
scenarios.  It's quite clear that in the typical Web environment of today,
when security / reliability requirements are not terribly onerous, the
services under consideration involve moving information representations
around, and mature firewall technology is in place, then most of what you
say about HTTP and visibility is true and desireable.  Ratchet up the
security/reliability requirements and put some complexity on the back-end,
however, and it's clear that the Web as we know it is not a great platform
for web sevices without the kind of help that the SOAP-based specs offer.
All that "visiblity" becomes a liability (as Roger has helped us understand
from the IT perspective).  The SOAP framework and the numerous standards and
proposals that work within it really do add value for the people working in
these areas.  There are costs, of course -- XML/SOAP/WS-Security-aware
firewalls are clearly going to be more expensive in dollars and resource
requirements than vanilla HTTP-aware firewalls are.  This is neither a Good
Thing nor a Bad Thing, just one more tradeoff that Web services architects
are going to have to take into consideration.  
 

Received on Saturday, 1 March 2003 19:57:16 UTC