Re: whenToUseGet-7 Why call it WEB Services? (was: RE: FW: draft findings on Unsafe Methods (whenToUseGet-7))

> Maybe, as some believe, REST will be fundamental to achieving ad hoc
> application interconnectivity.  Maybe REST will just be a piece of the
> puzzle -- or maybe Web services will achieve its universal
> interconnectivity using different conventions such as UDDI and WSDL.
> Regardless, I think there is a clear sense in which the term "Web" is
> appropriate.  BTW, and I know this is controversial:  I prefer to view
> REST as a means of achieving the Web's goals, not as a defining
> characteristic of the web.

I don't like the way that REST is sometimes advocated, mostly because
I hate it when people use the terminology that I created to explain
this stuff as some sort of mandate for a particular architecture.
The first three chapters of my dissertation clearly indicate why there
is no such thing as a universal, best-fit architecture.

REST is an architectural style that models system behavior for
network-based applications.  When an application on the Web is
implemented according to that style, it inherits the interconnectivity
characteristics already present in the Web.  REST's purpose is
to describe the characteristics of the Web such that they can be
used to maximum advantage -- it does not need to define them.
REST isn't supposed to be a baseball bat; it is supposed to be a
guide to understanding the design trade-offs, why they were made,
and what properties we lose when an implementation violates them.

If "Web Services" truly used URI to identify services, as in allowing
no other identifier for the service to exist within the envelope
outside of the target URI, and it properly reflected the semantics
of responses to that service within whatever application-layer protocol
it uses as its delivery binding, then it wouldn't be a danger to the
other systems that already use those application protocols correctly.
That doesn't mean it would be great, but then at least it wouldn't
actively cause harm.
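The distinction between those two styles of identification can be sketched in a few lines of Python (the function names, handler table, and URIs are my hypothetical illustration of the two dispatch styles, not any real toolkit's API):

```python
def dispatch_by_request_uri(request_uri, handlers):
    """URI-identified style: the Request-URI alone names the resource,
    so firewalls, access logs, and caches can all see what is being
    accessed without parsing the payload."""
    return handlers.get(request_uri)

def dispatch_by_envelope(envelope, handlers):
    """Generic-gateway style: a single endpoint receives everything and
    the real target hides inside the envelope, invisible to anything
    that speaks only HTTP. This is the pattern criticized above."""
    return handlers.get(envelope["service"])
```

In the first style, an administrator can grant or deny access to `/stock/IBM` with ordinary HTTP-level tools; in the second, every message looks identical from the outside, which is why the gateway style punches holes in security mechanisms that trust the Request-URI.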

This bears repeating: The difference between an application-level
protocol and a transport-level protocol is that an application-level
protocol includes application semantics, by standard agreement, within
the messages that the protocol transfers.  That is why HTTP is called
a Transfer protocol.  It is impossible to conform to an
application-level protocol without also conforming faithfully to
its message semantics.
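To make that concrete: because HTTP methods carry semantics by standard agreement, any party can read those semantics straight out of the message. A minimal sketch (the function and the example request lines are mine, not from the original discussion):

```python
# Methods whose semantics HTTP itself defines as safe (no side effects).
SAFE_METHODS = {"GET", "HEAD"}

def is_safe(request_line: str) -> bool:
    """Decide, from the message alone, whether a request is safe.

    This works only because HTTP is an application-level protocol:
    the method name carries agreed-upon semantics, so caches,
    proxies, and crawlers need no out-of-band knowledge.
    """
    method = request_line.split()[0]
    return method in SAFE_METHODS
```

A cache may store and replay responses to safe requests. A protocol that hides its real semantics inside an opaque POST body denies intermediaries exactly this knowledge, which is what makes it tunneling rather than conformance.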

This DOES NOT mean that I expect SOAP to use GET when accessing services.
I have never once said that this was a requirement for the HTTP binding.
POST is every bit as much a part of HTTP (and REST) as GET.  It is
necessary for a resource to be able to respond to GET appropriately
in order for it to take advantage of the interconnectivity inherent
in the Web, but it is not necessary for all such services to be
interconnected with the rest of the Web.  It would be nice and beneficial
for such services to be so, but it isn't necessary for the health of
the rest of the Web.

What is necessary for the HTTP binding of SOAP is that the Request-URI
used in a message identify the resource being accessed (which means
either the service, if that is the only resource, or data within the
service if it is acting as a namespace for resources).  The Request-URI
must not simply identify a generic HTTP-SOAP gateway mechanism that
does dispatching of the message contents according to some hidden
identifiers within the SOAP envelope, because doing so introduces
too many security holes.  Furthermore, it is necessary that the
messages be stateless (carry all of the semantics for each request
within the request message) and that when a failure or redirection
occurs, the appropriate HTTP response code be given in the HTTP
envelope, and also the appropriate cache-control mechanisms be
included -- not at random, or by fiat of some clueless gateway
interface, but by actually inspecting the SOAP response for the
corresponding semantics and mapping those back out to the HTTP binding.
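A sketch of what "mapping those back out" might look like, assuming SOAP 1.1 envelopes (the function name and the Client-to-4xx / Server-to-5xx mapping are my illustration, not a normative binding):

```python
import xml.etree.ElementTree as ET

SOAP_ENV = "http://schemas.xmlsoap.org/soap/envelope/"

def http_status_for(soap_response: str) -> int:
    """Choose the HTTP status code by inspecting the SOAP envelope,
    rather than returning 200 unconditionally as a naive gateway would.
    """
    root = ET.fromstring(soap_response)
    # In SOAP 1.1 the Fault element is namespace-qualified, while its
    # faultcode child is not.
    fault = root.find(f".//{{{SOAP_ENV}}}Fault")
    if fault is None:
        return 200
    code_el = fault.find("faultcode")
    code = code_el.text or "" if code_el is not None else ""
    # Illustrative mapping: sender-side faults become 4xx,
    # receiver-side faults become 5xx.
    if code.endswith("Client"):
        return 400
    return 500
```

A real binding would inspect the response the same way to decide the cache-control headers, rather than emitting them by gateway fiat.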

Until it does that, the SOAP HTTP binding should not be issued by the
W3C as a Recommendation.  Using HTTP for the sole purpose of tunneling
other protocols through firewalls must be explicitly forbidden by
standards bodies, even if Joe Hacker may do it on a regular basis.
Liability for that kind of service rests with the software developers.
If anyone doesn't like that, then they can bloody well use a raw
TCP port 80 tunnel instead of HTTP -- they gain nothing from HTTP's
overhead if they do not obey its constraints.

As a *separate* issue, Web Services cannot be considered an equal
part of the Web until they allow their own resources to become
interconnected with the rest of the Web.  That means using URI to
identify all important resources, even if those URI are only used
via non-SOAP mechanisms (such as a parallel HTTP GET interface) after
the SOAPified processing is complete.  The reason to do so is solely
for the benefit of those Web Services; so that they can participate
in the unforeseen ways in which the Web allows resources to be shared.
That is the principle of the Web architecture that I absolutely refuse
to water down for the sake of any technology.  However, it is also a
principle that SOAP, just like HTTP, can only encourage -- it is a
byproduct of good information design, rather than a mandate of the
protocol.  Mind you, it does require SOAP to be able to do things
like allowing the SOAP sender to tell the SOAP receiver about the URI.
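One way to picture that (everything here, the class, the base URI, and the in-memory store, is a hypothetical sketch): the SOAP side mints a URI for each result and can report it back to the sender, while a parallel plain-GET interface serves the same resource to the rest of the Web.

```python
class Service:
    """Sketch of a service whose resources stay Web-addressable."""

    def __init__(self, base="http://example.org/results/"):
        self.base = base
        self.store = {}
        self.counter = 0

    def soap_process(self, payload):
        """Handle a SOAP request and mint a URI for the result.
        The SOAP response can then tell the sender that URI."""
        self.counter += 1
        uri = f"{self.base}{self.counter}"
        self.store[uri] = payload.upper()   # stand-in for real work
        return uri

    def http_get(self, uri):
        """Parallel plain-GET interface: anyone on the Web can link to
        and retrieve the resource, no SOAP toolkit required."""
        return self.store.get(uri)
```

The point is not the storage mechanics but the information design: once the result has a URI, it can be bookmarked, linked, cached, and shared in ways the service's designers never anticipated.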


Cheers,

Roy T. Fielding, Chairman, The Apache Software Foundation
                  (fielding@apache.org)  <http://www.apache.org/>

                  Chief Scientist, Day Software
                  2 Corporate Plaza, Suite 150   tel:+1.949.644.2557 x102
                  Newport Beach, CA 92660-7929   fax:+1.949.644.5064
                  (roy.fielding@day.com) <http://www.day.com/>

Received on Friday, 26 April 2002 20:24:57 UTC