
Re: whenToUseGet-7 Why call it WEB Serivces? (was: RE: FW: draft findings on Unsafe Methods (whenToUseGet-7))

From: <noah_mendelsohn@us.ibm.com>
Date: Tue, 7 May 2002 14:36:38 -0400
To: "Roy T. Fielding" <fielding@apache.org>
Cc: www-tag@w3.org
Message-ID: <OF16B64DF6.76860C58-ON85256BB2.005E5774@lotus.com>
Roy: 

This message has been in my "inbox" for some time, and now I finally 
have the time to respond.  There is a great deal in it with which I 
agree, and I very much appreciate the constructive tone.  Rather than 
intersperse my comments, let me briefly summarize what I believe you're 
saying (always a useful way of uncovering misunderstandings) and give 
my responses.  Most of the following are paraphrases of what I take to 
be your positions. 

Roy:  The most fundamental point is that Web resources MUST be identified 
by and accessible using URIs.
Noah:  Agreed.

Roy:  SOAP is perceived as not supporting or properly encouraging this, 
and the canonical RPC examples flat out violate the rule.
Noah:  Agreed on the RPC, as I've previously acknowledged.  This is worth 
a serious effort to fix, and we've seen several proposals.  None change 
any of the core SOAP mechanisms.  Use of GET would require an enhancement 
to the HTTP binding; just using URIs to identify resources would require 
no changes to what's been drafted, IMO.  Note also that RPC is optional 
in SOAP.  I believe that core SOAP is just fine in its use of URIs to 
identify and access resources.  What's needed, in addition to fixing RPC, 
is:  (1) educating those who build applications, and especially app-dev 
tools, to use URIs properly, including for dynamically created resources, 
and (2) making sure that the description mechanisms, such as WSDL, can 
appropriately describe such uses of URIs. 
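
The URI point can be made concrete with a small sketch (all endpoint 
names and addresses here are invented for illustration, not taken from 
any SOAP draft or proposal):

```python
# Hypothetical sketch: contrasting an RPC-style request, which hides
# the real target inside the envelope, with a URI-style request, in
# which the Request-URI itself names the resource.

def rpc_style(symbol):
    """Anti-pattern: one generic endpoint for everything; the actual
    resource (the quote for `symbol`) is buried in the SOAP body."""
    uri = "http://example.com/soap-gateway"   # same URI for every call
    body = "<getQuote><symbol>%s</symbol></getQuote>" % symbol
    return uri, body

def uri_style(symbol):
    """Web-friendly: each resource gets its own URI, so links, caches,
    and logs can all refer to it without understanding SOAP."""
    uri = "http://example.com/quotes/%s" % symbol
    return uri, ""          # a plain GET needs no envelope

if __name__ == "__main__":
    print(rpc_style("IBM"))
    print(uri_style("IBM"))
```

Nothing in the second form requires changing SOAP itself; it only 
requires the application (and its tools) to mint one URI per resource.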

Roy:  That said, you can't force people to do the right thing, only 
encourage and make it practical.
Noah: Agreed.  Also, there will be some practical limits.   In the end, 
certain sub-addressing will be done in application-specific ways, as is 
common with web forms today.

Roy:  Responding to GET isn't a religion.  It's a practical way of (a) 
making the resource useful to consumers that have only general knowledge 
of the resource's existence, which is key to the spirit of the web, and 
(b) even when the access is known to be through SOAP, taking advantage 
of the considerable deployed infrastructure in the network that knows 
how to optimize GET.
Noah:  Agreed, if I've understood you right, but the other side of the 
coin needs some emphasis too:  I suspect that in web services scenarios 
there will be many resources, typically dynamically created ones with 
short lifetimes in the middle of transactions, which are of interest 
primarily to a small number of consumers that have specialized knowledge 
anyway.  Many such resources will be protected by encryption, 
authentication, or audit-trail logging, which will limit the ability to 
do "safe" access anyway.  GET adds limited value to these, and I think 
they will be more common in Web services scenarios than has been 
traditional with the relatively long-lived resources on the Web.  As 
you imply, GET isn't a religion; it's a tool.


Roy:  There is significant added value if SOAP HTTP resources will 
respond to non-SOAP requests, particularly GET.
Noah:  Agreed for resources that are long-lived, and likely to be 
accessible to more than one application anyway.  That will be true of many 
SOAP resources, and I think we've done nothing to preclude it.   As noted 
above, I think that there will be many other resources in web services 
scenarios for which such access will be rarely if ever of value.  I'd 
leave it to application designers (and tool vendors) to decide which ones 
make the cut. 

Roy:  Fooling firewalls that don't know about SOAP is (usually) the 
wrong reason to use HTTP for SOAP.
Noah:  Agreed.  I've never supported that "fool-the-firewall" position. 
Firewalls are there precisely to know what is going on, so they can 
filter appropriately.  That said, I think that SOAP can be deployed in 
many ways, and for many purposes.  Among those is as a somewhat more 
structured replacement for mechanisms like CGI and Servlets.  With those 
existing mechanisms, you'd be nuts not to have good coordination between 
the configuration of your firewalls and the capabilities that you expose 
through your CGI/Servlets.  Therefore, I don't think that sending SOAP 
traffic through port 80 is in all cases a mistake.  In any case, the 
HTTP binding allows you to use any port that you like, and I believe we 
have a health warning in the security section as well.

Roy: SOAP should use the mechanisms of HTTP as they are intended to be 
used.  You list a number of specifics such as cache control, etc.
Noah:  Agreed, at least within practical limits.  As Mark Baker has 
mentioned several times, the working group has gone to significant 
lengths to try to get the details right.  Just as an example, there has 
been a significant effort to use HTTP status codes that are appropriate. 
On the other hand, there are situations in which a SOAP node is known to 
be talking to another SOAP node.  In these cases, I don't think it's 
necessary to use every HTTP feature, just not to misuse any.  So, for 
example, if we know that a SOAP application must never fail in a certain 
way, then we can decline to ever send the corresponding HTTP status 
code.  If a receiver receives that code, it can infer that it is either 
talking to a buggy SOAP implementation or that the message was somehow 
rewritten in transit. 
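
A rough sketch of that idea (the outcome names and the particular 
400/500 choices below are invented for illustration; they come from no 
SOAP draft):

```python
# Hypothetical sketch: a binding that sends only the HTTP status codes
# its application can legitimately produce, so an unexpected code is
# evidence of a buggy peer or an in-transit rewrite.

STATUS_FOR_OUTCOME = {
    "ok":             200,  # successful response envelope
    "sender-fault":   400,  # the request itself was bad
    "receiver-fault": 500,  # the service failed while processing
}

def status_for(outcome):
    """Return the HTTP status code for a known processing outcome."""
    return STATUS_FOR_OUTCOME[outcome]

def code_is_expected(code):
    """A receiver can treat any code outside the agreed set as a sign
    of a buggy implementation or a rewritten message."""
    return code in STATUS_FOR_OUTCOME.values()

if __name__ == "__main__":
    print(status_for("sender-fault"))   # 400
    print(code_is_expected(302))        # False: outside the agreed set
```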

Roy:  SOAP shouldn't be made a Recommendation until the HTTP details are 
right.
Noah:  I think we need to separate GET from the rest of the discussion. 
The core, non-RPC parts of SOAP are already OK in their use of HTTP, or 
else very close, IMO.  Tim Bray and I have each proposed similar 
approaches to the RPC problem, neither of which involves changes to the 
SOAP drafts.  If we want to support GET, then the HTTP binding needs to 
be enhanced.  I am in favor of doing this, and I have some optimism that 
it could be spec'd out quickly.  Here too, we've seen some proposals, 
including one from me.  That said, I share David Orchard's opinion that 
what we have is in shape to go out if there turn out to be delays or 
disagreements in settling on a GET direction.  The HTTP binding is 
layered from SOAP in somewhat the way that a device driver is layered 
from Unix.  Particularly if we signal to the community that a GET 
binding is coming, I think we can roll out what we have and support GET 
as soon as the details are in place.  I would also encourage the WSDL 
group to think through REST and GET issues.

Again, many thanks for your patience with this issue.  If you're in 
Hawaii, I look forward to seeing you there.


------------------------------------------------------------------
Noah Mendelsohn                              Voice: 1-617-693-4036
IBM Corporation                                Fax: 1-617-693-8676
One Rogers Street
Cambridge, MA 02142
------------------------------------------------------------------







"Roy T. Fielding" <fielding@apache.org>
04/26/2002 08:25 PM

 
        To:     noah_mendelsohn@us.ibm.com
        cc:     www-tag@w3.org
        Subject:        Re: whenToUseGet-7 Why call it WEB Serivces? (was: RE: FW: draft findings 
on  Unsafe Methods (whenToUseGet-7))


> Maybe, as some believe, REST will be fundamental to achieving ad hoc
> application interconnectivity.  Maybe REST will just be a piece of the
> puzzle -- or maybe Web services will achieve its universal
> interconnectivity using different conventions such as UDDI and WSDL.
> Regardless, I think there is a clear sense in which the term "Web" is
> appropriate.  BTW, and I know this is controversial:  I prefer to view
> REST as a means of achieving the Web's goals, not as a defining
> characteristic of the web.

I don't like the way that REST is sometimes advocated, mostly because
I hate it when people use the terminology that I created to explain
this stuff as some sort of mandate for a particular architecture.
The first three chapters of my dissertation clearly indicate why there
is no such thing as a universal, best-fit architecture.

REST is an architectural style that models system behavior for
network-based applications.  When an application on the Web is
implemented according to that style, it inherits the interconnectivity
characteristics already present in the Web.  REST's purpose is
to describe the characteristics of the Web such that they can be
used to maximum advantage -- it does not need to define them.
REST isn't supposed to be a baseball bat; it is supposed to be a
guide to understanding the design trade-offs, why they were made,
and what properties we lose when an implementation violates them.

If "Web Services" truly used URI to identify services, as in allowing
no other identifier for the service to exist within the envelope
outside of the target URI, and it properly reflected the semantics
of responses to that service within whatever application-layer protocol
it uses as its delivery binding, then it wouldn't be a danger to the
other systems that already use those application protocols correctly.
That doesn't mean it would be great, but then at least it wouldn't
actively cause harm.

This bears repeating: The difference between an application-level
protocol and a transport-level protocol is that an application-level
protocol includes application semantics, by standard agreement, within
the messages that the protocol transfers.  That is why HTTP is called
a Transfer protocol.  It is impossible to conform to an
application-level protocol without also conforming faithfully to
its message semantics.

This DOES NOT mean that I expect SOAP to use GET when accessing services.
I have never once said that this was a requirement for the HTTP binding.
POST is every bit as much a part of HTTP (and REST) as GET.  It is
necessary for a resource to be able to respond to GET appropriately
in order for it to take advantage of the interconnectivity inherent
in the Web, but it is not necessary for all such services to be
interconnected with the rest of the Web.  It would be nice and beneficial
for such services to be so, but it isn't necessary for the health of
the rest of the Web.

What is necessary for the HTTP binding of SOAP is that the Request-URI
used in a message identify the resource being accessed (which means
either the service, if that is the only resource, or data within the
service if it is acting as a namespace for resources).  The Request-URI
must not simply identify a generic HTTP-SOAP gateway mechanism that
does dispatching of the message contents according to some hidden
identifiers within the SOAP envelope, because doing so introduces
too many security holes.  Furthermore, it is necessary that the
messages be stateless (carry all of the semantics for each request
within the request message) and that when a failure or redirection
occurs, the appropriate HTTP response code be given in the HTTP
envelope, and also the appropriate cache-control mechanisms be
included -- not at random, or by fiat of some clueless gateway
interface, but by actually inspecting the SOAP response for the
corresponding semantics and mapping those back out to the HTTP binding.
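
One way to sketch that mapping (the field names and header choices 
below are invented for illustration; only the principle, deriving the 
HTTP envelope from the actual response semantics, is Roy's):

```python
# Hypothetical sketch: the binding inspects the SOAP response and
# derives the HTTP status, Location, and Cache-Control headers from
# its semantics, rather than by fiat of a gateway.

def http_envelope(soap_response):
    """Map SOAP response semantics back out to the HTTP envelope."""
    headers = {}
    if "fault" in soap_response:
        # Sender faults are client errors; the rest are server errors.
        status = 400 if soap_response["fault"] == "Sender" else 500
        headers["Cache-Control"] = "no-store"   # never cache failures
    elif "redirect_to" in soap_response:
        status = 303                            # see the new location
        headers["Location"] = soap_response["redirect_to"]
    else:
        status = 200
        ttl = soap_response.get("valid_for_seconds")
        # Cacheability comes from the response itself.
        headers["Cache-Control"] = ("max-age=%d" % ttl) if ttl else "no-cache"
    return status, headers

if __name__ == "__main__":
    print(http_envelope({"fault": "Sender"}))
    print(http_envelope({"valid_for_seconds": 60}))
```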

Until it does that, the SOAP HTTP binding should not be issued by the
W3C as a Recommendation.  Using HTTP for the sole purpose of tunneling
other protocols through firewalls must be explicitly forbidden by
standards bodies, even if Joe Hacker may do it on a regular basis.
Liability for that kind of service rests with the software developers.
If anyone doesn't like that, then they can bloody well use a raw
TCP port 80 tunnel instead of HTTP -- they gain nothing from HTTP's
overhead if they do not obey its constraints.

As a *separate* issue, Web Services cannot be considered an equal
part of the Web until they allow their own resources to become
interconnected with the rest of the Web.  That means using URI to
identify all important resources, even if those URI are only used
via non-SOAP mechanisms (such as a parallel HTTP GET interface) after
the SOAPified processing is complete.  The reason to do so is solely
for the benefit of those Web Services; so that they can participate
in the unforeseen ways in which the Web allows resources to be shared.
That is the principle of the Web architecture that I absolutely refuse
to water-down for the sake of any technology.  However, it is also a
principle that SOAP, just like HTTP, can only encourage -- it is a
byproduct of good information design, rather than a mandate of the
protocol.  Mind you, it does require SOAP to be able to do things
like allowing the SOAP sender to tell the SOAP receiver about the URI.
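
A minimal sketch of that last point (all names and URIs invented): the 
service mints a URI for a resource it creates during SOAP processing 
and returns it in the envelope, so the same resource is later reachable 
through a parallel plain-GET interface.

```python
# Hypothetical sketch: SOAPified processing creates a resource, hands
# its URI back to the sender, and a non-SOAP GET interface serves the
# identical resource afterward.

RESOURCES = {}   # stands in for the service's resource store

def soap_create(order_id, payload):
    """Create the resource, then tell the sender its URI inside the
    response envelope."""
    uri = "http://example.com/orders/%s" % order_id
    RESOURCES[uri] = payload
    return '<orderCreated href="%s"/>' % uri

def plain_get(uri):
    """Parallel non-SOAP interface: the same URI answers a plain GET."""
    return RESOURCES[uri]

if __name__ == "__main__":
    envelope = soap_create("42", "three widgets")
    print(envelope)
    print(plain_get("http://example.com/orders/42"))
```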


Cheers,

Roy T. Fielding, Chairman, The Apache Software Foundation
                  (fielding@apache.org)  <http://www.apache.org/>

                  Chief Scientist, Day Software
                  2 Corporate Plaza, Suite 150   tel:+1.949.644.2557 x102
                  Newport Beach, CA 92660-7929   fax:+1.949.644.5064
                  (roy.fielding@day.com) <http://www.day.com/>
Received on Tuesday, 7 May 2002 14:53:27 UTC
