W3C home > Mailing lists > Public > www-ws-arch@w3.org > January 2003

GET vs GetLastTradePrice...Is a unified interface bad?

From: Edwin Khodabakchian <edwink@collaxa.com>
Date: Thu, 2 Jan 2003 11:10:44 -0800
To: "'David Orchard'" <dorchard@bea.com>, "'Champion, Mike'" <Mike.Champion@softwareag-usa.com>, <www-ws-arch@w3.org>
Message-ID: <000701c2b292$a78ab550$690aa8c0@collaxa.net>

Hi Dave,

Happy New Year. I have 2 comments regarding 2 posts you made earlier:

Comment #1:
> 2.  Optimization for GET means that optimization for another
> verb, say POST, is harder.  Optimizing for POST means that 
> supporting multiple protocols is easier.  By focusing on 1 
> HTTP Verb, it means that the Web services developer can more 
> easily deploy their application to multiple protocols. For 
> example, in our workshop tools, a developer can visually 
> "tick" which protocols they want to use - HTTP SOAP/POST, 
We have the same feature in our product, but we realized through a few
deployments that protocol transparency was NOT achievable because of
exception management: different types of remote exceptions need to be
handled differently. I am wondering if you have run into the same
problem.
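To make the point concrete, here is a minimal sketch (all names and behaviors invented for illustration) of why exception handling defeats protocol transparency: a fault carried inside a well-formed SOAP response and a transport-level failure call for different recovery strategies, so a client cannot treat the bindings interchangeably.

```python
class SoapFault(Exception):
    """Application-level fault carried inside a well-formed response."""
    def __init__(self, code, detail):
        super().__init__(detail)
        self.code = code

class TransportError(Exception):
    """Connection-level failure; the request may or may not have run."""

def invoke(binding, operation):
    # Simulated invocations over two bindings, for illustration only.
    if binding == "http-soap":
        raise SoapFault("Client.InvalidSymbol", "unknown ticker")
    if binding == "jms":
        raise TransportError("broker unreachable")
    raise ValueError("unknown binding")

def get_last_trade_price(binding):
    try:
        return invoke(binding, "GetLastTradePrice")
    except SoapFault:
        # Fault from the service itself: report it; retrying the same
        # input will fail again.
        return "rejected"
    except TransportError:
        # Delivery failure: the outcome is unknown, so retry or queue
        # rather than report a service error.
        return "retry-later"

print(get_last_trade_price("http-soap"))  # rejected
print(get_last_trade_price("jms"))        # retry-later
```

The two `except` branches are the part that cannot be hidden behind a protocol-neutral facade: which branch fires depends on the binding chosen.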

Comment #2:
> If there is any argument that would convince me that GET is
> better than GetLastTradePrice, it would be the firewall 
> argument.
What about adaptability and version control? Wouldn't creating a unified
interface and limiting the number of verbs reduce the coupling between
applications? Why not let the architecture enforce loose coupling?
(Note: I am not claiming that GET, PUT, DELETE and POST are the right
verbs. I am just thinking that creating a unified interface would
constrain application development in a good way. As Mike pointed out,
this may be just a design pattern, but maybe one that would benefit
from being baked into the architecture.)
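The coupling argument can be sketched in a few lines (data and names invented for illustration). In the RPC style, every service adds a verb that clients must be coded against; in the unified style, clients know one generic verb plus a resource URI, so a new service is only a new URI, not a new client-side stub.

```python
prices = {"SUNW": 3.15}  # stand-in for the service's back end

# RPC style: a service-specific verb the client must be coded against.
def get_last_trade_price(symbol):
    return prices[symbol]

# Unified-interface style: one generic verb over a uniform resource space.
resources = {"/stocks/SUNW/lastTradePrice": lambda: prices["SUNW"]}

def GET(uri):
    return resources[uri]()

# Both styles return the same value; only the coupling differs.
assert get_last_trade_price("SUNW") == GET("/stocks/SUNW/lastTradePrice")
```

The constraint shows up in evolution: versioning the RPC interface means changing method signatures in every client, while versioning the unified interface can often be done by adding resources.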

Finally, looking back at some of the projects that our customers have
been undertaking over the last 6 months, it seems to me that one of the
biggest challenges developers face when building a distributed
service-oriented application is NOT actually composition but
DECOMPOSITION, meaning "how do I break down the problem/requirements
into a set of services?". And here again, more constraints from the
architecture might be a good thing. [Note: It is interesting to see that
this problem does not exist for the web, where pages and links are a
natural way to break down a user interface/document.]

Anyway, the group has made very good progress towards a middle ground
with support for URIs and GET; maybe a unified interface could be the
next thing to evaluate, both for adaptability, for helping developers
with decomposition, and maybe for the firewall issue.

One final note: a unified interface does NOT necessarily mean "no WSDL",
because there is still a requirement and need to get meta-information
about the XML types/schemas consumed and produced by services, as well
as header information (security, asynchrony, RM, transaction, etc...).
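A rough sketch of what that meta-information might look like (the structure below is invented for illustration; it is not a real WSDL model): even with a single verb, clients still need a machine-readable description of what each resource produces and which header contracts apply.

```python
# Hypothetical per-resource description a WSDL-like document would carry,
# even under a unified interface: one verb, but typed results and header
# requirements still need describing.
service_description = {
    "/stocks/{symbol}/lastTradePrice": {
        "verb": "GET",
        "produces": {"type": "xsd:float"},       # schema of the result
        "headers": ["security", "transaction"],  # header contracts
    },
}

def describe(uri_template):
    return service_description[uri_template]

meta = describe("/stocks/{symbol}/lastTradePrice")
print(meta["produces"]["type"])  # xsd:float
```

The point is only that the description burden moves from "which method do I call?" to "what does this resource consume/produce?"; it does not disappear.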



> -----Original Message-----
> From: www-ws-arch-request@w3.org 
> [mailto:www-ws-arch-request@w3.org] On Behalf Of David Orchard
> Sent: Thursday, January 02, 2003 9:38 AM
> To: 'Champion, Mike'; www-ws-arch@w3.org
> Subject: RE: Issue 5; GET vs GetLastTradePrice
> Mike,
> Interesting POV.  I offer my limited analysis.  The web is 
> optimized for ad-hoc retrieval of resources, specifically 
> around using GET.  It's the default method given only a URI.  
> Therefore people can easily post (:-) them on billboards, 
> send via email etc.
> In the web context, it makes huge sense to be able to give 
> around URLs.  And the default method of GET is completely 
> required for this success.  I want to re-state my position, 
> which is that the web was successful because of the 
> combination of URIs and the default use of GET.  And when 
> people use POST on web sites, they typically use it 
> interchangeably with GET.  I've proven this point in the 
> past.  Therefore the web architecture may be designed around 
> REST, but in actuality it's almost 100% deployed in a simple 
> one method invoke style, and this is the case that the web is 
> brilliantly optimized for.  Almost nobody deploys 
> GET/PUT/POST/DELETE methods on the same URI. 99.9% of sites 
> treat POST/GET as interchangeable.
> The question from a Web services perspective, is does this 
> same optimization make sense?  Do we have programs that 
> already have prebuilt "method"s, or even a default method, 
> and we simply have to punch in new addresses?  I don't think so.
> IMO, ad-hoc retrieval of representations doesn't make as much 
> sense in a machine to machine world, as compared to a 
> hypermedia world.
> The architectural middle ground that I believe we are moving 
> towards is a model that is optimized for program creation - 
> hence the necessity of wsdl - and allowing for ad-hoc 
> invocation - hence the support for GET.  The optimization is 
> different, but the features are supported.  I also believe 
> that the default method for web services will be the POST method.
> One of the areas that we (Web services) still have work to do 
> is in figuring out how to deploy Web services across trust 
> boundaries.  The use of tcp ports and port firewalls made 
> sense when it was really hard to create and deploy 
> applications.  Think of how long it took to get the protocols 
> over the various internet ports figured out.  But now we want 
> to scale this for gajillions of applications while wanting 
> the firewall admin to actually do a reasonable job at setting 
> up the right policy.
> To me, XML is the thing that breaks the previous model of 
> dealing with internet level application deployment.  10 years 
> ago, it was fairly hard to create document formats and 
> protocols.  XML made it orders of magnitude easier to create 
> formats.  SOAP makes it an order of magnitude easier to 
> create .... XML based protocols.  People that focus on the 
> use of URIs and GET as the focus on app development, IMO tend 
> to ignore the whole centralized registry of ports and 
> description of the HTTP protocol as well as the commensurate 
> programming that the HTTP developers had to do.
> If there is any argument that would convince me that GET is 
> better than GetLastTradePrice, it would be the firewall 
> argument.  But having spent a few times discussing the 
> realities of deployment of the various options with MarkB, I 
> remain unconvinced that the firewall admin has any greater 
> insight or control into application of security policy under 
> one or the other choice.
> Which makes me believe that we still have a hole in the web 
> services architecture on what needs to be in soap messages 
> and wsdl definitions in order to allow a firewall admin to do 
> their job better.  SOAPAction was one attempt at this.  
> WS-Routing Action was another.  There are various other 
> approaches.  But I don't think we've taken a very systematic 
> and thorough approach at looking at the use case of a web 
> service being developed and deployed by developers and 
> security people.
> Cheers,
> Dave
> > -----Original Message-----
> > From: www-ws-arch-request@w3.org 
> [mailto:www-ws-arch-request@w3.org]On
> > Behalf Of Champion, Mike
> > Sent: Thursday, January 02, 2003 8:41 AM
> > To: www-ws-arch@w3.org
> > Subject: RE: Issue 5; GET vs GetLastTradePrice
> >
> > > -----Original Message-----
> > > From: Walden Mathews [mailto:waldenm@optonline.net]
> > > Sent: Thursday, January 02, 2003 10:37 AM
> > > To: Newcomer, Eric
> > > Cc: www-ws-arch@w3.org
> > > Subject: Re: Issue 5; GET vs GetLastTradePrice
> > >
> > > I'd like to get clearer on what that middle ground is.  
> Last summer 
> > > I got involved in a project that had already decided to 
> use XML in a 
> > > "document" mode as opposed to a "RPC" mode, but the
> > > distinction was only skin deep, at least according to my
> > > analysis.
> >
> > I sometimes suspect that too.  As someone said in one of 
> these recent 
> > threads, it would be hard to distinguish an instance of one 
> from the 
> > other using a protocol analyzer. The distinctions do seem 
> more at the 
> > design pattern level-- do you CONCEIVE of the message as a method
> > invocation with
> > arguments that gets directly mapped onto a procedure call of
> > some sort, or
> > do you conceive of it as a business document to be acted on by some
> > intermediate software that interprets the data and indirectly
> > invokes the
> > back-end software.
> >
> > I'm  seeing this more and more as an engineering question 
> of finding 
> > the optimal degree of coupling in a particular web application
> > than as some huge
> > meta-question of competing paradigms. The tighter coupling of
> > direct mapping
> > of method invocations (synchronous or asynchronous) on one
> > system to another
> > via SOAP messages makes sense in stable, well-managed systems where
> > performance considerations are paramount; the looser coupling
> > of business
> document exchange makes sense in more dynamic, haphazardly
> > managed systems
> > where ease of discovery by new "customers" is more important than
> > performance for existing customers.
> >
> >
Received on Thursday, 2 January 2003 14:11:37 UTC

This archive was generated by hypermail 2.3.1 : Tuesday, 6 January 2015 21:41:01 UTC