RE: section 1, intro, for review

I think I got it, or pretty close at any rate.  But I beg to differ with
some of your assertions.

IMO, the web worked with REST really well because we had these
browsers that understood GET and POST.  While REST specified PUT,
DELETE, and other standard well-defined interfaces, most browsers,
servers, and web sites worked with GET and POST.  Further, authors
have blatantly mixed GET/POST semantics throughout the web.  Most web
authors choose GET or POST depending upon what their companies' "best
practices" say, and those might say something like "use GET when you
want to bookmark something" or "use GET if you don't have too much
data" or "use POST for security reasons" or some other rules.  I did
a few quick searches and found some interesting best practices
[1]...[5].  But I couldn't find any messages that distinguished
between GET and POST from a REST perspective, outside of the W3C
style page and mailing lists with the expected suspects.  Every
discussion of GET/POST usage was around the authoring of FORMS.  I
couldn't find a single reference about implementation on the server
that discussed when servlets/JSPs/ASP/PHP/Perl should use GET versus
POST -- just the client interaction issues.
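
(For illustration, here is a minimal sketch of the server-side
distinction I couldn't find written down -- GET handled as a safe,
repeatable read, POST as a state-changing update.  The class and
parameter names are invented for the example.)

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class AccountServlet extends HttpServlet {
    // GET: safe and idempotent.  Repeating it changes nothing, so
    // caches and other intermediaries can replay or cache it freely.
    protected void doGet(HttpServletRequest req,
                         HttpServletResponse resp)
            throws ServletException, IOException {
        String id = req.getParameter("id");
        resp.setContentType("text/html");
        resp.getWriter().println("<p>Balance for account " + id + "</p>");
    }

    // POST: an update.  Intermediaries must pass it through to the
    // origin server, since replaying it would repeat the change.
    protected void doPost(HttpServletRequest req,
                          HttpServletResponse resp)
            throws ServletException, IOException {
        String id = req.getParameter("id");
        String amount = req.getParameter("amount");
        // ... record the deposit of "amount" against the account ...
        resp.setStatus(HttpServletResponse.SC_NO_CONTENT);
    }
}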

To a certain extent, the web "got away with" having a weakly typed
interface because it embedded the names and values in either GET
parameters or POST parameters.  Very conveniently, if the form that
needed the parameters ever changed, well, the user just got a new
form.  So the client generally always had a way of refetching the
interface.  But that doesn't work for machine-to-machine
communication.  Machines don't adjust very well to changes in an
interface, so at the least we want our software to make it easier for
humans to recover.
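
(To make that concrete, here is a sketch of a machine client -- the
URL and field names are made up -- with the "form" baked in at build
time.  A human with a browser would simply be shown the new form;
this program just starts failing.)

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class OrderClient {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint; nothing here refetches the interface.
        URL url = new URL("http://example.com/order");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type",
                "application/x-www-form-urlencoded");
        // Parameter names are hard-coded.  If the server renames
        // "qty" to "quantity", this client breaks with no way to
        // discover the change.
        String body = "item=1234&qty=2";
        OutputStream out = conn.getOutputStream();
        out.write(body.getBytes("US-ASCII"));
        out.close();
        System.out.println("HTTP " + conn.getResponseCode());
    }
}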

Further, the types of documents retrieved on the web were generally
not extensible -- typically well-defined document types such as HTML,
GIF, and JPEG.  XML changes the playing field.  I argue that the
ability for clients and servers to have arbitrary documents flow
between them means that they will rightfully want to embed more
complex control in those documents.
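
(For instance -- this request is invented for illustration -- a
SOAP-style message tunnels its real action inside the document, where
an intermediary that only sees "POST" can't act on it:)

POST /bank HTTP/1.1
Host: example.com
Content-Type: text/xml
SOAPAction: "urn:example:transferFunds"

<env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/">
  <env:Body>
    <m:transferFunds xmlns:m="urn:example:bank">
      <m:from>A-100</m:from>
      <m:to>B-200</m:to>
      <m:amount>50.00</m:amount>
    </m:transferFunds>
  </env:Body>
</env:Envelope>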

I'm not sure the REST approach, with its clean separation of
GET/POST/PUT/DELETE, was really THE deciding factor in the success of
the web.  It seems to me that people mix and match GET and POST based
upon various requirements that have little to do with idempotency
versus updates or REST ideals.  And the choice is driven by how a
human would interact, given the browser/server interaction
characteristics.  The uniformity of the interface was HTTP GET/POST
with HTML and URIs.  Intermediaries could deal with the information
streams precisely because they were information streams.  Arguing
that the success of the web was due to REST, with GET/POST/DELETE/PUT
interfaces, and that departing from these is a complete and shameless
violation of the "web", is stretching and revising history.


Cheers,
Dave

[1] http://www.its.monash.edu.au/web/slideshows/security/slide15-0.html
[2] http://www.htmlhelp.com/faq/cgifaq.2.html#9
[3] http://www.brinkster.com/Articles/HTML/GetOrPost.asp
[4] http://archive.ncsa.uiuc.edu/SDG/Software/Mosaic/Docs/fill-out-forms/overview.html
[5] http://html.about.com/library/weekly/aa072699.htm


> -----Original Message-----
> From: Roy T. Fielding [mailto:fielding@apache.org]
> Sent: Wednesday, March 20, 2002 4:13 PM
> To: David Orchard
> Cc: www-tag@w3.org
> Subject: Re: section 1, intro, for review
>
>
> On Tue, Mar 19, 2002 at 10:12:52PM -0800, David Orchard wrote:
> > Roy,
> >
> > I'd like to understand your rant a bit more.  This shouldn't be
> > interpreted as agreement, just me seeking understanding.  I think
> > you are saying that if people want to create object-specific
> > interfaces using URIs, XML, HTTP, then they shouldn't call it
> > anything to do with the web.  More like "XML Internet Services" or
> > something like that.  That the notion of a shared information
> > space with well-defined interfaces is core to the web.  Not usage
> > of URIs, HTTP, Markup.  Those are helpful and interesting and good
> > practice and .... but not core to the web.
>
> It is also core to HTTP.  SOAP is not HTTP compliant because it
> ships actions with the content that contradict the application
> semantics described in the control data of an HTTP message.  That
> breaks intermediaries.
>
> SOAP over something like BEEP does not suffer from that problem and
> I would call that an XML service on the Internet.
>
> > Further, attempts to put object interfaces onto the web - like
> > CORBA/DCOM/RMI - failed because they didn't use well-defined
> > interfaces.
>
> You mean object interfaces on the Internet, right?
>
> No, they had very well defined interfaces.  Exceptionally well.
> Defined so well that they made an application exceedingly fragile to
> version drift and differences between ORBs.  Hence, they did not
> survive multiple organizational boundaries when deployed as
> application infrastructure.
>
> > Their "failure" has nothing to do with complexity,
>
> Complexity was due to object-specific interfaces.
>
> > lack of implementations,
>
> They far outnumbered Web implementations at the time.
>
> > too early to market, binary formats, bootstrap problems, no buy-in
> > across a big enough community or other issues.  I think that this
> > implies that if CORBA/DCOM/RMI had used HTTP PUT/POST/DELETE/GET
> > in a RESTful style, they would have had a much better chance of
> > success.  You said this was the lesson to learn from their
> > failures.
>
> I think so, yes.  The money placed on CORBA and DCOM, separately,
> dwarfs that spent on the Web.  But CORBA/DCOM/RMI are all
> distributed object architectures, so it wouldn't have made any sense
> for them to be REST-like.  REST doesn't use strong typing and
> focuses on data streams rather than parameter values.  They are
> different beasts.  The point is that we lose the properties that
> make the Web work when we introduce strong typing, object-specific
> interfaces, etc.
>
> > Expressed a different way, the web succeeded because it was
> > loosely coupled.  The use of well-defined interfaces is the
> > essential element in this loose coupling.  The use of well-defined
> > interfaces allows clients and servers to communicate without
> > knowing the specifics of the resource.  Putting an object
> > interface onto a URI effectively tightly couples the
> > sender/receiver in a way that should never be considered part of
> > the web.  The problem with a non-well-defined interface is that
> > the client now has to discover the interface (or whether it's
> > changed) and create/change the sending messages.  With
> > well-defined interfaces, the components can talk to each other
> > without this discovery/interface compilation step, which will
> > scale/adapt/perform/be more reliable, etc.
>
> The difference between well-defined object-specific interfaces and
> defined uniform interfaces is that the latter scales better with
> intermediaries and with unanticipated forms of client.
>
> ....Roy
>

Received on Wednesday, 27 March 2002 18:06:06 UTC