
Re: Harvesting REST

From: Paul Prescod <paul@prescod.net>
Date: Tue, 16 Jul 2002 21:38:28 -0700
Message-ID: <3D34F4C4.F218915E@prescod.net>
To: David Orchard <dorchard@bea.com>, www-ws-arch@w3.org
CC: "'Mark Baker'" <distobj@acm.org>, "'Champion, Mike'" <Mike.Champion@softwareag-usa.com>

David Orchard wrote:
> 
> hmm.  When I was on the XLink WG, the charter specifically was for
> "Hypertext" links, which always involved a user-agent. 

In 1998 one of the reasons it was so great to be involved with XML was
that we were going to blur the distinction between data processing and
document processing. I don't understand the motivation today to build
these walls back up. It strikes me as more rhetorical than practical.
"REST is for THIS but we're doing THAT." Sometimes things are good for
both THIS and THAT. And sometimes THIS and THAT turn out to be more
similar than you think if you don't go out of your way to separate them.

>  Hence the
> onload/onrequest actuation axis, etc.  Also why ranges (for drag'n drop in
> gui apps) were added for xpointers.
> 
> There was explicit acknowledgement that hypertext linking was different than
> more general purpose xml linking.

XLink does not make such a distinction: "A link, as the term is used [in
XLink], is an explicit relationship between two or more data objects or
portions of data objects." Nothing about user agents. 

But is the terminology worth arguing over? Roy Fielding says that REST
is appropriate to hypermedia. Mark Baker says that it is appropriate to
web services. Even if you define hypermedia as belly dancing there is no
contradiction between those two statements. In my experience, REST
handles all of the same web services scenarios that SOAP/WSDL does, but
with a much higher degree of standardization and interoperability. 

It is not surprising that this is so: the Web itself must already handle
situations where reliability is paramount, where security is vital,
where asynchrony is required etc. etc. The only difference is that now
computers are the user agents rather than people and computers are
stupider so they need pre-parsed XML rather than HTML. We do not have to
reinvent the architecture from scratch. We only need to slip in XML for
HTML as we promised to do four years ago when we defined XML.
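
To make that concrete, here is a minimal sketch of what "slipping in XML for HTML" looks like in practice: the same plain HTTP GET the Web already uses, but with a machine-readable XML body that a program (the stupider user agent) parses instead of a person reading rendered HTML. The resource URI, namespace, and document shape below are hypothetical, invented for illustration.

```python
# Sketch: a REST-style exchange where the representation is XML, not HTML.
# A program might issue "GET /parts/123" against some hypothetical service
# and receive a response body like the one below (canned here so the
# example is self-contained).
import xml.etree.ElementTree as ET

response_body = """<part xmlns="urn:example:parts">
  <id>123</id>
  <name>widget</name>
  <price currency="USD">9.95</price>
</part>"""

# The "user agent" is now code: it navigates the representation by
# element name rather than by a human scanning a rendered page.
ns = {"p": "urn:example:parts"}
part = ET.fromstring(response_body)
name = part.findtext("p:name", namespaces=ns)
price = float(part.findtext("p:price", namespaces=ns))
print(name, price)
```

Nothing here required a new protocol or envelope format: GET, URIs, and media types carry the whole exchange, which is the architectural reuse the paragraph above is arguing for.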

-- 
Come discuss XML and REST web services at:
  Open Source Conference: July 22-26, 2002, conferences.oreillynet.com
  Extreme Markup: Aug 4-9, 2002,  www.extrememarkup.com/extreme/
Received on Wednesday, 17 July 2002 00:39:32 GMT
