RE: REST; good for humans and machines

> On Sun, Jan 05, 2003 at 10:46:45AM -0700, Champion, Mike wrote:
> > So, I'll clarify what I said earlier:  If the representations being passed
> > around are being processed by humans, the REST interfaces are sufficient --
> > they just have to deliver the information, and the human reads it (or fills
> > out the form, or finds an appropriate hyperlink, or whatever).
>
> Wait a sec, Roy said that REST is suitable for automata.  That means no
> humans are required in the loop.  Did he not make that clear?

In the hypermedia model you have a machine that fetches a document with
links and traverses those links. If you have a uniform model (HTTP GET), then
all links are equivalent, i.e. they point to a document you can retrieve
(they are actually URLs). So you can automate the process of traversing
links, e.g. a spider, a search engine, a cache engine.
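
To make that concrete, here is a rough sketch (in Python, with a hypothetical
starting URL) of how little such a traverser needs to know: one GET loop
covers every link it finds.

    # Toy spider sketch: every link is just a URL you can GET, so one loop
    # handles all of them the same way.
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen

    class LinkCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def crawl(start_url, limit=10):
        seen, queue = set(), [start_url]
        while queue and len(seen) < limit:
            url = queue.pop(0)
            if url in seen:
                continue
            seen.add(url)
            with urlopen(url) as resp:   # a plain GET, identical for every link
                html = resp.read().decode("utf-8", errors="replace")
            collector = LinkCollector()
            collector.feed(html)
            queue.extend(urljoin(url, href) for href in collector.links)
        return seen

    # crawl("http://example.com/")   # hypothetical starting point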

When we get to purchase order management, not all links are equivalent. One
will just retrieve the purchase order status, another would cancel it, yet
another would change the shipment time. A machine is not going to start with
a purchase order document and randomly traverse all the links; the links have
to be qualified.
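
For illustration only, a purchase order representation with such
non-equivalent links might carry something like the following (the relation
names, methods and URLs are all made up):

    # Hypothetical purchase order representation. Each link is tagged with a
    # relation name and the method it expects, because GET-ing them all
    # blindly would be wrong (one of them cancels the order).
    purchase_order = {
        "id": "po-1234",
        "links": {
            "status":          {"method": "GET",  "href": "http://example.com/po/1234/status"},
            "cancel":          {"method": "POST", "href": "http://example.com/po/1234/cancel"},
            "change-shipment": {"method": "POST", "href": "http://example.com/po/1234/shipment"},
        },
    }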

Qualifying links is where you define the interface for the application. You
assign significance to each operation so you can write code that decides,
based on its own logic, to traverse one link rather than another, or not to
follow any link at all. This is where you need a service description language.
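
A sketch of what that client logic could look like, assuming the hypothetical
purchase_order representation above plus a made-up send() helper and
shipment_is_acceptable predicate (nothing here is a real API):

    from urllib.request import Request, urlopen

    def send(method, url, body=None):
        # Hypothetical helper: issue one HTTP request, return the response body.
        req = Request(url, data=body, method=method)
        with urlopen(req) as resp:
            return resp.read()

    def handle_order(order, shipment_is_acceptable):
        links = order["links"]
        # The choice of which link to traverse rests on the significance we
        # assigned to each relation -- exactly the knowledge a service
        # description would have to convey to a machine.
        status = send(links["status"]["method"], links["status"]["href"])
        if not shipment_is_acceptable(status):
            return send(links["cancel"]["method"], links["cancel"]["href"])
        return status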

The premise of software that operates on the significance of data is that it
needs this service definition.

The premise of software that operates on arbitrary documents is that it does
not need a service definition, only a raw protocol.

When you build a Web service today you are actually building the former on
top of the latter. The former uses WSDL, the latter uses HTTP (or SMTP or
IIOP or FTP).

I think at this point in our lives we have the mechanics of HTTP boiled down,
and so we're trying to address what comes on top of that. This doesn't
dispute the value of HTTP, it just means we need something that works on top
of it. Roy's thesis continuously claims that REST is designed to work
strictly at the level of arbitrary document retrieval.

arkin


>
> > I just can't get worked up over the distinction between POSTing a
> > getLastSharePriceOfIBM message and GETing a
> > http://www.stockquotes.com/ibm/lastshareprice resource. There are advantages
> > and disadvantages of both -- the former is easier to automate, leverages XML
> > more readily, avoids URI encoding issues ... the latter is hyperlinkable,
> > cacheable, more easily integrated with the human-oriented web, etc.  I take
> > the rather quixotic job of replying to this permathread because I'd love to
> > see a detailed enumeration of the advantages and disadvantages so that we
> > can put them in the WSA document.
>
> There's only one point in this enumeration that matters when it comes to
> working across trust boundaries; late binding via a coordination
> language.
>
> So I'll give you +1 for "leverages XML" for getLastSharePriceOfIBM, but
> -1000 for not being late bound.  That's the magnitude of the
> architectural implications we're talking about here, and what I mean by
> "low coordination costs"; fundamentally, with REST and other Internet
> scale architectural styles, you're agreeing on higher level stuff up
> front, so things are made easier at runtime.
>
> Can you not see how much easier it is to coordinate the communication
> of information between untrusted parties with URIs and GET?  Or do you
> see it, but don't feel that the difference matters?  An answer to that
> would help me focus my responses.
>
> MB
> --
> Mark Baker.   Ottawa, Ontario, CANADA.        http://www.markbaker.ca
> Web architecture consulting, technical reports, evaluation & analysis
>

Received on Sunday, 5 January 2003 15:22:20 UTC