
Re: Hypermedia vs. data (long)

From: Miles Sabin <miles@milessabin.com>
Date: Tue, 31 Dec 2002 23:08:30 +0000
To: www-ws-arch@w3.org
Message-Id: <200212312308.30265.miles@milessabin.com>

Christopher B Ferris wrote,
> REST is based on the premise that the agent receiving the data has
> but one responsibility; to render it for (typically) human
> consumption. It is based on a limited set of standardized media types.
> It is low-coordination because the function of the user agent is
> simply to render these standardized media types.

In fairness to the REST-heads, that isn't its premise, and I don't think 
claiming it is will help Mark see round his blind-spot wrt the semantic 
issues.

The premise of REST, at least as far as I understand it, is that an 
important class of distributed applications can be modelled as state 
transition networks, with resources as nodes, URIs as edges, and the 
operation of the application implemented as transitions over and 
modifications of that network. Hypertext rendered for human consumption 
is one application of this model, certainly, but there's no reason 
whatsoever why it should be the only one.
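To make that reading concrete, here's a minimal sketch (all URIs and names hypothetical, not from any real service): the application is just a graph of resources, each representation carrying the URIs an agent can follow next, and "running" the application is traversing that graph.

```python
# Hypothetical sketch of a REST-style app as a state transition network:
# resources are nodes, and the URIs embedded in each representation are
# the edges an agent may follow next.

resources = {
    "/orders":           {"/orders/42", "/orders/43"},
    "/orders/42":        {"/orders/42/cancel", "/orders"},
    "/orders/43":        {"/orders"},
    "/orders/42/cancel": {"/orders"},
}

def reachable(start, graph):
    """All application states an agent can reach from `start` by
    following links, i.e. a traversal of the transition network."""
    seen, frontier = set(), [start]
    while frontier:
        node = frontier.pop()
        if node not in seen:
            seen.add(node)
            frontier.extend(graph.get(node, ()))
    return seen

print(sorted(reachable("/orders", resources)))
```

Nothing here says what cancelling an order *does*, which is exactly the point argued below: the graph captures structure, not meaning.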

But the issue Mark seems unwilling to address is the fact that the model 
stated this abstractly says nothing at all about the semantics of the 
application. Those semantics are the _interpretation_ of the graph: 
what the nodes/resources, edges/URIs, transitions and modifications 
_mean_ in the context of the correct functioning of the application.

That interpretation isn't present in the graph itself (unless you count 
not-currently-machine-digestible annotations in the resources) and has 
to come from somewhere else. It can't simply be inferred from the 
structure of the graph, for the simple reason that graphs have a nasty 
habit of being isomorphic to one another. If all you look at is the 
structure and don't have anything else to pin down the significance of 
the resources and URIs, many applications will end up being 
indistinguishable from each other: e.g. a pair of apps identical other 
than having "buy" and "sell" transposed might have exactly the same 
structure in REST terms ... pretty clearly a problem on the not 
unreasonable assumption that "buy" and "sell" have rather different 
real-world significance.
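The buy/sell transposition can be shown directly (a toy illustration, all URIs hypothetical): the two applications below are the same graph up to a renaming of nodes, so no purely structural inspection can tell a purchase flow from a sale flow.

```python
# Hypothetical illustration: two apps whose transition networks are
# isomorphic, differing only in which node means "buy" and which "sell".

app_a = {"/home": {"/buy"},  "/buy":  {"/confirm"}, "/confirm": {"/home"}}
app_b = {"/home": {"/sell"}, "/sell": {"/confirm"}, "/confirm": {"/home"}}

swap = {"/buy": "/sell", "/sell": "/buy"}

def relabel(graph, mapping):
    """Rename nodes; an isomorphism leaves the structure untouched."""
    return {
        mapping.get(node, node): {mapping.get(t, t) for t in targets}
        for node, targets in graph.items()
    }

# Swapping the labels turns one app into the other, so the two are
# structurally identical despite opposite real-world significance.
print(relabel(app_a, swap) == app_b)  # True
```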

Interpretation is the bit that REST leaves out (or, rather, simply rules 
out of scope), quite rightly IMO. But it's pretty clearly something 
that needs to be added if you want something that actually does 
something useful. In the case of hypertext it's added by a mixture of 
hard-coded behaviour in user-agents and intelligent behaviour on the 
part of users. If human intervention is ruled out for Web services, 
then that just leaves hard-coded behaviour of a greater or lesser 
degree of sophistication ... or verb inflation from Mark's POV. I think 
that's both unavoidable and not in any obvious way damaging to REST as 
an architectural style.

In fairness to Mark tho', I think it's still a wide open question 
whether or not WSDL or DAML-S or whatever are up to the job either. But 
I don't think that justifies ostrich-like behaviour.

Cheers,


Miles
Received on Tuesday, 31 December 2002 18:09:02 GMT
