RE: Hypermedia vs. data (long)

One other thing worth pointing out on this thread, beyond endorsing Chris's and Miles's excellent messages, is that HTTP was designed to be extensible.

I've recently been poking around a bit in the HTTP specs and their history.  The original HTTP dates from 1990-91; HTTP/1.0 was documented in RFC 1945 in May 1996; and HTTP/1.1, first specified in RFC 2068 in January 1997, underwent its major revision in June 1999 as RFC 2616.  Roy's thesis appeared in 2000.

At the top of the 1.1 spec, it says of HTTP:

"It is a generic, stateless, protocol which can be used for
   many tasks beyond its use for hypertext, such as name servers and
   distributed object management systems, through extension of its
   request methods, error codes and headers..."

So HTTP 1.1 itself includes the idea of extensibility.  The HTTP Extension Framework (RFC 2774) and the HTTP-NG effort both pursued this direction, though neither seems to have gained much acceptance, for whatever reason.
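To make those extension hooks concrete: an HTTP/1.1 client is free to send extension methods and headers, and RFC 2774 formalized this with "M-"-prefixed methods and a Man header naming the extension.  A minimal sketch in Python (the extension URI and server are invented for illustration):

    import http.client

    # RFC 2774-style mandatory extension: the "Man" header names the
    # extension, the method carries the "M-" prefix, and extension
    # headers use the declared namespace prefix.  Whether any given
    # server understands this is, of course, another matter.
    conn = http.client.HTTPConnection("example.org")
    conn.request("M-GET", "/resource",
                 headers={"Man": '"http://example.org/my-extension"; ns=11',
                          "11-Extra-Info": "some-value"})
    response = conn.getresponse()
    print(response.status, response.reason)  # 510 Not Extended if refused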

An early presentation about SOAP from Henrik Frystyk Nielsen to the W3C community described SOAP as a kind of extension to HTTP.
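On the wire, that framing is easy to see: a SOAP 1.1 call is just an ordinary HTTP POST carrying an XML envelope, with a SOAPAction header layered on top.  A sketch, with a made-up endpoint and operation:

    import http.client

    # Hypothetical service and operation, purely for illustration.
    envelope = """<?xml version="1.0"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <GetQuote xmlns="http://example.org/stock">
          <Symbol>W3C</Symbol>
        </GetQuote>
      </soap:Body>
    </soap:Envelope>"""

    conn = http.client.HTTPConnection("example.org")
    conn.request("POST", "/quote", body=envelope,
                 headers={"Content-Type": "text/xml; charset=utf-8",
                          "SOAPAction": '"http://example.org/stock/GetQuote"'})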

The point is that HTTP was designed to be extensible; its authors recognized the need to extend it to meet the requirements of other kinds of Web applications (such as Web services); and among the attempts to extend HTTP in this direction, SOAP and WSDL have achieved the widest adoption (for whatever reasons).

To argue that a thesis published in 2000, which essentially summarizes experience and work around HTTP 1.1, constitutes a constraining architecture for all Web applications seems counter to the history of HTTP itself, which includes the idea that HTTP will need to be extended.

There may be a valid argument over which approach to extension has the most merit.  But since actual adoption and implementation of technology are not always aligned with theory, it seems far less valuable to keep arguing about whether SOAP and WSDL make sense as HTTP extensions when no other approach has gained comparable acceptance.

A lot of this boils down, as we've discussed before, to whether semantic information, or at least pointers to such information, can be carried within the documents being exchanged.  REST would say no, but extensions to HTTP might well say yes, as they have.
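To illustrate what such a pointer might look like in practice: an XML business document typically carries namespace URIs that an agent can use to locate schemas, WSDL, or other semantic descriptions out of band.  A toy sketch (document and namespace invented):

    import xml.etree.ElementTree as ET

    # The namespace URI below is a hypothetical "pointer to semantics".
    doc = """<po:order xmlns:po="http://example.org/schemas/purchase-order">
                 <po:item sku="1234" quantity="2"/>
             </po:order>"""

    root = ET.fromstring(doc)
    # ElementTree folds the namespace into the tag name: '{uri}order'
    semantics_pointer = root.tag[1:].split('}')[0]
    print(semantics_pointer)  # http://example.org/schemas/purchase-order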

Eric

-----Original Message-----
From: Miles Sabin [mailto:miles@milessabin.com]
Sent: Tuesday, December 31, 2002 6:09 PM
To: www-ws-arch@w3.org
Subject: Re: Hypermedia vs. data (long)


Christopher B Ferris wrote,
> REST is based on the premise that the agent receiving the data has
> but one responsibility; to render it for (typically) human
> consumption. It is based on a limited set of standardized media types.
> It is low-coordination because the function of the user agent is
> simply to render these standardized media types.

In fairness to the REST-heads, that isn't its premise, and I don't think 
claiming it is will help Mark see round his blind-spot wrt the semantic 
issues.

The premise of REST, at least as far as I understand it, is that an 
important class of distributed applications can be modelled as state 
transition networks, with resources as nodes, URIs as edges, and the 
operation of the application implemented as transitions over and 
modifications of that network. Hypertext rendered for human consumption 
is one application of this model, certainly, but there's no reason 
whatsoever why it should be the only one.
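As a toy rendering of that premise (all names invented): the 
"application" below is nothing but a graph of resources and labelled 
links, and running it is just a walk over that graph.

    # Resources as nodes, links (dereferenceable via URIs) as edges.
    links = {
        "/orders":     {"first": "/orders/1"},
        "/orders/1":   {"next": "/orders/2", "cancel": "/orders/1/x"},
        "/orders/2":   {},
        "/orders/1/x": {},
    }

    def follow(state, rel):
        # One state transition: follow the link labelled `rel`.
        return links[state][rel]

    state = "/orders"
    state = follow(state, "first")    # -> "/orders/1"
    state = follow(state, "cancel")   # -> "/orders/1/x"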

But the issue Mark seems unwilling to address is the fact that the model 
stated this abstractly says nothing at all about the semantics of the 
application. Those semantics are the _interpretation_ of the graph: 
what the nodes/resources, edges/URIs, transitions and modifications 
_mean_ in the context of the correct functioning of the application.

That interpretation isn't present in the graph itself (unless you count 
not-currently-machine-digestible annotations in the resources) and has 
to come from somewhere else. It can't simply be inferred from the 
structure of the graph, for the simple reason that graphs have a nasty 
habit of being isomorphic to one another. If all you look at is the 
structure and don't have anything else to pin down the significance of 
the resources and URIs, many applications will end up being 
indistinguishable from each other: e.g. a pair of apps identical other 
than having "buy" and "sell" transposed might have exactly the same 
structure in REST terms ... pretty clearly a problem on the not 
unreasonable assumption that "buy" and "sell" have rather different 
real-world significance.
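A few lines of code make the point (invented apps): forget the link 
labels and the two applications below collapse into the same graph, so 
structure alone can't tell a machine agent which one it's talking to.

    # Identical structure, with "buy" and "sell" transposed.
    app_a = {"/offer": {"buy": "/done", "sell": "/declined"}}
    app_b = {"/offer": {"sell": "/done", "buy": "/declined"}}

    # Strip the labels and only the shape of the graph remains:
    shape = lambda app: {node: sorted(edges.values())
                         for node, edges in app.items()}
    assert shape(app_a) == shape(app_b)  # structurally indistinguishable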

Interpretation is the bit that REST leaves out (or, rather, simply rules 
out of scope), quite rightly IMO. But it's pretty clearly something 
that needs to be added if you want something that actually does 
something useful. In the case of hypertext it's added by a mixture of 
hard-coded behaviour in user-agents and intelligent behaviour on the 
part of users. If human intervention is ruled out for Web services, 
then that just leaves hard-coded behaviour of a greater or lesser 
degree of sophistication ... or verb inflation from Mark's POV. I think 
that's both unavoidable and not in any obvious way damaging to REST as 
an architectural style.

In fairness to Mark tho', I think it's still a wide open question 
whether or not WSDL or DAML-S or whatever are up to the job either. But 
I don't think that justifies ostrich-like behaviour.

Cheers,


Miles
