
Choreography and the Semantic Web

From: Mark Baker <distobj@acm.org>
Date: Sat, 10 Aug 2002 17:55:00 -0400
To: Christopher B Ferris <chrisfer@us.ibm.com>
Cc: www-ws-arch@w3.org
Message-ID: <20020810175500.D8045@www.markbaker.ca>

On Sat, Aug 10, 2002 at 10:09:50AM -0400, Christopher B Ferris wrote:
> Mark Baker wrote:
> 
> <snip/>
> 
> The main characteristic of REST relevant to choreographing processes, or
> indeed any change of app state, is "hypermedia as the engine of
> application state"(*).  That is, instead of explicitly invoking methods
> to change state, the change is implicit by virtue of a URI being
> declared together with the knowledge that GET on that URI will effect
> the state change.
> 
> <cbf>
> This would be a violation of Web architecture. GET operations are
> supposed to be side-effect free. A GET is supposed to be a "safe"
> operation. You don't change state with GET. You might GET a
> representation, tweak it and then PUT it back to effect a state change.
> You might GET a representation of a resource which is an HTML form,
> complete the form and POST it to the URI associated with the form's
> action, but you are NEVER supposed to effect change via GET.
> </cbf>

You're not supposed to effect change on the *server*.  I'm talking about
effecting a change in the client: for example, a client initially in a
state where it doesn't know what Sun's stock price is, but which, after
invoking GET on a URI that identifies Sun's stock price, arrives at a
state where it does know the price.
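To make that distinction concrete, here's a minimal sketch (the URI and
the in-memory "server" are hypothetical, just stand-ins for a real HTTP
exchange): the GET is safe, in that it leaves server state untouched,
yet the client's application state still changes from "price unknown"
to "price known".

```python
# Sketch: a safe GET changes *client* state, not server state.
# The "server" here is a hypothetical in-memory resource map.

server_resources = {"http://example.org/stock/SUNW": "4.35"}

def http_get(uri):
    """Safe method: reads a representation, never mutates the server."""
    return server_resources[uri]

client_state = {}  # the client does not yet know the price

before = dict(server_resources)
client_state["sun_price"] = http_get("http://example.org/stock/SUNW")
after = dict(server_resources)

assert before == after               # server state untouched: GET was safe
assert "sun_price" in client_state   # but the client moved to a new state
```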

> <cbf>
> We're not building bots or spiders that merely chase down links and
> snarf information from the representations those links return. If you
> want to follow this analogy, what we're doing is building software
> agents that replace the human element in the completion of the
> equivalent of an HTML form returned on a GET on a URI.

Yes!  So why all the Web services specs that have nothing to do with
this?! 8-)

> This is where things start to get tricky. This is where you need to
> have a priori understanding of what "form" to expect such that the
> logic necessary to complete the "form" can be encoded in the "client".

Yes, sort of.  An issue with REST is that the number of representations
is not constrained in any way, so you potentially have an explosion of
formats to worry about, and no a priori expectation that you'll be able
to do anything with what you retrieve.

HTML, by virtue of being a very general format for human consumption
(i.e. all resources can have an HTML representation), solved this
problem for humans because there came an expectation that if you said
"Accept: text/html", you'd get something for the vast majority of
resources.  What we need is a similar format/model for machines, such
as RDF.
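The mechanism that makes this work is content negotiation: the same URI,
with the representation selected by the Accept header.  A sketch (the
URI and representations below are made up for illustration):

```python
# Sketch: one resource, multiple representations, chosen via Accept.
# In a real server this is HTTP content negotiation; here it is an
# in-memory stand-in keyed on (uri, media_type).

representations = {
    ("http://example.org/stock/SUNW", "text/html"):
        "<html><body>SUNW: 4.35</body></html>",
    ("http://example.org/stock/SUNW", "application/rdf+xml"):
        "<rdf:RDF><!-- machine-readable price --></rdf:RDF>",
}

def http_get(uri, accept):
    """Return the representation of `uri` matching the Accept header."""
    return representations[(uri, accept)]

# Same resource, one representation for humans, one for machines:
html = http_get("http://example.org/stock/SUNW", "text/html")
rdf = http_get("http://example.org/stock/SUNW", "application/rdf+xml")
assert html != rdf
```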

> You cannot expect the software agent retrieving such a representation
> to be capable of dealing with uncertainty as to what it might
> anticipate it would be asked for in completing the "form".

See below.

> A human interacting with Amazon.com to purchase a book can deal with
> the unexpected because they can reason about it. It isn't like they
> need some manner of brain transplant in order to complete an
> unanticipated form so that they can proceed with their purchase, as
> might be the case if Amazon decided to interject a questionnaire
> between the book selection step in a process and the final submission
> of the credit card payment information. If the same situation were
> presented to a software agent that hadn't been programmed to deal with
> the questionnaire, it would suffer a nervous breakdown and be unable
> to complete the process.

Not if it were programmed to deal with this type of uncertainty.  This
is the partial understanding[1] problem.  The agent may not know what a
questionnaire is, but it can assume it's ok to ignore it (hence the
need for mandatory extensions in RDF), and proceed to the next step.
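A rough sketch of that rule, assuming a hypothetical form encoding
where a field may carry a "mandatory" flag (the mustUnderstand idea):
the agent fills in what it knows, silently skips unknown optional
fields, and refuses to proceed only when an unknown field is mandatory.

```python
# Sketch of the partial-understanding rule: ignore what you don't
# understand unless it is explicitly flagged as mandatory.
# Field names and the form structure here are hypothetical.

KNOWN_FIELDS = {"title", "card_number"}

def complete_form(form):
    """Fill in understood fields, skip unknown optional ones, and
    refuse only when an unknown field is marked mandatory."""
    filled = {}
    for field in form:
        if field["name"] in KNOWN_FIELDS:
            filled[field["name"]] = "..."  # agent supplies a value
        elif field.get("mandatory"):
            raise ValueError("cannot proceed: unknown mandatory field "
                             + field["name"])
        # unknown and optional: safely ignored
    return filled

# An unanticipated questionnaire field does not break the agent:
form = [{"name": "title"},
        {"name": "favourite_colour"},  # unknown, optional -> ignored
        {"name": "card_number"}]
assert complete_form(form) == {"title": "...", "card_number": "..."}
```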

> In the case where there's a human involved, the agent (human) can work
> his way through a process directed by the server without ever having
> been told what process to expect. It is a bit premature to expect that
> the average software engineer can construct software agents with the
> same degree of daring. Sure, there's some seriously advanced work
> being done in this vein but it isn't yet at the level that is
> accessible to your average software engineer.
> </cbf>

I think that the principal reason it isn't accessible is not that the
work isn't ready; it is ready in the sense that no more research is
required.  But standardization is.  The reason it isn't accessible is
that the *tools* aren't ready.  There's no "libwww" for the Semantic
Web, yet.

 [1] http://www.w3.org/DesignIssues/Evolution.html#PartialUnderstanding

MB
-- 
Mark Baker, CTO, Idokorro Mobile (formerly Planetfred)
Ottawa, Ontario, CANADA.               distobj@acm.org
http://www.markbaker.ca        http://www.idokorro.com
Received on Saturday, 10 August 2002 17:54:28 GMT
