- From: Christopher B Ferris <chrisfer@us.ibm.com>
- Date: Sat, 10 Aug 2002 10:09:50 -0400
- To: Mark Baker <distobj@acm.org>
- Cc: www-ws-arch@w3.org
Mark,

Please see below.

Cheers,

Christopher Ferris
Architect, Emerging e-business Industry Architecture
email: chrisfer@us.ibm.com
phone: +1 508 234 3624

Mark Baker <distobj@acm.org>
Sent by: www-ws-arch-request@w3.org
08/10/2002 12:00 AM
To: "Champion, Mike" <Mike.Champion@SoftwareAG-USA.com>
cc: www-ws-arch@w3.org
Subject: Re: Choreography and REST

Mark Baker wrote:
<snip/>
The main characteristic of REST relevant to choreographing processes, or indeed any change of app state, is "hypermedia as the engine of application state"(*). That is, instead of explicitly invoking methods to change state, the change is implicit by virtue of a URI being declared together with the knowledge that GET on that URI will effect the state change.

<cbf>
This would be a violation of Web architecture. GET operations are supposed to be side-effect free; GET is a "safe" operation. You don't change state with GET. You might GET a representation, tweak it, and then PUT it back to effect a state change. You might GET a representation of a resource which is an HTML form, complete the form, and POST it to the URI given in the form's action, but you are NEVER supposed to effect change via GET.
</cbf>

(*) "The model application is therefore an engine that moves from one state to the next by examining and choosing from among the alternative state transitions in the current set of representations. Not surprisingly, this exactly matches the user interface of a hypermedia browser. However, the style does not assume that all applications are browsers. In fact, the application details are hidden from the server by the generic connector interface, and thus a user agent could equally be an automated robot performing information retrieval for an indexing service, a personal agent looking for data that matches certain criteria, or a maintenance spider busy patrolling the information for broken references or modified content."
-- http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_3_3

<cbf>
We're not building bots or spiders that merely chase down links and snarf information from the representations those links return. If you want to follow this analogy, what we're doing is building software agents that replace the human element in completing the equivalent of an HTML form returned by a GET on a URI.

This is where things start to get tricky. You need an a priori understanding of what "form" to expect, so that the logic necessary to complete the "form" can be encoded in the "client". You cannot expect the software agent retrieving such a representation to cope with uncertainty about what it will be asked for in completing the "form".

A human interacting with Amazon.com to purchase a book can deal with the unexpected because they can reason about it. They don't need some manner of brain transplant to complete an unanticipated form and proceed with their purchase, as might be the case if Amazon decided to interject a questionnaire between the book-selection step and the final submission of the credit card payment information. Present the same situation to a software agent that hasn't been programmed to deal with the questionnaire, and it would suffer a nervous breakdown and be unable to complete the process. Where a human is involved, the agent (the human) can work their way through a process directed by the server without ever having been told what process to expect.
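To illustrate, here is a minimal sketch (nothing more; the form fields, values, and resource names are all hypothetical) of an agent that has been programmed against exactly one expected "form":

    # A hypothetical agent that can only complete the one "form" it was
    # programmed for. All field names and values are made up.
    KNOWN_ANSWERS = {
        "title":       "Some Book",
        "quantity":    "1",
        "card_number": "4111111111111111",  # a dummy test card number
    }

    def complete_form(fields_requested):
        """Answer the fields the server asks for, or give up entirely."""
        answers = {}
        for field in fields_requested:
            if field not in KNOWN_ANSWERS:
                # A human shopper would simply read and answer the new
                # question; this agent can only abort the whole process.
                raise RuntimeError("unanticipated form field: " + field)
            answers[field] = KNOWN_ANSWERS[field]
        return answers

    complete_form(["title", "quantity", "card_number"])  # completes fine
    complete_form(["title", "questionnaire_q1"])  # raises: the "nervous breakdown"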
It is a bit premature to expect that the average software engineer can construct software agents with that degree of daring. Sure, there is some seriously advanced work being done in this vein, but it isn't yet at a level that is accessible to the average software engineer.
</cbf>
<snip/>
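P.S. To make the earlier safe-method point concrete, here is a minimal sketch of the two legitimate update patterns (GET/tweak/PUT, and GET a form then POST to its action URI). The URIs and representations are hypothetical; the point is only that GET itself never changes state:

    import urllib.request

    ORDER = "http://example.org/orders/42"  # hypothetical resource

    # GET is safe: retrieving a representation must not change resource state.
    rep = urllib.request.urlopen(ORDER).read()

    # Pattern 1: GET a representation, tweak it, and PUT it back.
    new_rep = rep.replace(b"<status>open</status>", b"<status>closed</status>")
    put = urllib.request.Request(ORDER, data=new_rep, method="PUT")
    urllib.request.urlopen(put)

    # Pattern 2: complete the retrieved "form" and POST it to the form's
    # action URI; supplying data makes urlopen issue a POST.
    urllib.request.urlopen(ORDER + "/confirm", data=b"quantity=2&confirm=yes")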
Received on Saturday, 10 August 2002 10:35:34 UTC