- From: Christopher B Ferris <chrisfer@us.ibm.com>
- Date: Sun, 11 Aug 2002 09:22:39 -0400
- To: www-ws-arch@w3.org
Please see below.

Cheers,

Christopher Ferris
Architect, Emerging e-business Industry Architecture
email: chrisfer@us.ibm.com
phone: +1 508 234 3624

Mark Baker <distobj@acm.org>
To: Christopher B Ferris/Waltham/IBM@IBMUS
cc: www-ws-arch@w3.org
08/10/2002 05:55 PM
Subject: Choreography and the Semantic Web

On Sat, Aug 10, 2002 at 10:09:50AM -0400, Christopher B Ferris wrote:
> Mark Baker wrote:
> <snip/>

<snip/>

> <cbf>
> We're not building bots or spiders that merely chase down links and
> snarf information from the representations those links return. If you
> want to follow this analogy, what we're doing is building software
> agents that replace the human element in the completion of the
> equivalent of an HTML form returned on a GET on a URI.

Yes! So why all the Web services specs that have nothing to do with
this?! 8-)

<cbf>
What makes you say this? In many regards, this pattern is fully
realized in Web services technology. WSDL is the general equivalent of
the HTML form. The "client" can retrieve (GET) the WSDL description for
the service it desires and use it to construct the appropriate message
to send to the service (HTML form POST). Of course, you're probably
saying to yourself, but there's no "wash, rinse, repeat" to this
pattern, and I would tend to agree. However, the technology doesn't
prevent it either.
</cbf>

> This is where things start to get tricky. This is where you need to
> have a priori understanding of what "form" to expect such that the
> logic necessary to complete the "form" can be encoded in the "client".

Yes, sort of. An issue with REST is that the number of representations
is not constrained in any way, so you potentially have an explosion of
formats to worry about, and no a priori expectation that you'll be able
to do anything with what you retrieve. HTML, by virtue of being a very
general format for human consumption (i.e.
all resources can have an HTML representation), solved this problem for
humans, because there came an expectation that if you said "Accept:
text/html", you'd get something for the vast majority of resources.
What we need is a similar format/model for machines, such as RDF.

<cbf>
So, what I read into this is that you are saying "throw out WSDL and
SOAP and use RDF instead". What's wrong with replacing the
application/rdf+xml you suggest above with application/soap+xml and/or
application/wsdl+xml?
</cbf>

> You cannot expect the software agent retrieving such a representation
> to be capable of dealing with uncertainty as to what it might
> anticipate it would be asked for in completing the "form".

See below.

> A human interacting with Amazon.com to purchase a book can deal with
> the unexpected because they can reason about it. It isn't like they
> need some manner of brain transplant in order to complete an
> unanticipated form so that they can proceed with their purchase, as
> might be the case if Amazon decided to interject a questionnaire
> between the book selection step in a process and the final submission
> of the credit card payment information. If the same situation were
> presented to a software agent that hadn't been programmed to deal
> with the questionnaire, it would suffer a nervous breakdown and be
> unable to complete the process.

Not if it were programmed to deal with this type of uncertainty. This

<cbf>
As I have said, we aren't at the stage where your average software
engineer can do this, which is my point. You are dismissing a
significant impediment to widespread adoption with a wave of some
magical pixie dust that endows the software interacting with a Web
service with the ability to "ask the cat", as it were.
</cbf>

is the partial understanding[1] problem. The agent may not know what a
questionnaire is, but it can assume it's ok to ignore it (hence the
need for mandatory extensions in RDF), and proceed to the next step.
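[Editor's note: the "ignore unless mandatory" behaviour Mark describes
can be sketched in a few lines. This is a hypothetical illustration of
the pattern only, not any real SOAP or RDF API; the step names and the
`mandatory` flag are invented for the example.]

```python
# Sketch of "partial understanding": the agent walks server-directed
# steps, ignores any step it does not recognize unless the server has
# flagged that step as mandatory, and only then gives up.
# All names here are hypothetical, for illustration only.

KNOWN_STEPS = {"select_book", "submit_payment"}

def process(steps):
    """Complete the known steps; skip unknown optional ones."""
    completed = []
    for step in steps:
        name = step["name"]
        if name in KNOWN_STEPS:
            completed.append(name)  # the agent knows this step
        elif step.get("mandatory"):
            # unknown AND mandatory: the agent must not guess
            raise ValueError(f"cannot understand mandatory step: {name}")
        # unknown but optional (e.g. a questionnaire): safely ignored
    return completed

# The interjected questionnaire is unknown but optional, so the
# purchase still completes:
print(process([
    {"name": "select_book"},
    {"name": "questionnaire"},  # unexpected, not mandatory
    {"name": "submit_payment"},
]))
# → ['select_book', 'submit_payment']
```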
<cbf>
SOAP already provides for mandatory extensions, as you well know.
However, I'm not talking about extensions. I'm talking about a process,
and about a piece of software that is expecting foo but is presented
with bar.
</cbf>

> In the case where there's a human involved, the agent (human) can
> work his way through a process directed by the server without ever
> having been told what process to expect. It is a bit premature to
> expect that the average software engineer can construct software
> agents with the same degree of daring. Sure, there's some seriously
> advanced work being done in this vein, but it isn't yet at the level
> that is accessible to your average software engineer.
> </cbf>

I think that the principal reason it isn't accessible is not that the
work isn't ready; it is ready in the sense that no more research is
required. But standardization is. The reason it isn't accessible is
that the *tools* aren't ready. There's no "libwww" for the Semantic
Web, yet.

<cbf>
Hmmm... no strong typing, rampant disagreement as to the use of
fragment identifiers, no canonical expression that facilitates parsing;
I'd say it isn't quite ready. But then, why are you bringing this
discussion here? Shouldn't you be harping on the SW crowd to add
mandatory extensions, standardize, and develop tooling such that the
technology can be accessible to a broader audience of engineers?
</cbf>

[1] http://www.w3.org/DesignIssues/Evolution.html#PartialUnderstanding

MB
--
Mark Baker, CTO, Idokorro Mobile (formerly Planetfred)
Ottawa, Ontario, CANADA.                distobj@acm.org
http://www.markbaker.ca         http://www.idokorro.com
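[Editor's note: the "mandatory extensions" Chris refers to are SOAP's
mustUnderstand mechanism: a header block marked mustUnderstand="1" must
either be processed or cause a fault, while unmarked blocks may be
ignored. A minimal SOAP 1.1 envelope sketch; the Questionnaire header
and its namespace are hypothetical.]

```xml
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header>
    <!-- hypothetical extension; mustUnderstand="1" means a receiver
         that does not understand it MUST fault rather than ignore it -->
    <q:Questionnaire xmlns:q="http://example.org/questionnaire"
                     soap:mustUnderstand="1"/>
  </soap:Header>
  <soap:Body>
    <!-- application payload goes here -->
  </soap:Body>
</soap:Envelope>
```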
Received on Sunday, 11 August 2002 09:32:17 UTC