
Re: Choreography and the Semantic Web

From: Christopher B Ferris <chrisfer@us.ibm.com>
Date: Sun, 11 Aug 2002 09:22:39 -0400
To: www-ws-arch@w3.org
Message-ID: <OF946ED420.6AB8B352-ON85256C12.0045485F@rchland.ibm.com>

Please see below.


Christopher Ferris
Architect, Emerging e-business Industry Architecture
email: chrisfer@us.ibm.com
phone: +1 508 234 3624

Mark Baker <distobj@acm.org>
To: Christopher B Ferris/Waltham/IBM@IBMUS
cc: www-ws-arch@w3.org
08/10/2002 05:55
Subject: Choreography and the Semantic Web

On Sat, Aug 10, 2002 at 10:09:50AM -0400, Christopher B Ferris wrote:
> Mark Baker wrote:
> <snip/>

> <cbf>
> We're not building bots or spiders that merely chase down links and
> snarf information from the representations those links return. If you
> want to follow this analogy, what we're doing is building software
> agents that replace the human element in the completion of the
> equivalent of an HTML form returned on a GET on a URI.

Yes!  So why all the Web services specs that have nothing to do with
this?! 8-)

What makes you say this? In many regards, this pattern is fully realized in
Web services technology. WSDL is the general equivalent of the HTML form.
The "client" can retrieve (GET) the WSDL description for the service and
use it to construct the appropriate message to send to the service (form
POST). Of course, you're probably saying to yourself, but there's no
"wash, rinse, repeat" to this pattern, and I would tend to agree. However,
the technology doesn't prevent it either.
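The GET-the-description / construct-the-message pattern described above
could be sketched roughly as follows. This is only an illustration: the
WSDL fragment and the "orderBook" operation are invented, and a real
client would GET the description over HTTP rather than inline it.

```python
# Hypothetical sketch of the pattern: read the "form" (a toy WSDL
# description), then build the message to POST from what it declares.
import xml.etree.ElementTree as ET

WSDL_NS = "http://schemas.xmlsoap.org/wsdl/"
SOAP_ENV = "http://schemas.xmlsoap.org/soap/envelope/"

# Step 1: the "form" -- a minimal, invented WSDL fragment the client
# would have retrieved with a GET on the service's description URI.
wsdl = ET.fromstring(
    '<definitions xmlns="http://schemas.xmlsoap.org/wsdl/">'
    '<portType name="BookService">'
    '<operation name="orderBook"/>'
    '</portType>'
    '</definitions>'
)

# Step 2: read the operation name out of the description, much as a
# browser reads field names out of an HTML form.
op = wsdl.find(f'{{{WSDL_NS}}}portType/{{{WSDL_NS}}}operation').get('name')

# Step 3: construct the SOAP message to POST, driven by the description.
env = ET.Element(f'{{{SOAP_ENV}}}Envelope')
body = ET.SubElement(env, f'{{{SOAP_ENV}}}Body')
ET.SubElement(body, op)  # the filled-in "form": <orderBook/>

request = ET.tostring(env, encoding='unicode')
```

The point of the sketch is only that the message is derived from the
retrieved description, not hard-wired into the client.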

> This is where things start to get tricky. This is where you need to
> have a priori understanding of what "form" to expect such that the
> logic necessary to complete the "form" can be encoded in the "client".

Yes, sort of.  An issue with REST is that the number of representations
is not constrained in any way, so you potentially have an explosion of
formats to worry about, and no a priori expectation that you'll be able
to do anything with what you retrieve.

HTML, by virtue of being a very general format for human consumption
(i.e. all resources can have an HTML representation), solved this
problem for humans because there came an expectation that if you said
"Accept: text/html", you'd get something for the vast majority of
resources.  What we need is a similar format/model for machines, such
as RDF.
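The expectation Mark describes is plain HTTP content negotiation: the
same URI, asked for in different formats via the Accept header. A
minimal sketch, with an invented host and path:

```python
# Hypothetical sketch of Accept-header negotiation: the same resource
# requested as HTML (for humans) and as RDF (for machines).
def build_request(uri_path, accept):
    """Construct a minimal HTTP/1.1 GET request asking for a format."""
    return (
        f"GET {uri_path} HTTP/1.1\r\n"
        f"Host: example.org\r\n"
        f"Accept: {accept}\r\n"
        "\r\n"
    )

# A human-oriented agent asks for an HTML representation...
for_humans = build_request("/books/1234", "text/html")
# ...a machine-oriented agent could ask for RDF of the same resource.
for_machines = build_request("/books/1234", "application/rdf+xml")
```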

So, what I read into this is that you are saying "throw out WSDL and SOAP
and use RDF instead". What's wrong with replacing the application/rdf+xml
you suggest above with application/soap+xml and/or application/wsdl+xml?

> You cannot expect the software agent retrieving such a representation
> to be capable of dealing with uncertainty as to what it might be asked
> for in completing the "form".

See below.

> A human interacting with Amazon.com to purchase a book can deal with
> the unexpected because they can reason about it. It isn't as if they
> need some manner of brain transplant in order to complete an
> unanticipated form so that they can proceed with their purchase, as
> might be the case if Amazon decided to interject a questionnaire
> between the book selection step in a process and the final submission
> of the credit card payment information. If the same situation were
> presented to a software agent that hadn't been programmed to deal with
> the questionnaire, it would suffer a nervous breakdown and be unable
> to complete the process.

Not if it were programmed to deal with this type of uncertainty.  This
is the partial understanding[1] problem.  The agent may not know what a
questionnaire is, but it can assume it's ok to ignore it (hence the
need for mandatory extensions in RDF), and proceed to the next step.

As I have said, we aren't at the stage where your average software
engineer can do this, which is my point. You are dismissing a significant
impediment to widespread adoption with a wave of some magical pixie dust
that endows the software interacting with a Web service with the ability
to "ask the cat", as it were.

SOAP already provides for mandatory extensions, as you well know. However,
I'm not talking about extensions. I'm talking about a process and about
a piece of software that is expecting foo but is presented with bar.
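The SOAP mandatory-extension mechanism referred to here is the
mustUnderstand attribute on a header block: a receiver that does not
recognize such a block must fault rather than silently ignore it. A
minimal sketch, with an invented "Questionnaire" extension standing in
for the unexpected step:

```python
# Hypothetical sketch of a SOAP 1.1-style message carrying a header
# block the receiver MUST process or fault on. "Questionnaire" is an
# invented extension name for illustration only.
import xml.etree.ElementTree as ET

SOAP_ENV = "http://schemas.xmlsoap.org/soap/envelope/"

env = ET.Element(f'{{{SOAP_ENV}}}Envelope')
header = ET.SubElement(env, f'{{{SOAP_ENV}}}Header')
# Mark the extension as mandatory: a receiver that doesn't understand
# it cannot just skip it and proceed.
ext = ET.SubElement(header, 'Questionnaire')
ext.set(f'{{{SOAP_ENV}}}mustUnderstand', '1')
ET.SubElement(env, f'{{{SOAP_ENV}}}Body')

msg = ET.tostring(env, encoding='unicode')
```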

> In the case where there's a human involved, the agent (human) can work
> his
> way through a process directed by the server without ever having been
> told
> what process to expect. It is a bit premature to expect that the average
> software engineer can construct software agents with the same degree of
> daring. Sure, there's some seriously advanced work being done in this
> vein
> but it isn't yet at the level that is accessible to your average
> software
> engineer.
> </cbf>

I think that the principal reason it isn't accessible is not that the
work isn't ready; it is ready in the sense that no more research is
required, but standardization is.  The reason it isn't accessible is
that the *tools* aren't ready.  There's no "libwww" for the Semantic
Web, yet.

Hmmm... no strong typing, rampant disagreement as to the use of fragment
identifiers, and no canonical expression that facilitates parsing; I'd
say it isn't quite ready.

But then, why are you bringing this discussion here? Shouldn't you be
harping on the SW crowd to add mandatory extensions, standardize and
develop tooling such that the technology can be accessible to a
broader audience of engineers?

 [1] http://www.w3.org/DesignIssues/Evolution.html#PartialUnderstanding

Mark Baker, CTO, Idokorro Mobile (formerly Planetfred)
Ottawa, Ontario, CANADA.               distobj@acm.org
http://www.markbaker.ca        http://www.idokorro.com
Received on Sunday, 11 August 2002 09:32:17 UTC

This archive was generated by hypermail 2.4.0 : Friday, 17 January 2020 23:05:36 UTC