RE: Representing HTTP in arch diagrams

> -----Original Message-----
> From: Mark Baker [mailto:distobj@acm.org]
> Sent: Thursday, September 19, 2002 9:19 PM
> To: Champion, Mike
> Cc: www-ws-arch@w3.org
> Subject: Re: Representing HTTP in arch diagrams
> 
OK, let's look at a use case.  Let's say I want to build a bot that
tracks the scores of my favorite teams in real time and sends me an IM
when something "interesting" happens.  Let's also presume that various
sites (CNN, the specific teams, maybe some local media) make this available
in some machine-processable form over the Web.  How do we proceed?

In the SOAP/WSDL world, one finds the WSDL description of the service (from
UDDI, Google, by word of mouth, or whatever), imports it into any one of
a number of tools, and generates the code to do the actual getCurrentScore()
or whatever invocation.
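
Just to make concrete what those generated stubs boil down to on the wire,
here is a rough Python sketch of a hand-rolled getCurrentScore() call over
SOAP/HTTP POST.  The endpoint URL, namespace, and message shape are all
invented for illustration:

import urllib.request

ENDPOINT = "http://scores.example.com/soap"   # hypothetical service endpoint

# A minimal SOAP 1.1 envelope for a hypothetical getCurrentScore operation.
envelope = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <getCurrentScore xmlns="http://scores.example.com/msgs">
      <sport>football</sport>
      <team>umichigan</team>
    </getCurrentScore>
  </soap:Body>
</soap:Envelope>"""

req = urllib.request.Request(
    ENDPOINT,
    data=envelope.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": '"getCurrentScore"'},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))        # SOAP response with the score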

Naturally it's possible to do this RESTfully:
  GET http://www.mediaconglomorate.com/scores/current/football/college?team=umichigan
[I live in Ann Arbor] or whatever, to get some XML description the bot can
use.  My question is: how does one find out the URL to GET, and how does one
find out the format of the response?
(OK, in WSDL 1.2 it should be as easy as with SOAP today, but I don't think
that's your point).
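
The bot's side of that is trivial once you somehow know both the URL and
the response vocabulary.  Purely for illustration, suppose the response
looks like <game><score home="..." away="..."/></game>; then the client is
just:

import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical URL from above; the response vocabulary below is a pure guess.
URL = ("http://www.mediaconglomorate.com/scores/current/football/college"
       "?team=umichigan")

with urllib.request.urlopen(URL) as resp:
    root = ET.parse(resp).getroot()

# Assumed response shape: <game><score home="..." away="..."/></game>
score = root.find("score")
print(score.get("home"), "-", score.get("away"))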

> A machine can work similarly.  Take the little-known anchor AII "rel",
> which is used to declare the type of link between the current 
> resource,
> and the resource identified by the href value.  If Yahoo had written;
> 
> <a rel="http://terms.example.org/activities/athletic/sports/"
>    href="http://www.yahoo.com/r/ys">Sports</a>
> 
> then a machine that recognized that URI as identifying the notion of
> "sports" could follow the link, if that's what it was looking for.  Of
> course, any app can follow any link, but it's a waste to get stuff you
> don't need over a network; hence the value in declaring the type
> alongside the link.

So how would this work in my example?  MediaConglomorate.com would have a
human-readable page with <a rel="blah" href="blahblah"> elements describing
the semantics of what I'd get by following the href links?  I can see how it
would help a human writing the code, but I can't see how a general tool could do
it.  Sure, the world could define an XML standard for describing the HTTP
GET/POST generation and the format of the result ... but I believe that's
what WSDL is, no? And someday Real Soon Now the "rel" link might point to a
fragment of the Semantic Web that describes what to do, but I want to do
this now.
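
The closest thing to a general tool that I can picture today is something
that scans the HTML for anchors whose rel it already recognizes, roughly
like this Python sketch (the rel URI is the one from your example; the
start page and everything else are assumptions):

import urllib.request
from html.parser import HTMLParser

# The "sports" rel URI from the example above; the start page is an assumption.
SPORTS = "http://terms.example.org/activities/athletic/sports/"

class RelLinkFinder(HTMLParser):
    """Collects hrefs of <a> elements whose rel matches a known type URI."""
    def __init__(self, wanted_rel):
        super().__init__()
        self.wanted_rel = wanted_rel
        self.matches = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            d = dict(attrs)
            if d.get("rel") == self.wanted_rel and "href" in d:
                self.matches.append(d["href"])

with urllib.request.urlopen("http://www.yahoo.com/") as resp:
    page = resp.read().decode("utf-8", errors="replace")

finder = RelLinkFinder(SPORTS)
finder.feed(page)
print(finder.matches)   # links typed as "sports", if the page declares any

But that only tells the tool which link to follow next; it says nothing
about how to construct the request or what comes back, which is the part
I'm claiming needs a WSDL-like description.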

> 
> Description and discovery works the same way with or without humans in
> the loop, as I attempted to describe above.

OK, explain how, in my scenario, this is done without a human reading a page
and writing custom code.  Now explain how it would work for a non-idempotent
operation, e.g., for the software that the sportswriter uses to update the
"scores" database?  I think we (mostly) all agree that WSDL should support
raw HTTP GET/POST here as well as the SOAP RPC design pattern, but you seem
to be saying that this could be done with just HTML descriptions.
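
For the write side, here is the kind of POST I'm picturing the
sportswriter's software doing.  The URL and form fields are invented, and
describing exactly this sort of thing is what I'd want WSDL (or something
like it) for:

import urllib.parse
import urllib.request

# Hypothetical update URL and form fields; no claim that a real site works
# this way.  That description is exactly what's missing.
URL = "http://www.mediaconglomorate.com/scores/current/football/college"

update = urllib.parse.urlencode({
    "team": "umichigan",
    "opponent": "ohiostate",
    "home_score": "21",
    "away_score": "14",
    "quarter": "3",
}).encode("ascii")

req = urllib.request.Request(URL, data=update)   # data present, so this POSTs
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.reason)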

> 
> Perhaps, but likely because they didn't consider how to use forms
> in a machine processable environment.

Well, I suspect that "they" invented SOAP and WSDL because HTTP POST is so
unconstrained that it took a great deal of human communication to describe
the format of the POST and of the data sent back!  I don't doubt that a
deeper understanding of Dr. Fielding's work would have led people in a
somewhat different direction than the early versions of SOAP and WSDL took,
and I agree that the RPC paradigm gets very, very messy when it comes face
to face with the real unreliability/unpredictability/insecurity/anarchy of
the Internet.  But you seem to be implying something much stronger, i.e.,
that nothing *like* SOAP and WSDL is really needed, and I'm trying to figure
out what that something is.
