RE: A Priori Information (Was Snapshot of Web Services Glossary )

From: Champion, Mike <Mike.Champion@SoftwareAG-USA.com>
Date: Thu, 27 Feb 2003 07:58:16 -0700
Message-ID: <9A4FC925410C024792B85198DF1E97E405173DD1@usmsg03.sagus.com>
To: www-ws-arch@w3.org

> -----Original Message-----
> From: David Orchard [mailto:dorchard@bea.com]
> Sent: Thursday, February 27, 2003 12:59 AM
> To: 'Mahan Michael (NRC/Boston)'; 'ext Cutler, Roger (RogerCutler)';
> 'Assaf Arkin'; 'Hugo Haas'
> Cc: 'David Booth'; www-ws-arch@w3.org; 'Mark Baker'
> Subject: RE: A Priori Information (Was Snapshot of Web Services Glossary)

> My guess is that any discussion around a priori has to focus on what
> knowledge classifies as a priori, and what doesn't.

I have the same issue.  Here's what the charter says:

"The framework proposed must support the kind of extensibility actually seen
on the Web: disparity of document formats and protocols used to communicate,
mixing of XML vocabularies using XML namespaces, development of solutions in
a distributed environment without a central authority, etc. In particular,
it must support distributed extensibility, without third party agreement,
where the communicating parties do not have a priori knowledge of each
other."
At charter/requirements time, I thought this was basically talking about
discovery, e.g. with Google and/or UDDI. Given a URI (as in the case of
Google and a RESTful read-only web service, along with some convention such
as RDDL for getting the schema/semantics associated with a URI) or a service
registry that would supply this type of information in a more
SOAP/WSDL-friendly way, a WS consumer could find and invoke a service
without some specific agreement with the supplier.  ("Third party" seems a
bit vague, but I interpret that to mean machine-machine discovery without
getting humans involved for each relationship.)
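To make the RDDL convention above concrete, here is a minimal sketch of that discovery step: a consumer GETs the RDDL document behind a namespace URI and picks out the related resources (schema, documentation, etc.) by their stated purpose. The RDDL and XLink namespace URIs are the real ones; the document is an inline sample rather than a live HTTP GET, and the hrefs are made up for illustration.

```python
# Sketch: extract related resources from a RDDL document, the way a
# consumer might locate the schema/semantics behind a namespace URI.
import xml.etree.ElementTree as ET

RDDL = "http://www.rddl.org/"
XLINK = "http://www.w3.org/1999/xlink"

# Inline sample standing in for the result of a GET on the URI;
# the hrefs are hypothetical.
sample = """<div xmlns:rddl="http://www.rddl.org/"
                 xmlns:xlink="http://www.w3.org/1999/xlink">
  <rddl:resource xlink:href="example.xsd"
                 xlink:role="http://www.w3.org/2001/XMLSchema"
                 xlink:arcrole="http://www.rddl.org/purposes#schema-validation"/>
  <rddl:resource xlink:href="example.html"
                 xlink:arcrole="http://www.rddl.org/purposes#reference"/>
</div>"""

def related_resources(rddl_doc: str) -> dict:
    """Map each resource's purpose (xlink:arcrole) to its href."""
    root = ET.fromstring(rddl_doc)
    out = {}
    for res in root.iter(f"{{{RDDL}}}resource"):
        purpose = res.get(f"{{{XLINK}}}arcrole", "")
        out[purpose] = res.get(f"{{{XLINK}}}href")
    return out

resources = related_resources(sample)
print(resources["http://www.rddl.org/purposes#schema-validation"])  # example.xsd
```

A service registry would supply the same kind of pointer in a more SOAP/WSDL-friendly shape, but the idea is identical: the consumer learns where the schema lives without a per-relationship human agreement.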

I don't think we should spend too much time deconstructing this as a
hard-and-fast requirement.  Clearly the W3C chartered this working group to
analyze the architecture of web services as they existed and do what we
could to leverage the Web and reconcile WS practice with Web practice.  I
think we've made some good progress along those lines.  We should make sure
the architecture allows what people do with the hypertext web (e.g. GET a
schema, WSDL file, RDF network, or whatever) but not insist on it, because
that approach simply hasn't been proven to work for machine-machine
communication without getting those nasty humans involved.
Received on Thursday, 27 February 2003 09:58:50 UTC
