- From: Dan Brickley <danbri@w3.org>
- Date: Tue, 19 Nov 2002 12:07:45 -0500 (EST)
- To: Mark Baker <distobj@acm.org>
- cc: <www-ws@w3.org>, <mf@w3.org>
On Tue, 19 Nov 2002, Mark Baker wrote:

> On Tue, Nov 19, 2002 at 10:18:44AM -0500, Dan Brickley wrote:
> > Regardless of those details, we still have a huge opportunity here for
> > better characterising the facilities offered by these services. Think of
> > the number of Web sites (olde style HTML based sites, even) that offer
> > ZipCode/Postcode based searches, or ISBN-based lookups, or that can be
> > keyed into by airport codes, stock tickers or other predictable
> > datatypes, value sets etc.
> >
> > My interest is in cataloguing these services, regardless of the mechanics
> > of interaction with them. I suspect that doing so will help make the case
> > for careful use of HTTP GET. Once it is easier to find such services (or
> > 'web sites', as we used to call them) it'll be easier to mechanically
> > generate links into them, which in turn might encourage deployment of
> > GETable interfaces.
>
> Do you have an example?

Sure. http://www.musicbrainz.org/showalbum.html?albumid=575 is a view
into a web site/service that returns an HTML description of the track
listing for a particular album by some recording artist. Somewhere on
musicbrainz.org there's an HTTP/URI/RDF web service that does the same
thing for a machine audience (sorry, I don't have the URI handy). It
wouldn't be hard to set up a SOAP/GET or SOAP/POST view into the same
dataset.

If I did this with GET, e.g. http://example.com/showalbum.soap.xml?albumid=575,
it'd be trivially easy for tools and services elsewhere on the Web to
generate URIs into my service that point straight into the lookup of
that item. If I used POST, it'd be significantly harder for other Web
content and services to reference that record within my SOAP service.

> > ps. do you know any URIs for example services that use the SOAP GET
> > support?
>
> The only one I know of is this;
>
> http://soap.4s4c.com/registration/rounds/
> http://soap.4s4c.com/registration/tests/
> http://soap.4s4c.com/registration/toolkits/

Thanks, that's handy.
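To make the GET case concrete, here is a minimal Python sketch of the kind of third-party link generation I have in mind. The example.com endpoint is the hypothetical one above, not a real service:

```python
# With a GETable interface, any third-party tool can mint a link
# straight into one record of the service just by encoding parameters
# into a URL. The BASE endpoint is hypothetical (from the message
# above), not a deployed service.
from urllib.parse import urlencode

BASE = 'http://example.com/showalbum.soap.xml'

def album_uri(album_id: int) -> str:
    """Return a directly linkable (GETable) URI for one album lookup."""
    return BASE + '?' + urlencode({'albumid': album_id})

print(album_uri(575))
# http://example.com/showalbum.soap.xml?albumid=575
```

The POST alternative has no such URI to hand around: the record is only reachable by constructing and sending an envelope, which is exactly why other Web content can't easily reference it.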
Rather nice in fact; these are hypertext documents:

http://soap.4s4c.com/registration/tests/

[[
<?xml version='1.0'?>
<s:Envelope xmlns:s='http://www.w3.org/2002/06/soap-envelope'>
 <s:Body>
  <q:Tests xmlns:q='http://www.pocketsoap.com/registration/12/'
           xmlns:x='http://www.w3.org/1999/xlink'>
   <q:Test x:href='b0f31a20-6146-462a-b3bd-6f3628a96683/' x:title='Round 2 Base' />
   <q:Test x:href='dd1e5b65-d8eb-4fd8-83e9-b3b40141bcd4/' x:title='Round 2 Group B' />
   <q:Test x:href='172bd219-6c6a-410c-8160-9118fe73c52e/' x:title='Round 2 Group C' />
   <q:Test x:href='6c1160c4-9d73-4c21-b745-7c4dcef13bbc/' x:title='Round 3 Group D Compound 1' />
   <q:Test x:href='0604a15f-2344-4d85-a1c6-fd25e03df4f0/' x:title='Round 3 Group D Compound 2' />
   <!-- .... -->
  </q:Tests>
 </s:Body>
</s:Envelope>
]]

Expanding and dereferencing these xlinks gives us pointers back into the
service/site, such as

http://soap.4s4c.com/registration/tests/b0f31a20-6146-462a-b3bd-6f3628a96683/

[[
<s:Envelope xmlns:s='http://www.w3.org/2002/06/soap-envelope'>
 <s:Body>
  <q:Test xmlns:q='http://www.pocketsoap.com/registration/12/'
          xmlns:x='http://www.w3.org/1999/xlink'>
   <q:Name>Round 2 Base</q:Name>
   <q:Description>SOAP testing, rpc/encoded echo* tests for base types, arrays and structs</q:Description>
   <q:Wsdl x:href='http://www.whitemesa.com/interop/InteropTest.wsdl'/>
   <q:Documentation x:href='http://www.whitemesa.com/interop.htm'/>
   <q:ClientResults x:href='clientresults/'/>
   <q:Implementations x:href='implementations/'/>
  </q:Test>
 </s:Body>
</s:Envelope>
]]

In turn, following one of these links,

http://soap.4s4c.com/registration/tests/b0f31a20-6146-462a-b3bd-6f3628a96683/implementations/

[[
<?xml version='1.0'?>
<s:Envelope xmlns:s='http://www.w3.org/2002/06/soap-envelope'>
 <s:Body>
  <q:Implementations xmlns:q='http://www.pocketsoap.com/registration/12/'
                     xmlns:x='http://www.w3.org/1999/xlink'>
   <q:Implementation x:href='bc6ba864-7849-49ab-8dbe-be8509267305'/>
   <q:Implementation x:href='3a287757-5e57-4f6c-9db2-9514ac28bd56'/>
   <q:Implementation x:href='801ed897-91cc-470c-960a-89fab2440772'/>
   <!-- ... -->
  </q:Implementations>
 </s:Body>
</s:Envelope>
]]

...and taking, for example, one of these links:

http://soap.4s4c.com/registration/tests/b0f31a20-6146-462a-b3bd-6f3628a96683/implementations/3a287757-5e57-4f6c-9db2-9514ac28bd56

we get:

[[
<?xml version='1.0'?>
<s:Envelope xmlns:s='http://www.w3.org/2002/06/soap-envelope'>
 <s:Body>
  <q:Implementation xmlns:q='http://www.pocketsoap.com/registration/12/'
                    xmlns:x='http://www.w3.org/1999/xlink'>
   <q:Toolkit x:href='/registration/toolkits/8c402870-e13a-4c6c-9089-b73d395e6bb1'/>
   <q:SoapEndpoint x:href='http://soap.4s4c.com/ilab2/soap.asp'/>
   <q:Wsdl x:href='http://soap.4s4c.com/ilab2/ilab.wsdl'/>
   <q:TestedBy></q:TestedBy>
  </q:Implementation>
 </s:Body>
</s:Envelope>
]]

It's interesting to look at my HTTP-GET traversal of the links between
these 4 documents (each of which is a simple lookup/retrieval along the
lines of my original question here). You can think of it either in terms
of a Web Service client making repeated requests of a remote database,
or in terms of a classic Web client browsing around an interlinked set
of documents. Same thing, different views.

A few observations:

* on my Windows 2000 laptop, browsing with Opera, IE 5.5 and Mozilla
  1.2, I had to make constant use of 'view source' and external helper
  apps (text editors!) to navigate between these documents. This feels
  like a retrograde step: following three links took me several minutes
  instead of several seconds.

* It shows that there is no crisp distinction between the 'machine web'
  and the 'human web'. Both machines and people could make use of
  documents that provide the information we see above.

* There's no obvious way (since PIs are forbidden) to exploit desktop
  support for HTML, hyperlinks and XSLT via a stylesheet reference.
  Even forgetting end users, this seems regrettable from a deployment
  point of view.
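The traversal above can be sketched as a tiny link-extracting client in Python. This is illustrative only: the network fetch is stubbed out with (an abbreviated copy of) the q:Tests document quoted earlier, and a real client would retrieve each URI with an HTTP GET:

```python
# A sketch of treating a SOAP-GETtable service as a web of linked
# documents: pull out every xlink:href and resolve it against the
# document's own URI, exactly as a browser resolves relative links.
import xml.etree.ElementTree as ET
from urllib.parse import urljoin

XLINK_HREF = '{http://www.w3.org/1999/xlink}href'

def extract_links(base_uri: str, document: str) -> list[str]:
    """Return absolute URIs for every xlink:href in the document."""
    root = ET.fromstring(document)
    return [urljoin(base_uri, el.get(XLINK_HREF))
            for el in root.iter()
            if el.get(XLINK_HREF) is not None]

# Abbreviated copy of the q:Tests document above, standing in for an
# actual HTTP GET of its URI.
TESTS_URI = 'http://soap.4s4c.com/registration/tests/'
TESTS_DOC = """<?xml version='1.0'?>
<s:Envelope xmlns:s='http://www.w3.org/2002/06/soap-envelope'><s:Body>
 <q:Tests xmlns:q='http://www.pocketsoap.com/registration/12/'
          xmlns:x='http://www.w3.org/1999/xlink'>
  <q:Test x:href='b0f31a20-6146-462a-b3bd-6f3628a96683/' x:title='Round 2 Base'/>
 </q:Tests>
</s:Body></s:Envelope>"""

for uri in extract_links(TESTS_URI, TESTS_DOC):
    print(uri)
# http://soap.4s4c.com/registration/tests/b0f31a20-6146-462a-b3bd-6f3628a96683/
```

Repeating this fetch-and-extract step on each resulting URI walks the whole four-document chain; an old-fashioned link checker does essentially the same thing.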
  As a Web Service developer, I would _love_ to be able to help debug
  the paths through services such as those above by clicking on links
  in a browser. Or running an old-fashioned link checker, for that
  matter. I might want a fancy IDE as well, but having the ability to
  use XSLT'd views of the SOAP-GETtable content seems a useful way for
  folk to get to grips with deploying SOAP services.

* describing the legitimate paths through an information-retrieval-based
  Web Service can be seen as a variant on an old theme. State diagrams
  for describing Web Service interactions are just sitemaps, showing
  paths through a Web of linked documents; at least in the case where
  the lookups each have distinct GETable URIs. (OK, I may be
  exaggerating this point, but the sitemaps analogy feels close to some
  truth.)

> > It'd be interesting to experiment with use of a stylesheet PI in
> > these, for linking to XSLT-based UI for web services. I understand SOAP
> > currently discourages this, but I'd still be interested in seeing whether
> > it's a useful technique...
>
> I think it would be hugely useful, but the WG already rejected the
> requests (including mine) to reintroduce PIs for this purpose.

Yup, though a lot of folk beyond XMLP dislike PIs, and the use of PIs
as a stylesheet referencing mechanism in particular. If we can't use
stylesheet PIs, it'd be good to have another agreed way of doing this
(for Web Services and in general). Another option might be for the SOAP
spec to tolerate PIs in the specific case of stylesheet references,
since there is client-side browser support for that technique of
providing stylesheet location information, and it'd help with making
SOAP services easy to debug, test etc.

Maybe someone might prototype a demo based on the examples above...?

cheers,

Dan

--
mailto:danbri@w3.org
http://www.w3.org/People/DanBri/
Received on Tuesday, 19 November 2002 12:07:52 UTC