- From: Ian Ibbotson <ian.ibbotson@fdgroup.com>
- Date: Tue, 21 Nov 2000 12:19:04 +0000
- To: www-zig@w3.org
- CC: Matthew Dovey <matthew.dovey@LAS.OX.AC.UK>, "'Sebastian Hammer'" <quinn@INDEXDATA.DK>
<unlurk> Hi all. Dumb question time: is there anywhere we can get descriptions of what people are actually using Explain for currently, and what the proposed usage would be?

Hand on heart, I think I (at FD) use it much more as a support/debugging tool, to try and figure out why some target is behaving the way it is, than as a means of populating our query-rewriting database. On the whole, that rule database becomes hand-tweaked to such an extent that it ends up looking almost nothing like the original description of the server anyway (assuming the server supports Explain in the first place). Of course, this multi-target search situation is slightly different from a single origin-target pair: we rewrite a given query to match the capabilities of the selected targets, instead of only allowing users to select valid attribute combinations.

This rather begs the question: what is the most common profile for future usage going to be (single- or multiple-target searching), and are we going to try to address interoperability problems with brute force (imho the Bath Profile, and for sure you shouldn't take that as the FD line on the Bath Profile), or with a desire to support intelligent origins that are able to make adaptive decisions based on what they know about a target and the user?

Anyway, I'm just trying to get a handle on why the XML/init approach is so desirable. What would the real difference be between a lightweight XML explain record carried in init messages and an immediate initial search of the explain database, using XML as a record syntax, against some "brief" category? Is it just that some people want to use XML, and stuffing it in the init service is easier than adding an extra search to your origin initialization process, or is there some much more subtle reasoning that I'm missing?
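(For concreteness, here is a sketch of the kind of "brief" record I have in mind. The element names below are invented for illustration, not taken from any actual Explain profile; the point is only that a payload this small could equally be carried in an init exchange or fetched by a single search against the explain database with XML as the record syntax.)

```python
# Hedged sketch: a hypothetical "brief" XML explain record and the minimal
# origin-side parsing of it. Element names are invented for illustration.
import xml.etree.ElementTree as ET

BRIEF_EXPLAIN = """\
<targetInfo>
  <name>Example Target</name>
  <host>z3950.example.org</host>
  <port>210</port>
  <database>Default</database>
  <supports>
    <attributeSet>bib-1</attributeSet>
    <use>4</use>    <!-- title -->
    <use>1003</use> <!-- author -->
  </supports>
</targetInfo>
"""

def parse_brief_explain(xml_text):
    """Pull the minimal facts an origin needs out of a brief explain record."""
    root = ET.fromstring(xml_text)
    return {
        "name": root.findtext("name"),
        "host": root.findtext("host"),
        "port": int(root.findtext("port")),
        "use_attributes": [int(u.text) for u in root.iter("use")],
    }

info = parse_brief_explain(BRIEF_EXPLAIN)
print(info["host"], info["use_attributes"])
```

Whether that blob arrives in an init response or as a search result, the origin-side processing is identical, which is why I wonder what the init route actually buys us.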
Just out of interest, in terms of LDAP: I once had a go at changing the yaz command-line client so that instead of starting it with "yaz-client tcp:host:port" you could start it with "yaz-client ldap:server:DN". My idea was that if you are deploying organisation-wide retrieval systems, you only want one place to maintain the list of what targets you have available. Trouble is, after taking the first easy step of LDAP-enabling a Z39.50 client, you run aground trying to create a widely applicable taxonomy of targets.

Also out of interest, exactly who are we concerned about being "put off" by this "introspective" attitude?

just my 2p, Ian.

Matthew Dovey wrote:
> > As a client developer, I am not mostly
> > interested in getting
> > Explain-anything info via HTTP or LDAP - my clients are all
> > exceptionally
> > good at Z39.50 already. It works for me.
>
> No offence, Sebastian, but I think it is precisely that sort of
> introspective attitude that puts people off looking at Z39.50. Hence the
> fact I keep harping on about positioning Z39.50 amongst the whole range
> of client/server standards.
>
> Matthew
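(The "easy first step" of the yaz-client experiment described above amounts to teaching the client a second address scheme. A minimal sketch, with the two scheme names taken from the email and everything else hypothetical; the actual directory lookup is stubbed out, since the hard part, as noted, is the taxonomy, not the plumbing.)

```python
# Hedged sketch: accept either "tcp:host:port" or "ldap:server:DN" as a
# target address. Scheme names follow the email; field names are invented.
def parse_target_address(addr):
    scheme, _, rest = addr.partition(":")
    if scheme == "tcp":
        # Direct connection: host and optional port (Z39.50's registered
        # port is 210, used here as a default).
        host, _, port = rest.partition(":")
        return {"scheme": "tcp", "host": host, "port": int(port or 210)}
    if scheme == "ldap":
        # Indirect: look the target up under a DN on a directory server.
        # Only the first ":" separates server from DN, since DNs contain
        # commas and "=" but not (normally) a bare colon.
        server, _, dn = rest.partition(":")
        return {"scheme": "ldap", "server": server, "dn": dn}
    raise ValueError("unknown address scheme: %s" % scheme)

print(parse_target_address("tcp:z3950.example.org:7090"))
print(parse_target_address("ldap:dir.example.org:cn=targets,o=FD"))
```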
Received on Tuesday, 21 November 2000 07:23:27 UTC