Re: Z39.50 on the web (and in print)

Sebastian,

I agree - I can see a business case like you outline for several different
types of organizations, including ILS vendors (like us), to have such a
resource - both as a service we could offer our customers and as a
value-added service we could use to enhance the data we provide to them. You
can imagine a situation where we offered a service that basically says:
send us your query and we will figure out the best places to send it to
get you the appropriate results.

But for this to work and be interoperable we definitely need (as you
suggest) a good structure for defining resources - one that contains a
rich enough set of data elements to describe each resource (beyond IP
address, port, and database name) - so we know where to go to search for
different types of data.
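
Just to make that concrete, here is roughly the shape of record I have in
mind - the field names and example values below are purely illustrative,
not a proposal for an actual schema:

    # Hypothetical target-description record. Field names are invented for
    # illustration; a real schema would presumably grow out of Explain.
    from dataclasses import dataclass, field

    @dataclass
    class TargetDescription:
        host: str                     # e.g. "z3950.example.org"
        port: int = 210
        database: str = "Default"
        title: str = ""               # human-readable name of the resource
        subjects: list = field(default_factory=list)        # broad content areas
        record_syntaxes: list = field(default_factory=list) # e.g. ["USMARC", "SUTRS"]
        use_attributes: list = field(default_factory=list)  # searchable Bib-1 use attributes
        languages: list = field(default_factory=list)
        maintainer: str = ""          # contact for stale or broken entries

    # A made-up example entry:
    example = TargetDescription(
        host="z3950.example.org", port=210, database="books",
        title="Example union catalogue",
        subjects=["bibliographic"],
        record_syntaxes=["USMARC"],
        use_attributes=[4, 7, 1003],  # title, ISBN, author
    )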

I probably was at the meeting you refer to - but perhaps the lack of
oxygen got to me, because I don't recall that discussion.

mark


On Fri, 22 Feb 2002, Sebastian Hammer wrote:

> Ok,
> 
> >I would be interested in hearing your reservations - my own thoughts on
> >this service are that it's not a bad idea but also not terribly useful
> 
> To recap for those who weren't there, the discussion began like this... 
> It's too bad that there's no easy way to find useful servers, and 
> information about them, other than the couple of semi-manually maintained 
> lists offered by different companies. Explain 
> (classic/lite/cherry/whatever) will theoretically provide useful metadata 
> about a server, including its search fields, content description, etc. - but 
> how do we *find* the servers? Why can't we go to Google and find the right 
> server to answer our query, for instance? The answer was: Google and 
> virtually any other automatic web-harvester works by crawling around from 
> document to document, following the hyperlinks to discover new resources. 
> Why can't we do that for Z39.50 servers? Because they don't point at 
> each other. And then the big question: well, why don't we *make* them point 
> at each other? Thus was born the "Friends & Neighbours service", which would 
> allow any Z39.50 server, on request, to return a list of other targets 
> relevant to it, sympathetic to it, or just plain known to it. 
> Theoretically, at least, it would not be necessary to provide anything 
> other than host/port coordinates for your friends and neighbours, because 
> they would be able to provide their own, up-to-date metadata about 
> themselves using Explain (or whatever).
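> 
> In pseudo-code terms, a harvester built on top of F&N would just be a
> breadth-first crawl out from a handful of seed servers. A minimal sketch,
> assuming two hypothetical helpers (fetch_friends and fetch_explain stand
> in for whatever the real F&N and Explain exchanges would look like -
> neither is an existing API):
> 
>     from collections import deque
> 
>     def harvest(seeds, fetch_friends, fetch_explain):
>         """Walk the friends-and-neighbours graph, collecting Explain metadata."""
>         seen = set(seeds)
>         queue = deque(seeds)              # seeds are (host, port) coordinates only
>         catalogue = {}
>         while queue:
>             host, port = queue.popleft()
>             try:
>                 catalogue[(host, port)] = fetch_explain(host, port)
>                 neighbours = fetch_friends(host, port)
>             except OSError:
>                 continue                  # dead or unreachable target
>             for coord in neighbours:
>                 if coord not in seen:
>                     seen.add(coord)
>                     queue.append(coord)
>         return catalogue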
> 
> A lot of us actually got really enthusiastic about this, and there was an 
> almost giddy atmosphere in the room at the time (although that may also 
> have been due to oxygen deprivation on the last day of the meeting).
> 
> But after I came home from the meeting, my main misgiving popped up 
> again... the web-like simplicity of the F&N model was deceptively 
> appealing, but let's not forget that the hyperlinks connecting 
> web-documents are an intrinsic part of the web itself. It wouldn't be the 
> web without them, and it is in every document author's natural interest to 
> provide interesting, up-to-date links in his documents (well, kind of). 
> There is a natural business case for people to maintain the hyperlinks that 
> are the fodder of web-crawlers like Google.
> 
> Take the proposed F&N service, now. What is the business case for a server 
> owner to implement F&N, much less keep an up-to-date list of other servers? 
> Exactly zero. By definition, he already knows his friends and neighbours. 
> It may be that national agencies (like the LOC) might see a point in 
> offering a F&N service as part of their Z server... but surely the average 
> library or public office couldn't care less. The result would very easily be a 
> few sparsely populated islands of servers grouped by project, consortium, 
> or software base which point to each other -- sometimes. There'd quickly be a 
> whole host of dead links and, worse, links to irrelevant servers or test 
> systems.
> 
> So... I agree with your analysis... it's not a bad idea, but not very 
> useful... and, I would contend, not worth our time to design, much less 
> implement.
> 
> So what COULD work? Well, I see at least two different sources of reliable 
> target databases with varying levels of quality. One, national bodies 
> interested in library interoperability (something for which there *is* a 
> business case) have an interest in maintaining up-to-date lists of 
> important Z39.50 servers. Second, companies like ourselves and BookWhere 
> have an interest in maintaining lists to serve our own needs, or those of 
> our clients. Three ("I see at least *three* different sources!"), consortia 
> and development projects, maybe even LIS vendors have an interest in 
> providing these lists.
> 
> So there are in fact a large number of lists to draw on. What I'd like to 
> consider is whether it would be feasible to build a structure in which 
> these lists could be merged or cross-searched... one of the key elements, 
> surely, would be a good schema for describing targets; a second would be some 
> mechanism for organising a virtual union catalogue (Z39.50, LDAP, and OAI are 
> readily available technologies that come to mind).
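> 
> The merging itself is almost the trivial part. A rough sketch, assuming
> each maintainer can hand us their list as a set of simple records (the
> format and field names here are made up - the hard problem is agreeing on
> the target-description schema, not the plumbing):
> 
>     # Merge target lists from several maintainers, keyed on (host, port, database).
>     # Later sources win on conflicting fields; timestamps could refine that.
>     def merge_target_lists(*sources):
>         merged = {}
>         for source in sources:            # each source: iterable of dicts describing a target
>             for record in source:
>                 key = (record["host"],
>                        record.get("port", 210),
>                        record.get("database", "Default"))
>                 merged.setdefault(key, {}).update(record)
>         return list(merged.values())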
> 
> --Sebastian
> 
> At 08:20 22-02-2002 -0600, Mark Needleman - DRA wrote:
> >Sebastian
> >
> >I would be interested in hearing your reservations - my own thoughts on
> >this service are that it's not a bad idea but also not terribly useful
> >unless there is some mechanism to include enough information about the
> >neighbor servers so that the client can make some intelligent decisions
> >about whether it's useful to go to them - and then the question becomes
> >what is enough information before we just encode all of Explain in those
> >returned records
> >
> >mark
> >
> >
> >On Fri, 22 Feb 2002, Sebastian Hammer wrote:
> >
> > > At 12:43 22-02-2002 +0000, Robert Sanderson wrote:
> > >
> > > >To go back to the original idea, what is needed is actually an explain
> > > >harvester/cross searcher so there's one server to go to that can find
> > > >others based on their explain information.
> > >
> > > Mark Hinnebusch asked me this in response to my original mail, so I'll
> > > pass it on...
> > >
> > > Would this be the "friends and neighbours" service that was discussed with
> > > some enthusiasm at a meeting a year or so ago? The idea was that a server
> > > would be able to return a list of "friends and neighbours", for instance,
> > > other members of its consortium, other national servers, etc. At that time,
> > > Explain was unpopular, so the idea was to do it in an XML structure
> > > returned on the Init, as I recall.
> > >
> > > As popular as the idea seemed at the time, I have developed my own
> > > reservations about this, but what do others think?
> > >
> > > --Sebastian
> > > --
> > > Sebastian Hammer, Index Data <http://www.indexdata.dk/>
> > > Ph: +45 3341 0100, Fax: +45 3341 0101
> > >
> > >
> 
> --
> Sebastian Hammer, Index Data <http://www.indexdata.dk/>
> Ph: +45 3341 0100, Fax: +45 3341 0101
> 
> 

Received on Friday, 22 February 2002 10:04:53 UTC