
Re: Z39.50 on the web (and in print)

From: Alan Kent <ajk@mds.rmit.edu.au>
Date: Mon, 25 Feb 2002 10:29:12 +1100
To: www-zig@w3.org
Message-ID: <20020225102912.B6249@io.mds.rmit.edu.au>
Not picking any mail in particular to respond to (wow, where did this
thread come from! :-)...

I agree with the majority of mails going around. So I thought I would
throw in my 2 cents' worth.

I don't think any single protocol is the right way to go. I think there
are two separate problems: the data to be collected, and the technology
to make that data available. (I have actually come to this opinion
after listening to a Digital Libraries seminar locally where the
real cost turned out to be in collecting the data, not the technology.
Then the data had to live beyond a single system as libraries wanted
to keep data around for hundreds of years. Software does not last that
long. So I think it's a good principle to keep the data and its format
separate from the technology using it.)

For the data to be collected, I see something like an XML schema 
describing targets (IP:port, title, description) being a good thing.
Maybe even Dublin Core with an extra element or two (or EAD or
whatever). Extending an existing format means less arguing!
Surely the only extra information needed over other types of
metadata is the Z39.50 host and port.
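To make the idea concrete, a record might look something like the
following sketch. The element names and the z3950 target attributes are
purely illustrative, not a proposed schema; only the Dublin Core
namespace URI is real.

```xml
<!-- Sketch only: Dublin Core elements plus a hypothetical Z39.50 target element -->
<record xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:title>Example Library Catalogue</dc:title>
  <dc:description>A sample Z39.50-searchable catalogue.</dc:description>
  <dc:publisher>Example Library</dc:publisher>
  <!-- the one Z39.50-specific addition: where to connect -->
  <target host="z3950.example.org" port="210" database="Default"/>
</record>
```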

Then there is the issue of how to author and distribute it. I think
this will change over time. I don't think Explain is the solution.
(I don't think any single technology is the solution actually.)
I think a crawler might be able to build up the Z39.50 metadata
record by looking into Explain of a server, but that is a separate
issue. (No Z39.50 Australian site I have found so far supports Explain
by the way.) OAI, UDDI, Z39.50, etc. are all possible ways to send the
data around. I personally don't think this will be solved quickly.
I think people will end up doing what they are able to do (or have
funding to do, etc.).

So I think the first step is to define the XML structure for capturing
the information. Then people can do lots of technology experiments
without having to recollect the data each time. For example, if
IndexData put its list of sites up as an XML document accessible
via HTTP, it would be wonderful for me. I could then easily and
automatically grab a copy, look for the *.au sites, and build the list
of Australian sites that I wanted. I probably won't bother with the
current HTML files because there is no guarantee that they won't
change in format.
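The grab-and-filter step really is that simple once the data is XML.
A minimal sketch, assuming a hypothetical target-list format where each
target carries host, port, and title elements (none of these names are
from any agreed schema):

```python
import xml.etree.ElementTree as ET

# Hypothetical target list, as might be fetched over HTTP.
# Element names are illustrative only, not a real schema.
SAMPLE = """<targets>
  <target><host>amicus.nla.gov.au</host><port>210</port><title>NLA</title></target>
  <target><host>z3950.loc.gov</host><port>7090</port><title>LoC</title></target>
</targets>"""

def australian_targets(xml_text):
    """Return (host, port, title) tuples for hosts under the .au domain."""
    root = ET.fromstring(xml_text)
    results = []
    for t in root.findall('target'):
        host = t.findtext('host', '')
        if host.endswith('.au'):
            results.append((host, int(t.findtext('port', '0')),
                            t.findtext('title', '')))
    return results

print(australian_targets(SAMPLE))
```

Point the same few lines at a live URL instead of SAMPLE and the
Australian list builds itself, with no HTML screen-scraping to break.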

How to manage the data and keep it up to date should, I think, be a
separate issue, as I suspect it will take much longer to agree on.
(I am thinking about union cataloguing etc. - surely all the issues
and politics are quite similar, although the scale is smaller.)

Received on Sunday, 24 February 2002 18:29:51 UTC

This archive was generated by hypermail 2.3.1 : Tuesday, 6 January 2015 20:26:03 UTC