
Re: Open Library and RDF

From: Karen Coyle <kcoyle@kcoyle.net>
Date: Mon, 16 Aug 2010 10:53:28 -0700
Message-ID: <20100816105328.hivi830y8gcso0k0@kcoyle.net>
To: Thomas Baker <tbaker@tbaker.de>
Cc: public-lld@w3.org
Quoting Thomas Baker <tbaker@tbaker.de>:


> I'm not convinced that the missing counterpart from the
> modeling world need come to the task with deep knowledge about
> minutiae.


No, of course not; not the minutiae, but the general principles and  
some of the common vocabulary. Believe me, the minutiae are quite  
minute! So one should understand what "work" means in library terms,  
the principles of authority and authority control, content and  
carrier, and many others. Unfortunately, I don't know of a text that  
lays these all out neatly, but what we can do is keep adding useful  
readings to our wiki page.


> In one corner, catalogers, with a deep understanding of their
> conceptual models, but little or no training in Semantic Web
> modeling per se, little understanding of software development,
> and little budget.  In the other corner, systems people,
> considerably younger in average age and experience, often
> oriented heavily to APIs and to ad-hoc data models for solving
> a problem at hand, likewise with little training in Semantic
> Web modeling, and with little motivation to make extra work
> for themselves by pushing the issue of data interoperability,
> beyond the task at hand, on their own initiative.  I have
> the impression that there are precious few "data modelers"
> (in a Semantic Web sense) involved in the process at all.
> And I'm not getting the sense that the catalogers are
> articulating a strong requirement for interoperability on a
> Semantic Web basis.  Result: requirements are defined for a
> data silo, and programmers deliver a silo.

You left out a corner, and possibly the most important one: the  
library systems market. Libraries do not create software for  
cataloging -- that is provided by library systems, from the ginormous  
(OCLC) to the tiny (PC programs for small libraries). Most libraries  
do not have programmers (some don't even have systems people). The  
libraries that do have programmers tend to be academic and research  
libraries, but even so those programmers do not create or modify the  
cataloging systems. Those systems are supplied by library vendors.  
Those vendors run on a very tight margin (libraries are hardly big  
spenders, and most are having their budgets cut on a yearly basis).  
They will only create systems that can satisfy a large percentage of  
their customers, hence the dependence on STANDARDS. (In fact, some  
countries are at this moment moving to MARC as their data standard  
because, as the predominant standard, it makes purchasing systems  
easier.) So, even if library catalogers  
and library systems thinkers come to an agreement about new  
directions, the actual implementation needs to involve the systems  
providers. The latter have to mind their bottom line and are not  
particularly interested in experimentation. They also have a large  
installed base that does not have the $$ to upgrade in a timely  
manner. (Every ten years or so is A LOT for significant changes, and  
some poorer libraries hang on to their system versions for even  
longer.) So the whole network of library metadata has some heavy  
built-in inertia. Systems vendors will not make a move until they have  
some assurance of return on investment.

Some of us have talked about a possible workflow that would allow  
libraries to at least experiment with linked data in the near future.  
We have to assume that for some time libraries will continue to create  
MARC (or MARC-like) data, and that their systems will only be able to  
import and export such data. Thus you see the approach taken by XC  
(http://extensiblecatalog.org) of managing data outside of the library  
system. That only takes us so far, however, because MARC lacks much of  
what is needed for linked data (identifiers, predicates, etc.). So we  
need another step that serves to enhance the data so that we can apply  
LD principles and functionality. The next step would be to provide  
ways to easily share these enhancements as widely and openly as  
possible, so that libraries can begin to reap some of the benefits of  
linked data.
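To make the "enhancement" step concrete, here is a minimal sketch in Python. Everything in it is illustrative: the record shape, the minted-URI scheme, and the helper names (mint_uri, enhance) are assumptions for the sake of the example, not any existing system's API. The Dublin Core term URIs are real, but the mapping shown is just one possible choice.

```python
# Sketch of the enhancement step: take a MARC-like record (fields
# keyed by tag) and mint identifiers so the data can be expressed as
# linked-data triples.  The record, URI scheme, and helper functions
# are hypothetical illustrations.

import hashlib

# Hypothetical base namespace for minted identifiers.
BASE = "http://example.org/id/"


def mint_uri(kind: str, value: str) -> str:
    """Mint a stable URI for a string value (e.g. an author heading)."""
    digest = hashlib.sha1(value.encode("utf-8")).hexdigest()[:8]
    return f"{BASE}{kind}/{digest}"


def enhance(marc_record: dict) -> list[tuple[str, str, str]]:
    """Turn a MARC-like record into subject/predicate/object triples."""
    subject = mint_uri("record", marc_record["001"])  # control number
    triples = []
    if "245" in marc_record:  # title statement
        triples.append((subject,
                        "http://purl.org/dc/terms/title",
                        marc_record["245"]))
    if "100" in marc_record:  # main entry, personal name
        # The key move: replace the text string with an identifier
        # that other libraries' data can also point to.
        author_uri = mint_uri("person", marc_record["100"])
        triples.append((subject,
                        "http://purl.org/dc/terms/creator",
                        author_uri))
    return triples


record = {"001": "ocm12345",
          "245": "Linked data for libraries",
          "100": "Coyle, Karen"}

for s, p, o in enhance(record):
    print(s, p, o)
```

The point of the sketch is the middle step: the MARC record itself stays untouched inside the library system, while the enhanced triples (with minted, shareable identifiers) live alongside it and can be published or exchanged.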

One possible scenario then for the mid-to-long-term would be that we  
find a way to keep library systems stable (since they are needed to  
manage the libraries) by re-defining library metadata into management  
data and discovery data. This is just a vague idea in my mind at the  
moment and may be entirely untenable, but it is probably worth  
thinking through.

kc

>
> Discuss... :-)
>
> Tom
>
> --
> Thomas Baker <tbaker@tbaker.de>



-- 
Karen Coyle
kcoyle@kcoyle.net http://kcoyle.net
ph: 1-510-540-7596
m: 1-510-435-8234
skype: kcoylenet
Received on Monday, 16 August 2010 17:54:02 UTC

This archive was generated by hypermail 2.3.1 : Tuesday, 6 January 2015 20:27:37 UTC