Re: Open Library and RDF

Quoting Emmanuelle Bermes <>:

> Karen, maybe your page at [1] could be improved to explain how these
> standards relate to one another in this layered model; I'm willing to
> help if that's OK with you.

Please do! The fact is that here in the US we don't pay nearly as much  
attention to the international rules as others seem to, and I do not  
feel competent to explain them well in relation to other library rules.

When we were discussing a layered model, I was thinking of layering  
beyond library data, not within the library model, so now I must think  
more about the latter. Thanks for bringing that up.

Also, we might want to add some documents that are commentaries on the  
source documents there. For example, there is the short paper on FRBR  
that Barbara Tillett did, which might help people understand the  
library approach to E-R.

> In this landscape, the FR** models family, and of course RDA, have a
> different status because there is no legacy data that corresponds to them.
> That's why we call them "untested", I guess.

This is just my personal interest, but there are a couple of intriguing  
paragraphs in Ghilli and Guerrini's "Introduzione a FRBR" that follow  
through from Panizzi to Cutter to Lubetzky, and give a good background  
for the concept of Work. It cites a number of other works that I should  
try to find in the University library... it would be good to understand  
how FRBR has its roots in earlier cataloging.

Always more to learn :-)


> From what I learnt here at IFLA, we (this is a general "librarians" we) feel
> reluctant to apply standard structures to data that has not been created
> according to the corresponding rules. For instance, at BnF we can use the
> RDA vocabulary [2] to express bibliographic data in RDF, but since the
> source data has been catalogued following ISBD rules and not RDA rules, it
> will create inconsistencies in the data.
> There is probably a need for an intermediate standard which could ensure the
> transfer of legacy data into the RDF world without introducing these
> inconsistencies (thanks Gordon for enlightening me on that). There is also a
> need to define the new set of standards/rules/guidelines that will ensure
> the same level of quality in the Linked Data world.
> Also, from my (short) experience with ingesting RDF data in library systems,
> we have a need to control exactly what is in the data, one that goes beyond
> checking the logic. Even if we acknowledge the end of the "record" paradigm,
> there will always be a level (call it a graph or whatever) where we will want
> to check in a very detailed way what information has been provided regarding
> a specific resource.
> Coming back to our discussion, providing quality control methods for
> ensuring a level of quality in library data equivalent to what we have today
> seems an important use case to me. These methods could be application
> profiles or maybe something else.
> Promoting the uptake of library standards within the wider Web community is
> another use case, no less important, but different.
> Maybe the technology pieces that we need to achieve these two use cases are
> different (hence the approach of creating unbounded super-class/property
> versions of our models that Gordon has mentioned).
> Emmanuelle
> [1]
> [2]
> --
> =====
> Emmanuelle Bermès -
> Manue -

Karen Coyle
ph: 1-510-540-7596
m: 1-510-435-8234
skype: kcoylenet

Received on Monday, 16 August 2010 15:01:03 UTC