Re: Open Library and RDF

> My question is whether the FR and RDA process is considering

> that some of the desired precision might be defined not in
> the underlying vocabularies, but in application profiles that
> use those vocabularies.  An approach which pushes some of the
> precision into application profiles could provide flexibility
> without sacrificing rigor.  Are application profiles (possibly
> under a different name) an important part of the discussion?

From what I've heard this week, I think they are.

Cataloguing is built on a broader set of constraints than can be embedded in
ontology semantics. It is a layered model that includes general cataloguing
principles, content rules such as ISBD and AACR, guidelines that explain
(sometimes at a national level) how to apply those rules, vocabularies and
element sets, and data structures such as MARC.

Karen, maybe your page at [1] could be improved to explain how these
standards relate to one another in this layered model; I'm willing to
help if that's OK with you.

In this landscape, the FR** family of models, and of course RDA, have a
different status because there is no legacy data that corresponds to them.
That's why we call them "untested", I guess.

From what I learnt here at IFLA, we (this is a general "librarians" we) feel
reluctant to apply standard structures to data that has not been created
according to the corresponding rules. For instance, at BnF we can use the
RDA vocabulary [2] to express bibliographic data in RDF, but since the
source data has been catalogued following ISBD rules rather than RDA rules,
doing so will introduce inconsistencies in the data.
There is probably a need for an intermediate standard that could ensure the
transfer of legacy data into the RDF world without introducing these
inconsistencies (thanks Gordon for enlightening me on that). There is also a
need to define the new set of standards/rules/guidelines that will ensure
the same level of quality in the Linked Data world.
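
To make the mismatch concrete, here is a rough sketch in Python (with rdflib)
of what expressing such legacy data with an RDA-style property could look
like; the namespaces and property names are placeholders invented for the
example, not the actual URIs from the registry at [2]:

    # Sketch: a record transcribed under ISBD rules, re-expressed in RDF
    # with an RDA-style element. Namespaces and properties are illustrative
    # placeholders, not the actual RDA registry URIs [2].
    from rdflib import Graph, Literal, Namespace

    RDA = Namespace("http://example.org/rda/elements/")   # placeholder
    BNF = Namespace("http://example.org/bnf/resource/")   # placeholder

    g = Graph()
    g.bind("rda", RDA)

    manifestation = BNF["ark-12345"]                      # invented identifier
    # These values were transcribed under ISBD, not RDA, rules: reusing an
    # RDA property for them is exactly the kind of mismatch described above.
    g.add((manifestation, RDA.titleProper, Literal("Les misérables")))
    g.add((manifestation, RDA.placeOfPublication, Literal("Paris")))

    print(g.serialize(format="turtle"))

The triples themselves are perfectly valid RDF; the inconsistency is not a
logical one, it comes from the gap between the rules the values were created
under and the rules the vocabulary assumes.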

Also, from my (short) experience with ingesting RDF data in library systems,
we need to control exactly what is in the data, in a way that goes beyond
checking its logical consistency. Even if we acknowledge the end of the
"record" paradigm, there will always be a level (call it a graph or whatever)
at which we will want to check in great detail what information has been
provided about a specific resource.
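
To illustrate the kind of detailed check I mean, here is a minimal sketch
(again Python/rdflib, with an invented profile rather than an actual one) of
verifying which properties have been provided for a given resource, and how
many times:

    # Sketch of an application-profile-style check on one resource's
    # description: required properties and cardinality. An OWL reasoner
    # would not flag a missing title; a profile check like this does.
    # The profile below is invented for the example.
    from rdflib import Namespace

    RDA = Namespace("http://example.org/rda/elements/")   # placeholder

    PROFILE = {
        RDA.titleProper: (1, 1),            # exactly one
        RDA.placeOfPublication: (0, None),  # optional, repeatable
    }

    def check_resource(graph, resource):
        """Return human-readable violations for one resource's description."""
        violations = []
        for prop, (minimum, maximum) in PROFILE.items():
            count = len(list(graph.objects(resource, prop)))
            if count < minimum:
                violations.append(f"{resource}: missing {prop}")
            if maximum is not None and count > maximum:
                violations.append(f"{resource}: too many values for {prop}")
        return violations

The important point is that these constraints live in a profile layer, not in
the ontology itself, which is exactly where application profiles would fit.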

Coming back to our discussion, providing quality control methods that ensure
a level of quality for library data equivalent to what we have today seems an
important use case to me. These methods could be application profiles, or
maybe something else.
Promoting the uptake of library standards within the wider Web community is
another use case, no less important, but different.
Maybe the technology pieces that we need to achieve these two use cases are
different (hence the approach of creating unbounded super-class/property
versions of our models that Gordon has mentioned).
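
For what it's worth, here is how I understand that approach, sketched with
made-up URIs (so, an assumption about the design, not the actual registry
entries): the constrained property carries the FRBR-specific domain and is
declared a sub-property of an unconstrained one with no domain or range, so
data aimed at the wider Web can use the broad property without entailing
library-specific classes.

    # Sketch of the "unconstrained super-property" idea, placeholder URIs.
    from rdflib import Graph, Namespace, RDFS

    RDAC = Namespace("http://example.org/rda/constrained/")    # placeholder
    RDAU = Namespace("http://example.org/rda/unconstrained/")  # placeholder
    FRBR = Namespace("http://example.org/frbr/")               # placeholder

    g = Graph()
    # Constrained version: using it implies the subject is a frbr:Manifestation.
    g.add((RDAC.titleProper, RDFS.domain, FRBR.Manifestation))
    # The constrained property specialises the unconstrained one, which has
    # no domain/range, so the broad property commits to nothing FRBR-specific.
    g.add((RDAC.titleProper, RDFS.subPropertyOf, RDAU.titleProper))

    print(g.serialize(format="turtle"))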

Emmanuelle

[1] http://www.w3.org/2005/Incubator/lld/wiki/Library_Data_Resources
[2] http://metadataregistry.org/rdabrowse.htm

-- 
=====
Emmanuelle Bermès - http://www.bnf.fr
Manue - http://www.figoblog.org
