Re: Library data diagram

Karen
 
I think members of the LLD should let us know if they want explanations and
interpretations of the ICP stuff. It underpins most of the recent library models
(FR family, ISBD, RDA). We (the bibliographic metadata experts) can easily come
up with real-world scenarios for any of the functional requirements/goals, so:
 
Question for LLD: do you want a short presentation on this stuff at the f2f? Or
something on the wiki over the next couple of weeks? Or both?
 
And I agree that some of this could be used for use cases; the scenarios I'm
envisaging are along the lines of "Bob has identified a specific edition (i.e.
expression) of a work that he wants to use, and now wants to find out if there
is a version (i.e. manifestation) that he can use on his laptop, or, failing
that, a large-print version. Bob has difficulty reading small print (but his
laptop has screen magnification capabilities)." In other words, Bob needs to
find and identify "all resources embodying the same expression".
 
However, that's essentially a use case for the ICP itself, and translating it
into something specifically relevant to linked data (beyond the generic
observation that linked data helps) may be difficult.
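 
For what it's worth, the data pattern behind "all resources embodying the same
expression" is easy enough to sketch. The Python/rdflib fragment below uses
URIs and a property name I have made up purely for illustration (they are not
taken from any published element set):
 
    # Sketch: find all manifestations embodying a given expression.
    # All URIs and the property "isEmbodiedIn" are hypothetical.
    from rdflib import Graph, Namespace

    EX = Namespace("http://example.org/frbr/")    # hypothetical element set
    DATA = Namespace("http://example.org/data/")  # hypothetical instance data

    g = Graph()
    # Expression E1 is embodied in an e-book and a large-print edition
    g.add((DATA.E1, EX.isEmbodiedIn, DATA.M1_ebook))
    g.add((DATA.E1, EX.isEmbodiedIn, DATA.M2_largeprint))

    # Bob's query: every manifestation embodying expression E1
    for manifestation in g.objects(DATA.E1, EX.isEmbodiedIn):
        print(manifestation)
 
The triple pattern itself is trivial; the difficulty is in agreeing which
properties and classes carry the FRBR semantics, which is where ICP-derived
guidance would come in.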
 
Final point on the specific example of controlled access points (because the
specific should illuminate the more general issues): the draft representation of
FRAD in the Open Metadata Registry gives Controlled Access Point (CAP) as a
class. The class is used as the domain and/or range for a set of properties
based on relationships and attributes in the FRAD entity-relationship model. For
example, the property "is based on (name)" has domain CAP and range Name, so we
can infer from an instance triple "X is based on (name) Y" that X is a CAP. Note
that X and Y may have the same label (that is, the Name Y does not require any
addition to its label to make it a CAP); the distinction is made on the
assumption that some Agency (another FRAD class) has applied a specific Rule
(another FRAD class) to the Name in order to generate the CAP.
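 
To make the inference concrete, here is a minimal sketch of that pattern in
Python/rdflib. The namespace, class and property URIs below (including
"isBasedOnName" standing in for "is based on (name)") are placeholders of my
own, not the actual draft declarations in the Open Metadata Registry:
 
    # Sketch of RDFS domain/range inference over "X is based on (name) Y".
    # All URIs are hypothetical stand-ins for the draft FRAD declarations.
    from rdflib import Graph, Namespace, Literal, RDF, RDFS

    FRAD = Namespace("http://example.org/frad/")
    DATA = Namespace("http://example.org/data/")

    g = Graph()
    # Constrained property: domain is Controlled Access Point, range is Name
    g.add((FRAD.isBasedOnName, RDFS.domain, FRAD.ControlledAccessPoint))
    g.add((FRAD.isBasedOnName, RDFS.range, FRAD.Name))

    # Instance triple "X is based on (name) Y"; note that X and Y share a label
    g.add((DATA.X, FRAD.isBasedOnName, DATA.Y))
    g.add((DATA.X, RDFS.label, Literal("Austen, Jane")))
    g.add((DATA.Y, RDFS.label, Literal("Austen, Jane")))

    # Apply the RDFS domain and range entailment rules (rdfs2, rdfs3) by hand
    for s, p, o in list(g.triples((None, None, None))):
        for cls in g.objects(p, RDFS.domain):
            g.add((s, RDF.type, cls))   # infers: X is a ControlledAccessPoint
        for cls in g.objects(p, RDFS.range):
            g.add((o, RDF.type, cls))   # infers: Y is a Name

    print((DATA.X, RDF.type, FRAD.ControlledAccessPoint) in g)  # True
    print((DATA.Y, RDF.type, FRAD.Name) in g)                   # True
 
Nothing in the labels distinguishes X from Y; the typing comes entirely from
the domain and range declared on the constrained property.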
 
This leads on to the general issue of constrained versus unconstrained
properties. The FRBR Review Group recognises the value of declaring RDF
properties that are constrained by domains and ranges (and further by property
typing, sub-properties, etc.); they allow strong inferencing in a linked-data
environment where the focus is on instance triples rather than the bibliographic
records (context) from which they may be derived. On the other hand, the Group
has agreed in principle to declare the same properties without constraints
(effectively as super-properties, albeit in a different namespace) to meet the
expressed needs of other communities. But we cannot automatically infer that the
subject of an unconstrained "is based on (name)" triple is a CAP (or indeed that
the object is a Name). Instead, I guess, other related instance triples would
have to be invoked to support the inference.
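 
Again, a rough sketch (same caveat: hypothetical URIs) of how the constrained
and unconstrained declarations might sit side by side, with the constrained
property a sub-property of its unconstrained counterpart in a separate
namespace:
 
    # Sketch: constrained vs unconstrained versions of the same property.
    # Namespaces and URIs are hypothetical.
    from rdflib import Graph, Namespace, RDFS

    FRAD = Namespace("http://example.org/frad/")     # constrained declarations
    FRADU = Namespace("http://example.org/fradu/")   # unconstrained declarations
    DATA = Namespace("http://example.org/data/")

    g = Graph()
    g.add((FRAD.isBasedOnName, RDFS.domain, FRAD.ControlledAccessPoint))
    g.add((FRAD.isBasedOnName, RDFS.range, FRAD.Name))
    # The constrained property is a sub-property of the unconstrained one, so
    # constrained data "rolls up" to the unconstrained property (rule rdfs7),
    # but not the other way round.
    g.add((FRAD.isBasedOnName, RDFS.subPropertyOf, FRADU.isBasedOnName))

    # Instance data that uses only the unconstrained property
    g.add((DATA.X, FRADU.isBasedOnName, DATA.Y))

    # No domain or range is declared for the unconstrained property, so the
    # rdfs2/rdfs3 rules yield nothing: X cannot be inferred to be a CAP.
    print(list(g.objects(FRADU.isBasedOnName, RDFS.domain)))  # []
 
Whatever typing is wanted would have to come from other instance triples (an
explicit rdf:type statement, or constrained properties used elsewhere with X),
which is what I mean by invoking related triples.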
 
This looks to me somewhat similar (at the paradigm level) to the differences
between pre- and post-coordinate indexing procedures (pre-coordinate indexing
uses a set of indexing terms determined in advance of the indexing process;
post-coordinate indexing uses the terms found in the document being indexed).
That is, constrained RDF properties are a form of pre-coordination, and
unconstrained properties post-coordination. Pre-coordination requires expertise,
but results in rich, high-quality metadata; post-coordination is much cheaper
(mostly automatic), but the metadata is less useful in a wider context. For
example, the post-coordinate "china" may ambiguously refer to the place, the
material, the product, the given name (think Slick/Kantner), etc.; the
pre-coordinate "China (place)", "china (ceramic)", china (crockery)" etc. are
much less ambiguous.
 
Which brings us back to the difference between a CAP (pre-coordinate) and a
Name (post-coordinate), though that is really just a coincidence; the point is
the similarity at the paradigm level, not the specifics of Karen's example.
 
Cheers
 
Gordon
 
 

On 02 September 2010 at 18:53 Karen Coyle <kcoyle@kcoyle.net> wrote:

> Thank you, Gordon. Yes, the ICP does describe the primary goals of 
> library data, e.g.
>
> 4.1.1. to find a single resource
> 4.1.2. to find sets of resources representing:
>     all resources belonging to the same work
>     all resources embodying the same expression
>     all resources exemplifying the same manifestation
>     all resources associated with a given person, family, or corporate body
>     all resources on a given subject
>
> I'm wondering if these requirements are sufficient for our purposes or 
> if they need some interpretation in order to become guidance for LLD. 
> In fact, perhaps this document (given that it is quite succinct -- 
> only 15 pages) would be a good one to discuss at the f2f as a way of 
> understanding library data requirements in general, as a lead-in to 
> LLD requirements.
>
> I'm also wondering if we couldn't use a few of these as use cases. For 
> example, how does one define a "controlled access point" in a linked 
> data environment? Is the use of a URI enough, or is more needed? Does 
> the definition of Manifestation and the means to identify it change in 
> an environment where anything can link to anything? (I'm not expecting 
> us to answer these questions -- I think that surfacing the relevant 
> questions would be quite enough.)
>
> kc
>
> Quoting "gordon@gordondunsire.com" <gordon@gordondunsire.com>:
>
> > Emmanuelle, Karen and others
> >  
> > I think we do have a good set of stated functional requirements that are
> > generally applicable to library catalogues and metadata: the Statement of
> > international cataloguing principles (ICP) developed by the IFLA Cataloguing
> > Section and IFLA Meetings of Experts on an International Cataloguing Code.
> > It is available in English at:
> >  
> > http://www.ifla.org/files/cataloguing/icp/icp_2009-en.pdf
> >  
> > Translations into 24 other languages are available at:
> >  
> > http://www.ifla.org/en/publications/statement-of-international-cataloguing-principles
> >  
> > ICP is an update of the Paris principles from the early 1960s, which in turn
> > were based on Cutter and others. ICP takes into account FRBR, FRAD, etc. and
> > covers the range of things mentioned by Emmanuelle. ICP is "intended to
> > guide the development of cataloguing codes" and the principles are thus
> > quite general, but I guess there's sufficient detail to inform discussions
> > on the DCMI AP and Singapore Framework.
> >  
> > Cheers
> >  
> > Gordon
> >  
> >
> > On 02 September 2010 at 10:08 Emmanuelle Bermes <manue.fig@gmail.com> wrote:
> >
> >> >
> >> >
> >> >> If this is done, then the metadata creation workflow in
> >> >> libraries can be seen as fitting both the Singapore Framework
> >> >> view and its implied workflow (which starts with Functional
> >> >> Requirements)
> >> >>
> >> >
> >> > and there is the rub. We still do not have a good set of stated
> >> > functional
> >> > requirements, at least that I know of. (The last good set that I've
> >> > encountered is Cutter's functional requirements from 1876 -- excellent,
> >> > but perhaps needing some revision, especially some addition of detail.)
> >> > As I
> >> > recall from my long ago courses in cataloging at library school, a good
> >> > instructor pulls these concepts out of the rules and uses them for
> >> > teaching.
> >> > But I haven't seen an actual document that would summarize the functional
> >> > requirements of RDA.
> >> >
> >>
> >> I thought that a lot of the functional requirements were actually common
> >> with FRBR and FRAD: they are broad, but they do exist. Am I mistaken? Or do
> >> we need to declare something more specific?
> >>
> >> Then, we would need to add things like requirements for authorized access
> >> points, variant access points, statements, core elements, and other such
> >> basic concepts underlying RDA.
> >>
> >> Another lightning talk for the joint meeting in DC2010? If we don't start
> >> with that kind of stuff, it will be difficult to define what the community
> >> needs are in terms of pattern definition.
> >>
> >> Emmanuelle
> >>
> >>
> >> >
> >> > So that's the library landscape, as I see it, compared to the SF diagram.
> >> > I'm sure I've glossed over some important points and perhaps mangled
> >> > others.
> >> > However, any work on linked data must begin at this point and work to
> >> > move things forward, so understanding this "state" gives us common
> >> > ground for our work.
> >> >
> >> > I do hope others will contribute their knowledge in this area. I'm sure
> >> > my own knowledge is incomplete.
> >> >
> >> > kc
> >> >
> >> >
> >> > Quoting Thomas Baker <tbaker@tbaker.de>:
> >> >
> >> >  Thank you, Emmanuelle, for drawing up the comparative
> >> >> diagrams [1] and thank you, Karen, for getting the ball
> >> >> rolling on discussion.
> >> >>
> >> >> A comment specifically on Singapore Framework [2]...
> >> >>
> >> >> On Tue, Aug 31, 2010 at 08:38:36PM -0700, Karen Coyle wrote:
> >> >>
> >> >>> The Singapore Framework places guidance rules outside of the flow of
> >> >>> vocabulary and DCAP development. This makes me think that in SF the
> >> >>> guidance rules are developed and applied after the other steps toward
> >> >>> an application profile have taken place.
> >> >>>
> >> >>
> >> >> As I see it, the SF diagram is intended to show how the
> >> >> components of an "application profile" relate to each other
> >> >> and to underlying foundational standards (RDF).
> >> >>
> >> >> The diagram does also suggest a workflow, and I too like to
> >> >> present it this way -- starting with Functional Requirements,
> >> >> through defining a Domain Model, specifying a Description Set
> >> >> Profile and, finally, creating a Data Format, with the designer
> >> >> "dipping down" one level to create new or cite existing domain
> >> >> models and vocabularies.  And as a rough approximation, this
> >> >> seems like a reasonable way to proceed as long as it is not
> >> >> applied too mechanically.  In the end, though, SF is meant
> >> >> to depict less a specific sequence of tasks than a picture of
> >> >> how things relate.
> >> >>
> >> >>> This is accurate in terms of
> >> >>> Dublin Core metadata, which was developed initially without actual
> >> >>> guidance rules.
> >> >>>
> >> >>> In Emmanuelle's diagram of library metadata, the guidance rules appear
> >> >>> to precede the vocabulary. This is accurate in terms of library
> >> >>> metadata, in which the vocabularies arise from the guidance rules.
> >> >>>
> >> >>> These two models, DC and libraries, seem to me to be the extremes of
> >> >>> the development continuum. In libraries the guidance rules are the
> >> >>> most important aspect of the metadata creation activity, and in Dublin
> >> >>> Core they can almost be considered unnecessary.
> >> >>>
> >> >>
> >> >> If for libraries, guidance rules are the point of departure for
> >> >> metadata design, then "support for guidance rules" would quite
> >> >> simply need to be defined as a key functional requirement.
> >> >>
> >> >> If this is done, then the metadata creation workflow in
> >> >> libraries can be seen as fitting both the Singapore Framework
> >> >> view and its implied workflow (which starts with Functional
> >> >> Requirements) -- only that, in this case, the functional
> >> >> requirements may be unusually heavy on existing guidance
> >> >> rules...
> >> >>
> >> >> Tom
> >> >>
> >> >> [1] http://www.w3.org/2005/Incubator/lld/wiki/File:LayeredModelV3.pdf
> >> >> [2] http://dublincore.org/documents/singapore-framework/
> >> >>
> >> >> --
> >> >> Tom Baker <tbaker@tbaker.de>
> >> >>
> >> >>
> >> >>
> >> >
> >> >
> >> > --
> >> > Karen Coyle
> >> > kcoyle@kcoyle.net http://kcoyle.net
> >> > ph: 1-510-540-7596
> >> > m: 1-510-435-8234
> >> > skype: kcoylenet
> >> >
> >> >
> >> >
>
>
>
> --
> Karen Coyle
> kcoyle@kcoyle.net http://kcoyle.net
> ph: 1-510-540-7596
> m: 1-510-435-8234
> skype: kcoylenet
>
>
