Re: Web annotations for physical texts

Hi,


FWIW, we are also looking into using CTS URIs as a means to connect annotations to works, as part of the work on our annotation platform Recogito [1] (and, likewise, IIIF/WebAnno for images).


CTS support isn't implemented yet, but in terms of planning, I think we can get by with the WebAnno model as it is. (We use a TextQuoteSelector and a TextPositionSelector in conjunction. I assume the TextPositionSelector should be robust *in principle* when dealing with proper, stable CTS-served texts, while the TextQuoteSelector provides a good additional cross-checking mechanism.) For XML-based texts (CTS+TEI), we use a combination of a TextQuoteSelector and a RangeSelector, with XPathSelectors for the start and end positions.
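
For illustration, a minimal sketch of what such a combined-selector target might look like in WebAnno JSON-LD (the CTS URN, quoted text, and character offsets below are invented placeholders, not actual Recogito output):

  {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "body": {
      "type": "TextualBody",
      "value": "Example comment"
    },
    "target": {
      "type": "SpecificResource",
      "source": "urn:cts:greekLit:tlg0012.tlg001.perseus-grc2:1.1",
      "selector": [
        {
          "type": "TextQuoteSelector",
          "exact": "μῆνιν ἄειδε θεὰ"
        },
        {
          "type": "TextPositionSelector",
          "start": 0,
          "end": 15
        }
      ]
    }
  }

For the TEI case, the idea is that the TextPositionSelector above is swapped for a RangeSelector whose startSelector and endSelector are XPathSelectors pointing into the XML.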


As I said, the CTS part isn't available yet. But for the basics of our use of WebAnno, see an example here:


https://recogito.pelagios.org/document/tjrrsqn4dwmgep/part/1/edit

https://recogito.pelagios.org/document/tjrrsqn4dwmgep/downloads


The first link is the Web view. From the second link, you can download the WebAnno data. (Grab the JSON-LD link; the Turtle/XML links offer only a reduced view.)


[1] https://recogito.pelagios.org

Cheers,
Rainer




________________________________
From: Tim Thompson <timathom@gmail.com>
Sent: Thursday, 11 October 2018 00:09
To: t-cole3@illinois.edu
Cc: byoung@bigbluehat.com; gklyne@googlemail.com; sgharms@stevengharms.com; public-openannotation@w3.org
Subject: Re: Web annotations for physical texts

Ha! Thanks, Tim. From what I recall, there weren't any extensions to the WA model needed for the project. I think that oa:TextQuoteSelector was the only SpecificResource selector used. I see that Princeton's Digital Humanities Center has launched a more ambitious Derrida project that includes image annotations, but I'm not sure to what extent they are using WA: https://derridas-margins.princeton.edu/.
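
Purely for illustration (none of the URIs or quoted text below come from the project's actual data), an annotation of that shape could be as simple as:

  {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "motivation": "identifying",
    "body": "https://example.org/people/jacques-derrida",
    "target": {
      "type": "SpecificResource",
      "source": "https://example.org/dedications/item-42",
      "selector": {
        "type": "TextQuoteSelector",
        "exact": "pour Jacques"
      }
    }
  }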

--
Tim A. Thompson
Discovery Metadata Librarian
Yale University Library



On Wed, Oct 10, 2018 at 5:58 PM Cole, Timothy W <t-cole3@illinois.edu> wrote:
Of possible interest on this topic might be a small experiment that Tim Thompson (then at Princeton University Library, now at Yale University Library) and his colleagues did, attempting to use the Web Annotation model to capture digitally the handwritten dedications (annotations) found in a collection of physical rare books at Princeton. They were particularly focused on identifying references in the bodies of the dedications to people, places, and events that are represented in some way on the Web. It was only an experiment, and as I recall it required some extensions to make Web Annotation work as they wanted, but if interested, see:
  https://wiki.duraspace.org/display/LD4P/Princeton+-+Derrida%27s+library
  https://github.com/pulcams/ld4p

and if you are near a library that has a subscription to the Journal of Library Metadata, Thompson et al. published an article about the project here:
https://doi.org/10.1080/19386389.2016.1258908

Thanks,
Tim Cole
University of Illinois at Urbana-Champaign


From: Benjamin Young <byoung@bigbluehat.com>
Sent: Wednesday, October 10, 2018 4:16 PM
To: Graham Klyne <gklyne@googlemail.com>; sgharms@stevengharms.com
Cc: public-openannotation@w3.org
Subject: Re: Web annotations for physical texts

Glad to hear this is being explored! And thank you both for adding your thoughts here. Steven, special thanks to you for kicking this off (and great post, btw!). :)

One option for books is to use URNs for the targets:
https://www.iana.org/assignments/urn-namespaces/urn-namespaces.xhtml

Of the ones registered there you'll find `isbn` and `issn` among several others.

The next step (for physical things) is how you refine that target into what the Web Annotation Data Model calls a SpecificResource:
https://www.w3.org/TR/annotation-vocab/#specificresource

Essentially, that's a target.source (the URN above) plus some sort of "selector" (or locator). That's where the invention and exploration probably need the most work. :) Usually, for print, that's some combination of page number, line number, and/or exact quote (or text range, etc.).
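
To make that concrete, here is a rough sketch of what such an annotation might look like (the ISBN, quoted text, and note below are invented placeholders):

  {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "body": {
      "type": "TextualBody",
      "value": "Marginal note next to this passage"
    },
    "target": {
      "type": "SpecificResource",
      "source": "urn:isbn:9780000000000",
      "selector": {
        "type": "TextQuoteSelector",
        "exact": "some exact phrase from the printed page",
        "prefix": "a few words before it, ",
        "suffix": ", a few words after it"
      }
    }
  }

Page and line numbers don't have a ready-made selector in the model, so that is presumably where a convention (say, a FragmentSelector with an agreed scheme) or a small extension would come in.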

I'd be very curious to see this explored more, and hope that I and others here can help with that in some way. :)

Thanks for sharing your hopes here!
Benjamin

--

http://bigbluehat.com/

http://linkedin.com/in/benjaminyoung

________________________________
From: Graham Klyne <gklyne@googlemail.com>
Sent: Wednesday, October 10, 2018 5:04 PM
To: sgharms@stevengharms.com
Cc: public-openannotation@w3.org
Subject: Re: Web annotations for physical texts

I'm currently doing some linked data work using Web Annotations applied to physical places, and I'm not seeing any real problems with this (just a need to be clear about what a URI is referring to). I'm actually finding them to be quite a powerful tool for capturing contextualised descriptions in linked data (with some modest additions).

#g.

On Wed, 10 Oct 2018, 21:46 Steven Harms <sgharms@stevengharms.com> wrote:
Greetings,

I am interested in creating annotations on physical books [1].

As the name "web annotations" suggests, the default focus of the Web Annotation Working Group would be, of course, to annotate IRI-referable targets with IRI-identifiable Annotations.

1. Is there a model whereby we could point to a physical resource in a URI / IRI format (and thus join the existing Web Annotation universe), *or*
2. Is there a framework that might support referring to physical books that I've simply not found, *or*
3. Should I plan to use JSON-LD and "forge my own path"?

I hope to post an example of what #3 might look like, but I'd like to double-check my understanding before engaging in such an effort, tabula rasa.

Regards,

Steven


[1]: https://stevengharms.com/research/semweb-topic/problem_statement/

--
Steven G. Harms
PGP: E6052DAF <https://pgp.mit.edu/pks/lookup?op=get&search=0x337AF45BE6052DAF>
