Meeting minutes 2016-09-02

Minutes are here:

https://www.w3.org/2016/09/02-annotation-minutes.html

Text version below

Ivan

----
Ivan Herman, W3C
Digital Publishing Lead
Home: http://www.w3.org/People/Ivan/
mobile: +31-641044153
ORCID ID: http://orcid.org/0000-0003-0782-2704



   [1]W3C

      [1] http://www.w3.org/

              Web Annotation Working Group Teleconference

02 Sep 2016

   [2]Agenda

      [2] http://www.w3.org/mid/05b001d204a2$1334dbd0$399e9370$@illinois.edu

   See also: [3]IRC log

      [3] http://www.w3.org/2016/09/02-annotation-irc

Attendees

   Present
          Rob Sanderson (azaroth), Dan Whaley, Tim Cole, Ben De
          Meester (bjdmeest), Jacob Jett, Ivan Herman, ShaneM,
          Takeshi Kanai, Nick Stenning, Randall Leeds (tilgovi)

   Regrets
          TB_Dinesh

   Chair
          Tim, Rob

   Scribe
          bjdmeest

Contents

     * [4]Topics
         1. [5]Minutes
         2. [6]CRs update
         3. [7]Extension request
         4. [8]Testing
     * [9]Summary of Action Items
     * [10]Summary of Resolutions
     __________________________________________________________

   <azaroth> trackbot, start meeting

   <trackbot> Meeting: Web Annotation Working Group Teleconference

   <trackbot> Date: 02 September 2016

   <azaroth> Chair: Tim_Cole, Rob_Sanderson

   <TimCole> Meeting: Web Annotation Working Group Teleconference

   <trackbot> Sorry, ivan, I don't understand 'trackbot does all
   the rest for you'. Please refer to
   <[11]http://www.w3.org/2005/06/tracker/irc> for help.

     [11] http://www.w3.org/2005/06/tracker/irc

   <ShaneM> working on it

   <trackbot> Sorry, dwhly, I don't understand 'trackbot, get
   coffee'. Please refer to
   <[12]http://www.w3.org/2005/06/tracker/irc> for help.

     [12] http://www.w3.org/2005/06/tracker/irc

   <ivan> scribenick: bjdmeest

   TimCole: Let's get started
   ... first, we'll talk about the exit criteria of CR
   ... then, about extending the WG to get through CR, PR..
   ... then, we'll talk about testing
   ... other topics?

   <TimCole> PROPOSED RESOLUTION: Minutes of the previous call are
   approved:
   [13]https://www.w3.org/2016/08/26-annotation-minutes.html

     [13] https://www.w3.org/2016/08/26-annotation-minutes.html

Minutes

   <azaroth> +1

   <ivan> +1

   <TimCole> +1

   <Jacob> +1

   +1

   <takeshi> +1

   RESOLUTION: Minutes of the previous call are approved:
   [14]https://www.w3.org/2016/08/26-annotation-minutes.html

     [14] https://www.w3.org/2016/08/26-annotation-minutes.html

CRs update

   azaroth: we had a request
   ... we should publish the exit criteria
   ... that's required
   ... we have done that
   ... there are new versions of the 3 specs (each with an
   appendix about the exit criteria)
   ... implementations of the model, demonstrations that the
   vocabulary is internally consistent and can be used to go from
   JSON-LD to JSON
   ... for the protocol, 2 implementations of all the interactions
   ... retrieving an annotation, deleting, etc...
   ... they will be republished on the 6th of September

   ivan: we also wanted to link to the test cases themselves, but
   they are not clearly available yet
   ... everything is done, the publications are checked, they will
   be published on Tuesday
   ... that's that for CR

Extension request

   TimCole: we are trying to do an extension request to extend the
   WG to get through CR and PR

   ivan: I gave Ralph(?) an overview
   ... we hope to be able to cover all the exit criteria by the
   end of October
   ... that's one month extra
   ... that + the problem of Christmas in the middle
   ... my pessimistic deadline would be to publish the
   recommendation by the end of January, so I asked to extend
   until the end of February
   ... hopefully, we will get it
   ... in any case, the more we can show as readiness, the better
   ... we should get initial implementation reports on our pages
   ... they don't need to be complete
   ... but at the moment, the reports are placeholders
   ... if we have (partially) tested implementations (e.g., Rob's,
   Benjamin's)
   ... showing them is critical
   ... ideally by next week, realistically by the week after

   TimCole: test reports will show, preferably next week

   ivan: they will look at those test reports, as they are in the
   CR documents

   ShaneM: about results: I can now merge to the repo

   <TimCole> [15]https://github.com/w3c/test-results/pulls

     [15] https://github.com/w3c/test-results/pulls

   ShaneM: I will push results for our implementation, right now

   <TimCole>
   [16]https://github.com/w3c/test-results/tree/gh-pages/annotatio
   n-model

     [16] https://github.com/w3c/test-results/tree/gh-pages/annotation-model

   TimCole: there's a W3C test results repo on github
   ... there's a small typo: for ==> fork
   ... There's an open pull request

Testing

   TimCole: Model testing:
   ... we have about 100 assertions covering body, target, etc.
   ... I need to add a separate folder for specificResource
   ... those are in the test-dev repository
   ... you can now use those tests
   ... you go to the w3c test site
   ... you input annotations
   ... you get reports
   ... those reports, you can add using a pull request to the
   test-results repo
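   The result files submitted via pull request are wptreport-style
   JSON; a minimal sketch of tallying subtest statuses from one such
   file (field names follow the wptreport format; the sample data is
   invented for illustration):

```python
from collections import Counter

def tally_results(report: dict) -> Counter:
    """Count subtest statuses (PASS, FAIL, ...) in a wptreport-style dict."""
    counts = Counter()
    for result in report.get("results", []):
        for sub in result.get("subtests", []):
            counts[sub.get("status", "UNKNOWN")] += 1
    return counts

# Invented sample in the wptreport shape.
sample = {"results": [{
    "test": "/annotation-model/annotations/annotationMusts.html",
    "status": "OK",
    "subtests": [{"name": "target is present", "status": "PASS"},
                 {"name": "body matches model", "status": "FAIL"}]}]}

counts = tally_results(sample)
print(counts["PASS"], counts["FAIL"])  # 1 1
```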

   ivan: what ends up in the test-results/implementation reports
   are a set of json files?

   <TimCole> [17]https://github.com/spec-ops/wptreport

     [17] https://github.com/spec-ops/wptreport

   ShaneM: that and a report

   TimCole: the current report doesn't mention the implementation,
   but you do know who did the pull request

   <ShaneM> CH53.json

   ShaneM: as a convention, the results file is named after the
   implementation and its version
   ... I just asked for that on the current pull request

   ivan: all implementers we currently have, should get some kind
   of name?

   ShaneM: whatever name makes sense is fine
   ... I'll modify the instructions so that is clear

   TimCole: the downloadable portion of the generator requires two
   characters and two numbers for the .json filename

   ShaneM: apparently yes
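   The two-letters-plus-two-digits convention confirmed above (e.g.
   CH53.json) can be written as a small check; the pattern below is
   an inference from the discussion, not from the generator's source:

```python
import re

# Inferred convention: two letters (implementation code) plus two
# digits (version), e.g. CH53.json.
FILENAME_RE = re.compile(r"^[A-Za-z]{2}\d{2}\.json$")

def is_conventional(name: str) -> bool:
    return bool(FILENAME_RE.match(name))

print(is_conventional("CH53.json"))    # True
print(is_conventional("chrome.json"))  # False
```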

   <Zakim> azaroth, you wanted to discuss names

   azaroth: is it possible to have additional information about the
   named implementations?
   ... e.g. a link for every implementation? a registry?

   ShaneM: we can put that in the readme

   TimCole: The pull requester could add extra files, no? Then we
   could tell them what extra information we want

   ivan: does the report make an automatic count, i.e., how many
   implementations per test, for the CR, or do we have to create
   that afterwards?

   ShaneM: it creates a separate report
   ... if we want to make changes we can, but I don't want to
   change the environment too much
   ... there are other players in the field

   TimCole: we have about 45 assertions that we expect every
   annotation to pass, the MUSTs
   ... and then we have about 100, which are designed to catch
   optionals
   ... so, if someone only implements an optional body, and a
   simple target, it seems as if they fail a lot of tests (the
   optional target tests)
   ... can we catch that somehow and explain to people that they
   don't 'fail' as many tests as it seems?
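   The MUST-level assertions reduce to structural checks on each
   annotation; a simplified sketch of four of them (requirements per
   the Web Annotation Data Model; the real suite has ~45 assertions
   in the test-dev repository, and the example values are invented):

```python
ANNO_CONTEXT = "http://www.w3.org/ns/anno.jsonld"

def check_must_keys(anno: dict) -> list:
    """Return violated MUST-level checks (empty list means all pass)."""
    problems = []
    ctx = anno.get("@context")
    contexts = ctx if isinstance(ctx, list) else [ctx]
    if ANNO_CONTEXT not in contexts:
        problems.append("@context must include the anno.jsonld context")
    if not anno.get("id"):
        problems.append("annotation must have an id")
    types = anno.get("type")
    types = types if isinstance(types, list) else [types]
    if "Annotation" not in types:
        problems.append("type must include 'Annotation'")
    if "target" not in anno:
        problems.append("annotation must have at least one target")
    return problems

anno = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "id": "http://example.org/anno1",       # invented example values
    "type": "Annotation",
    "target": "http://example.com/page1",
}
print(check_must_keys(anno))  # []
```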

   ShaneM: this is a meta-conversation about what to do about
   optional features

   <azaroth> +1 to that reduction

   TimCole: I reduced the tests a bit, e.g. for text direction, it
   doesn't depend on which type of body, so that helps a bit

   ivan: how do we do the testing and reporting on the vocabulary?

   ShaneM: by hand

   <TimCole> for example, we may not decide to consider each kind
   of selector a separate feature requiring testing, this would
   reduce the number of tests.

   ShaneM: we take a template that looks like the current report,
   and fill in the rows

   ivan: we need to decide which validation tools we use
   ... for RDF vs JSON

   azaroth: there are tools, the Python RDFlib, and the JSON-LD
   tool from digital bazaar

   ivan: what would be the other independent toolset?
   ... what's the situation with json-ld tools?

   azaroth: it has implementations in most languages
   ... ruby is pretty good, also for RDF

   ivan: maybe we can ask Greg? From a JSON-LD POV, he would be a
   logical choice

   azaroth: what about javascript-based?

   ivan: RubenVerborgh has a lot of JavaScript tools
   ... if he could run those few tests, via his toolkit
   ... then we have 3 mature toolsets
   ... azaroth, can you ask Greg?

   azaroth: yes

   ShaneM: I don't care how you give them to me; we just need to
   input them into the HTML file

   <TimCole> [18]http://w3c-test.org/tools/runner/index.html

     [18] http://w3c-test.org/tools/runner/index.html

   ShaneM: we need implementations for testing the annotation
   model

   TimCole: two parts of the question
   ... could you generate annotations conforming to the annotation
   model
   ... if so, could you input that JSON-LD into the test runner,
   generate the JSON test results file, and do the pull request?

   nickstenn: I'm not sure our client will spit out the correct
   JSON-LD in the near future
   ... but our server could render them as JSON-LD
   ... I'm very happy to test those using the test runner

   tilgovi: if it's important to have client-side JavaScript that
   generates conforming JSON ...

   TimCole: you have to do one annotation at a time

   tilgovi: ... I'll have a look at that

   <ShaneM> Updated result reporting instructions at
   [19]https://github.com/w3c/test-results/tree/gh-pages/annotatio
   n-model and
   [20]https://github.com/w3c/test-results/tree/gh-pages/annotatio
   n-protocol

     [19] https://github.com/w3c/test-results/tree/gh-pages/annotation-model
     [20] https://github.com/w3c/test-results/tree/gh-pages/annotation-protocol

   TimCole: it's important to have test results published

   bigbluehat: about protocol testing: it's about exercising a
   server, and exercising a client
   ... there's a pull request pending
   ... there is one test, you give it the url to your annotation
   server, and a url to one annotation in that server
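   The retrieval step that test exercises is an ordinary HTTP GET
   with a profiled Accept header; a client-side sketch (the URL is
   hypothetical, and the Accept value follows the Web Annotation
   Protocol; the request is built but not sent):

```python
import urllib.request

# Media type a protocol client asks for when retrieving an
# annotation, per the Web Annotation Protocol.
ANNO_PROFILE = (
    'application/ld+json; profile="http://www.w3.org/ns/anno.jsonld"'
)

# Hypothetical annotation URL; building the request only.
req = urllib.request.Request(
    "http://example.org/annotations/anno1",
    headers={"Accept": ANNO_PROFILE},
    method="GET",
)
print(req.get_header("Accept") == ANNO_PROFILE)  # True
```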

   ShaneM: I've only ever run that against the basic python server
   ... HTTPS is a SHOULD, and the Python server doesn't implement
   that
   ... about client-side protocol testing
   ... there are basically no requirements
   ... I found one about sending a Prefer header for a certain use
   case, but that doesn't really have anything to do with the
   client

   azaroth: because HTTP doesn't require a specific format, and we
   don't extend HTTP, there are no testable assertions for the
   client

   ShaneM: I would like to either have someone test against a
   server, or give me links to a server, and I'll run the tests

   ivan: so we need to reach out to the various implementers, such
   as Europeana

   azaroth: they have one, after a slight update
   ... it would take some time to have it up and running somewhere
   accessible

   <ShaneM>
   [21]http://testdev.spec-ops.io:8000/tools/runner/index.html?pat
   h=/annotation-protocol

     [21] http://testdev.spec-ops.io:8000/tools/runner/index.html?path=/annotation-protocol

   ShaneM: you can do it yourself, they're in test-dev right now

   <ivan> adjourned

   <TimCole> Adjourn

   TimCole: hopefully, by next week, we have some reports, and
   more specifics about the vocabulary testing

   <ivan> trackbot, end telcon

Summary of Action Items

Summary of Resolutions

    1. [22]Minutes of the previous call are approved:
       https://www.w3.org/2016/08/26-annotation-minutes.html

   [End of minutes]
     __________________________________________________________


    Minutes formatted by David Booth's [23]scribe.perl version
    1.144 ([24]CVS log)
    $Date: 2016/09/02 16:01:47 $

     [23] http://dev.w3.org/cvsweb/~checkout~/2002/scribe/scribedoc.htm
     [24] http://dev.w3.org/cvsweb/2002/scribe/

Received on Friday, 2 September 2016 16:03:58 UTC