Meeting minutes, 2016-08-12

Minutes are here, text version below

https://www.w3.org/2016/08/12-annotation-minutes.html

----
Ivan Herman, W3C
Digital Publishing Lead
Home: http://www.w3.org/People/Ivan/
mobile: +31-641044153
ORCID ID: http://orcid.org/0000-0003-0782-2704


   [1]W3C

      [1] http://www.w3.org/

              Web Annotation Working Group Teleconference

12 Aug 2016

   [2]Agenda

      [2] https://lists.w3.org/Archives/Public/public-annotation/2016Aug/0117.html

   See also: [3]IRC log

      [3] http://www.w3.org/2016/08/12-annotation-irc

Attendees

   Present
          Rob Sanderson (azaroth), Tim Cole, Nick Stenning
          (nickstenn), Jacob Jett, Ivan Herman, Dan Whaley, Ben De
          Meester, Benjamin Young (bigbluehat), Shane McCarron

   Regrets
   Chair
          Rob Sanderson, Tim Cole

   Scribe
          Tim_Cole, nickstenn, azaroth

Contents

     * [4]Topics
         1. [5]Agenda review
         2. [6]Minutes Approval
         3. [7]Announcements?
         4. [8]Issues
         5. [9]Model Testing
         6. [10]Protocol Testing
     * [11]Summary of Action Items
     * [12]Summary of Resolutions
     __________________________________________________________

Agenda review

   <TimCole> azaroth: minutes, announcements, internationalization
   issues (brief), testing

   <TimCole> ... any other topics for today?

Minutes Approval

   <azaroth> PROPOSED RESOLUTION: Minutes of the previous call are
   approved:
   [13]https://www.w3.org/2016/08/05-annotation-minutes.html

     [13] https://www.w3.org/2016/08/05-annotation-minutes.html

   <ivan> +1

   <azaroth> +1

   <TimCole> +1

   <Jacob> +1

   RESOLUTION: Minutes of the previous call are approved:
   [14]https://www.w3.org/2016/08/05-annotation-minutes.html

     [14] https://www.w3.org/2016/08/05-annotation-minutes.html

Announcements?

   <TimCole> None

Issues

   azaroth: The discussion around I18N has continued. We now have
   six (6) open issues around this topic.
   ... To summarise:
   ... #335: WONTFIX -- long thread with the Social Web WG
   ... last state was that the I18N folks were discussing with
   the Activity Streams folks and would get back to us
   ... #342: A suggestion from Sergiu to add a note saying that
   if dc:language is specified and processingLanguage is not,
   the latter should be assumed to be the former (see the sketch
   below)
   ... seems like a good editorial note
   ... Similarly with #343.
   ... which is about whether processingLanguage should be
   required to be a language in dc:language
   ... Haven't had much of a chance to look at #345, also about
   processingLanguage
   ... #341 also about processingLanguage for multilingual
   resources, and we've decided to postpone

   <ivan> Issue #345 is an attempt from Richard to close an issue
   and discussion with gsergiu via an editorial change proposal
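
   [For context, a minimal sketch of the #342 default; the
   example is illustrative, not taken from the issue. Given a
   body like:]

       { "type": "TextualBody",
         "value": "Comentario de ejemplo",
         "language": "es" }

   [dc:language ("language") is present but processingLanguage
   is not, so under the proposed note a consumer would assume a
   processingLanguage of "es".]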

   azaroth: suggest we spend the time on the call discussing
   testing rather than going into the details on some of these
   I18N issues
   ... any problems?

Model Testing

   TimCole: regarding the creation of the underlying schemas for
   the tests -- we've captured everything in sections 3.1, 3.2,
   and 3.3 except agents, and most of section 4
   ... these schemas are in the "definitions" folder
   ... and are referenced by the schemas we intend to use for
   assertions
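
   [For context: the assertion schemas point at the shared
   definitions via JSON Schema's "$ref". A hypothetical sketch,
   with file and definition names invented for illustration:]

       { "$schema": "http://json-schema.org/draft-04/schema#",
         "title": "Annotation satisfies the section 3.1 MUSTs",
         "allOf": [
           { "$ref": "definitions/annotations.json#/definitions/annotationMusts" }
         ] }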

   <TimCole>
   [15]http://testdev.spec-ops.io:8000/tools/runner/index.html

     [15] http://testdev.spec-ops.io:8000/tools/runner/index.html

   TimCole: I've just started working on the test scripts. Have
   been hashing out with ShaneM what those look like.
   ... This [^] is a test environment ShaneM has set up.
   ... You can use this with "Run tests under path" using the
   following path:

   <ShaneM>
   [16]http://testdev.spec-ops.io:8000/tools/runner/index.html?pat
   h=/annotation-model should work too

     [16] http://testdev.spec-ops.io:8000/tools/runner/index.html?path=/annotation-model

   TimCole: "/annotation-model"
   ... you can paste JSON[-LD] in and run it through the test
   suite
   ... that's all working.
   ... We're having some small issues with SHOULD requirements,
   where we're not necessarily expecting the annotation to
   satisfy the requirement. Currently, if it *does* satisfy the
   requirement, the test fails.
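
   [For reference, a minimal annotation of the kind you can
   paste into the runner; it mirrors the simplest example in the
   model spec:]

       { "@context": "http://www.w3.org/ns/anno.jsonld",
         "id": "http://example.org/anno1",
         "type": "Annotation",
         "body": "http://example.org/post1",
         "target": "http://example.com/page1" }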

   <ShaneM> try going to this URI now:

   <ShaneM>
   [17]http://testdev.spec-ops.io:8000/annotation-model/annotation
   s/3.1-model-musts-v3-manual.html

     [17] http://testdev.spec-ops.io:8000/annotation-model/annotations/3.1-model-musts-v3-manual.html

   <azaroth> Example annotations to play with:
   [18]https://github.com/w3c/web-annotation/tree/gh-pages/model/w
   d2/examples/correct

     [18] https://github.com/w3c/web-annotation/tree/gh-pages/model/wd2/examples/correct

   ShaneM: [walking us through how to use the test tool at the
   link above to test annotations]

   <ShaneM> Errors: data should have required property '@context';
   expected true got false

   ShaneM: Question: do we want to suppress the output above ^

   <Zakim> azaroth, you wanted to ask about display:none in HTML ?

   azaroth: Would it be possible to have the AJV stack trace be in
   a display:none; area with a button to reveal it or similar?

   ShaneM: I don't think so. We just report data back to the test
   harness, which is responsible for the display.

   azaroth: the "data should have required property @context" is
   particularly useful to understand what's going on

   ShaneM: I'll leave it there, then.
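
   [For context: that error comes from validating against a
   schema that lists "@context" as required -- along the lines
   of this draft-04 fragment (illustrative only):]

       { "type": "object",
         "required": ["@context"] }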

   ivan: currently can't rerun with updated JSON

   ShaneM: I'll see if we can fix that

   <TimCole>
   [19]http://testdev.spec-ops.io:8000/annotation-model/bodiesTarg
   ets/3.2-model-manual.html

     [19] http://testdev.spec-ops.io:8000/annotation-model/bodiesTargets/3.2-model-manual.html

   TimCole: if you put in a test annotation that *passes* the
   SHOULD requirements, the output is a little harder to interpret
   ... not sure quite how to improve the output in that case

   <azaroth> I used:
   [20]https://github.com/w3c/web-annotation/blob/gh-pages/model/w
   d2/examples/correct/anno41.json

     [20] https://github.com/w3c/web-annotation/blob/gh-pages/model/wd2/examples/correct/anno41.json

   ivan: [proceeds to find a bug in one of the testing schemas
   while on the call]

   TimCole: [paraphrasing heavily] currently for the SHOULD
   assertions, we expect non-conformance, which means that if it
   actually is conformant, the test fails

   <azaroth> Pass if it's not used, with a success message that it
   SHOULD be there?

   ShaneM: no way of doing "warning" in the framework
   ... but we might be able to use "testType" to distinguish
   between MUST and SHOULD assertion types

   <azaroth> Result: Pass Message: WARNING: Format SHOULD be
   included for bodies, if known

   <ShaneM> use assertionType of must, may, or shold

   TimCole: we have other scenarios where we say "SHOULD have 1,
   MAY have more than 1 X"

   <ShaneM> testType has to do with automation.

   TimCole: there are other cases where you MUST NOT have more
   than 1
   ... so in these cases we can have multiple assertions for the
   different cardinalities
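
   [For concreteness, a sketch of how those cardinality rules
   could become separate assertions: the same structural check,
   reported at different severities via a hypothetical
   "assertionType" extension keyword (JSON Schema validators
   ignore unknown keywords). Property names are invented:]

       { "title": "x SHOULD have no more than 1 value",
         "assertionType": "should",
         "properties": { "x": { "maxItems": 1 } } }

       { "title": "y MUST NOT have more than 1 value",
         "assertionType": "must",
         "properties": { "y": { "maxItems": 1 } } }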

   <ShaneM>
   [21]http://testdev.spec-ops.io:8000/annotation-model/bodiesTarg
   ets/3.2-model-manual.html

     [21] http://testdev.spec-ops.io:8000/annotation-model/bodiesTargets/3.2-model-manual.html

   ShaneM: If you look at the top of [^] you'll notice that the
   page fills in as the page loads.
   ... In the description at the top of the page, there's a list
   of things the test will check
   ... do we want to include SHOULD/MUST/MAY information in these
   descriptions?

   [noises of general agreement]

   TimCole: going to spend the next little while cleaning this up
   with a view to sharing it more widely
   ... how are the test outputs recorded?

   ShaneM: anyone with an implementation can record their JSON
   test output and add it to a git repository which contains all
   the results

   TimCole: how do people want to break up the various tests for
   bodies/targets/optional keys/etc.?

   ShaneM: speaking as "not an implementer" -- the smallest
   number of manual tests that gets us the information we need
   is probably a reasonable guideline

   TimCole: probably a discussion for the mailing list

   <azaroth>
   [22]https://github.com/w3c/web-annotation/blob/gh-pages/model/w
   d2/examples/correct/anno41.json

     [22] https://github.com/w3c/web-annotation/blob/gh-pages/model/wd2/examples/correct/anno41.json

   azaroth: example 41 [^] is a completely contrived example at
   the end of the spec
   ... it seems unlikely (in the short term at least) that any
   client would generate such an annotation
   ... but perhaps not outside the realms of possibility
   ... putting it through the 3.2 set of tests, it passes 8 but fails 5
   ... wondering what those are: problems with the test harness,
   the SHOULD problem, or problems with the data?

   TimCole: would need to have a look, but it's probably the
   SHOULD issue with multiple formats

   <Zakim> azaroth, you wanted to ask about fails for example 41

   azaroth: we should probably spend some time talking about
   protocol testing

Protocol Testing

   bigbluehat: I've mostly passed the work I've done on to ShaneM

   ShaneM: the server tests bigbluehat wrote are awesome, but
   let's talk about client tests for a second
   ... the server runs in the WPT environment

   <azaroth> ( issue
   [23]https://github.com/w3c/web-annotation/issues/344 )

     [23] https://github.com/w3c/web-annotation/issues/344

   ShaneM: the way annotations work is that an annotation
   collection lives at an IRI, and thus the server needs to
   serve it at some named route within WPT

   <bigbluehat> PUT to overwrite

   ShaneM: but we need to work out how create/update/destroy
   operations work in the test server
   ... in particular because we don't want data created during
   tests to persist on the test server
   ... so we're going to be arranging things such that data
   created by clients is destroyed as soon as it is read back from
   the server

   ivan: how is that going to work if you want to prepare a bunch
   of data and then run a load of tests against that?

   ShaneM: in those cases the client will access some static
   collection of annotations rather than data they created

   <azaroth> scribenick: azaroth

   nickstenn: Question about how this is going to work -- the
   protocol spec doesn't say what the server is supposed to do
   with the data that you give it, even for behaviours that seem
   reasonable
   ... for example, in a distributed annotation system, you POST
   to create it, but you might not be able to read it back again
   straight away
   ... want us to be careful that we're not testing that you can
   read something straight away

   bigbluehat: The protocol says that it comes back with the full
   representation

   ivan: Comes back with an id

   nickstenn: That's a different point though. Could return it
   straight away, but the server doesn't necessarily have state
   beyond that
   ... intuitively reasonable assumptions are fine, but that's not
   in the spec
   ... need to be careful not to base tests on our understanding
   of the spec, but on what the spec actually says

   bigbluehat: e.g. there's no guarantee that you'll be able to
   get the annotation back after you create it
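
   [For context, the create step being discussed, per the
   protocol spec: the client POSTs an annotation to a container
   and the server replies 201 Created with a Location header and
   the full representation, including the newly assigned id. The
   URLs below are illustrative and the exchange is abridged:]

       POST /annotations/ HTTP/1.1
       Host: example.org
       Content-Type: application/ld+json;profile="http://www.w3.org/ns/anno.jsonld"

       { "@context": "http://www.w3.org/ns/anno.jsonld",
         "type": "Annotation",
         "body": { "type": "TextualBody", "value": "A comment" },
         "target": "http://example.com/page1" }

       HTTP/1.1 201 Created
       Location: http://example.org/annotations/anno1

       { "@context": "http://www.w3.org/ns/anno.jsonld",
         "id": "http://example.org/annotations/anno1",
         "type": "Annotation",
         "body": { "type": "TextualBody", "value": "A comment" },
         "target": "http://example.com/page1" }

   [Nothing above guarantees that a later GET of the Location
   will succeed, which is nickstenn's point.]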

   ShaneM: Wondering if the server tests you wrote rely on
   creating and then immediately retrieving it

   bigbluehat: would need to go through the tests to see if it
   requests the same ones later -- I don't think so
   ... it puts stuff in but I don't think it checks again later

   ShaneM: If that's the case, then we're good

   <bigbluehat> this is the thing ShaneM's been mentioning btw:
   [24]https://github.com/BigBlueHat/web-annotation-protocol-teste
   r

     [24] https://github.com/BigBlueHat/web-annotation-protocol-tester

   ShaneM: in our last two minutes, let's just agree on what we
   think is going to happen over the next week
   ... I have a couple of next actions from this conversation and
   will get on those straight away
   ... I have another to ensure people know how to run tests and
   upload results
   ... also working on getting the protocol stuff implemented

   TimCole: going to fill in a few more schemas to cover sections
   3 and 4
   ... if people could create some invalid annotations to
   exercise the failure cases, that would be helpful

   azaroth: I can do some of that

   ivan: When do you folks think we can begin to pester
   implementers to provide reports?

   TimCole: I think we need to do a reality check next Friday
   before we start inviting implementers -- maybe the week after
   that?

Adjourn

Summary of Action Items

Summary of Resolutions

    1. [25]Minutes of the previous call are approved:
       https://www.w3.org/2016/08/05-annotation-minutes.html

   [End of minutes]
     __________________________________________________________


    Minutes formatted by David Booth's [26]scribe.perl version
    1.144 ([27]CVS log)
    $Date: 2016/08/12 16:07:50 $

     [26] http://dev.w3.org/cvsweb/~checkout~/2002/scribe/scribedoc.htm
     [27] http://dev.w3.org/cvsweb/2002/scribe/
