Meeting minutes, 2016-03-25

Meeting minutes are here:

https://www.w3.org/2016/03/25-annotation-minutes.html

text version below

----
Ivan Herman, W3C
Digital Publishing Lead
Home: http://www.w3.org/People/Ivan/
mobile: +31-641044153
ORCID ID: http://orcid.org/0000-0003-0782-2704



   [1]W3C

      [1] http://www.w3.org/

              Web Annotation Working Group Teleconference

25 Mar 2016

   [2]Agenda

      [2] https://lists.w3.org/Archives/Public/public-annotation/2016Mar/0104.html

   See also: [3]IRC log

      [3] http://www.w3.org/2016/03/25-annotation-irc

Attendees

   Present
          Ben De Meester (bjdmeest), Benjamin Young, Chris Birk,
          Tim Cole, Ivan Herman, Kyrce Swenson, Paolo Ciccarese,
          Randall Leeds, Shane McCarron (ShaneM), TB Dinesh,
          Takeshi Kanai

   Regrets
          Nick, Dan Whaley

   Chair
          Tim Cole

   Scribe
          bjdmeest

Contents

     * [4]Topics
         1. [5]Acceptance of Minutes:
            https://www.w3.org/2016/03/18-annotation-minutes.html
         2. [6]Results of the CFC
         3. [7]Welcome ShaneM
         4. [8]Testing
     * [9]Summary of Action Items
     * [10]Summary of Resolutions
     __________________________________________________________

   <TimCole> Agenda:
   [11]https://lists.w3.org/Archives/Public/public-annotation/2016
   Mar/0104.html

     [11] https://lists.w3.org/Archives/Public/public-annotation/2016Mar/0104.html

   <TimCole> scribenick: bjdmeest

Acceptance of Minutes:
[12]https://www.w3.org/2016/03/18-annotation-minutes.html

     [12] https://www.w3.org/2016/03/18-annotation-minutes.html

   <TimCole> PROPOSED RESOLUTION: Minutes of the previous call are
   approved:
   [13]https://www.w3.org/2016/03/18-annotation-minutes.html

     [13] https://www.w3.org/2016/03/18-annotation-minutes.html

   TimCole: any concerns?

   RESOLUTION: Minutes of the previous call are approved:
   [14]https://www.w3.org/2016/03/18-annotation-minutes.html

     [14] https://www.w3.org/2016/03/18-annotation-minutes.html

Results of the CFC

   RESOLUTION: CFC was sent out last week
   ... 13 +1s as of a couple of minutes ago
   ... no 0s or -1s
   ... so CFC is accepted
   ... any concerns on the call?

   <TimCole> [15]https://github.com/w3c/web-annotation/issues/186

     [15] https://github.com/w3c/web-annotation/issues/186

   RESOLUTION: the CFC was under the assumption that the minor
   editorial issues would be addressed before publishing
   ... everything is done except for one

   ivan: that's fine now, the remaining one is for the future
   ... if we decide to make a FPWD, there are two (minor)
   consequences
   ... first: history of the splitting up is lost
   ... second: the patent policy would require starting from
   scratch for that document
   ... so at least 6 months are needed between FPWD and REC
   ... that's not ideal
   ... I discussed this
   ... the result:
   ... the Vocab doc is published as FPWD, and its previous
   version is the Model doc
   ... so consequences are resolved
   ... practical consequence is a small editorial change

   <TimCole> Proposed Resolution: publish all 3 as Working Draft,
   with previous draft for Vocabulary being the earlier Data Model
   draft

   RESOLUTION: publish all 3 as Working Draft, with previous draft
   for Vocabulary being the earlier Data Model draft

   ivan: question is: are the documents as they are today final
   and ready to be published?

   bigbluehat: yes

   ivan: also: have they been checked with the link checker and
   HTML checker etc.?

   paolociccarese: We can do it again

   ivan: I will have to check it to be safe, but if you guys could
   do that by Monday, I can do the rest of the admin on Monday.
   ... PaoloCiccarese, also change the previous version of the
   Vocab doc to the Model doc, as discussed
   ... I will pick it up from there

   ShaneM: Ivan, should the SotD be updated to say this is a split?

   ivan: Paolo can do it as well, but yes!
   ... in the status of the Vocab document, there should be an
   extra sentence that this is a split from the Model doc

Welcome ShaneM

   ShaneM: I've been with W3C since '97
   ... I'm with Spec-Ops, doing standards-related work
   ... Shepazu contacted me about testing
   ... I have a ton of questions
   ... I've been doing standards work since '85, and testing since
   '87

   ivan: Shane is modest. He was one of the main editors for the
   RDFa spec
   ... which might be useful
   ... he also co-maintains ReSpec

   TimCole: other announcements?

   <ShaneM> It is the section entitled "Status of this document"

Testing

   <TimCole>
   [16]https://www.w3.org/2016/03/18-annotation-minutes.html#test

     [16] https://www.w3.org/2016/03/18-annotation-minutes.html#test

   TimCole: There are some notes in last week's minutes
   ... we have to look into our documents, find the features that
   are described, and define a strategy to test these features
   ... and make sure they are unambiguous and implementable
   ... I welcome help
   ... Identifying the features is important, and so is
   implementing them
   ... particularly the selector approach might not be implemented
   in every system
   ... we have to have at least two implementations for each
   feature
   ... how do we proceed with this?

   ivan: the real question (for the model) is: what is it that we
   really want to test?
   ... it is a vocabulary for a specific usage, we have to
   identify what we want to test
   ... the current direction is that we define specific scenarios
   ... an implementation should show that these scenarios can be
   mapped onto the correct annotation structures
   ... and maybe also the other way around: annotation structures
   should be understood by implementations and in some way tell us
   what they would do with these annotations
   ... questions are: does that provide a reasonable way of
   testing the spec, and can this be translated into proper tools?
   ... we have to be very careful that we are not testing
   implementations, but we are testing the spec
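
   [For illustration: the scenario-based direction described here
   could start from a small record pairing a human task with the
   structures a conforming implementation may produce. A minimal
   sketch in Python; the field names, id, and URL are
   hypothetical, not taken from the spec.]

      # One scenario in the "scenario -> expected annotation
      # structure" direction; illustrative only.
      scenario = {
          "id": "scenario-001",                        # hypothetical
          "task": "Annotate the highlighted sentence in sample.html",
          "source": "http://example.org/sample.html",  # hypothetical
          # any of these selector types would count as a correct mapping
          "acceptable_selectors": {"TextQuoteSelector",
                                   "TextPositionSelector"},
      }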

   <Zakim> ShaneM, you wanted to disagree a little with ivan about
   what must be tested...

   TimCole: about the scenarios: a suggestion was to have sample
   resources to be tested, to illustrate the features

   ShaneM: it was said that we're not testing implementations...
   ... but each feature must be implemented
   ... so we do test the implementations implementing the features

   ivan: if my goal were to test the implementation, I would use
   automated tests
   ... in this case, we test the specifications, we can ask
   implementers to send a report, without knowing how they tested
   their implementation

   ShaneM: In the Web annotation spec, there are requirements on a
   lot of different actors
   ... data structure requirements
   ... client requirements
   ... interpretation requirements
   ... server requirements
   ... I assume you want to test all of them?

   TimCole: I think so
   ... the question was: what does an implementation have to do
   with an existing annotation to make sure it interprets it
   correctly?

   ShaneM: you don't have behaviour requirements
   ... or UI requirements
   ... so that makes your testing burden lower

   ... here, you just have to ensure that consumers of the data
   format receive the data format intact

   ShaneM: you can ignore SHOULDs for testing purposes
   ... I would focus on the MUSTs and MUST NOTs

   <tbdinesh> +1 for Tim

   ShaneM: Talking about MUSTs
   ... there are some data structural requirements, which come for
   free via JSON-LD
   ... so testing conforming output is probably kind of manual
   ... e.g., these N selector annotations need to be tested
   ... you don't want to test if the region makes sense or the CSS
   selector is correct etc.
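
   [For illustration: the "kind of manual" structural check
   mentioned above might look like the following Python sketch,
   which verifies only the basic shape the model draft requires
   and deliberately does not judge the selection itself. The
   context URL and required keys are read from the model draft;
   treat the rest as assumptions.]

      import json

      ANNO_CONTEXT = "http://www.w3.org/ns/anno.jsonld"

      def check_basic_shape(doc):
          """Return a list of structural problems (empty = passes)."""
          problems = []
          anno = json.loads(doc)  # must at least be valid JSON
          # @context may also be an array containing this URI;
          # only the simple case is checked here, for brevity
          if anno.get("@context") != ANNO_CONTEXT:
              problems.append("missing or wrong @context")
          if anno.get("type") != "Annotation":
              problems.append('type must be "Annotation"')
          if "target" not in anno:
              problems.append("an annotation must have a target")
          return problems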

   ivan: Let's say that we describe a scenario: here's an SVG
   file, and we want to put an annotation on this circle in the
   upper corner
   ... the resulting annotation structure should correspond to an
   annotation put on that corner
   ... in the output, we expect an SVGSelector pointing to the
   correct corner
   ... so we need to check that it is correct JSON-LD, and correct
   per our spec (i.e., that it's an SVGSelector)
   ... but we don't have to check that the SVGSelector actually
   selects the correct target?
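
   [For illustration: ivan's SVG scenario as a check that stops
   exactly where he suggests, at the selector type, without
   rendering the SVG to verify the region. A Python sketch; the
   sample URL is hypothetical.]

      def check_svg_scenario(anno):
          """Scenario: annotate the circle in the upper corner of
          sample.svg. Verify only the structure the spec asks for;
          whether the selector really outlines that circle is left
          to a human (or an SVG renderer we choose not to require)."""
          target = anno.get("target")
          if not isinstance(target, dict):
              return False  # a bare IRI target carries no selector
          selector = target.get("selector") or {}
          return (target.get("source") == "http://example.org/sample.svg"
                  and selector.get("type") == "SvgSelector")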

   ShaneM: you could go into that depth, but I'm not sure it's
   required
   ... because there are a lot of ways of specifying that region
   ... suppose you have a server that's going to analyze that
   annotation; it's hard to analyze every detail
   ... you would need an SVG renderer
   ... you could do that manually, but that's very time-consuming

   ivan: I was considering human testing, but that's very time
   consuming

   ShaneM: I always look at this as: what can we automate?

   ivan: the only advantage is that the model is relatively simple
   ... we don't have a huge number of things to test

   ShaneM: there are some combinatorial things that I have noted

   TimCole: Manual testing can be very expensive; we thought
   about having a specific scenario: this is the image, this is
   the exact circle to annotate, and that should limit the number
   of ways to do it
   ... e.g., TextQuoteSelectors don't have that many ways

   ShaneM: depends on what kind of constraints you want to put on
   the client
   ... talking about text range selection
   ... you allow for a number of ways to express it
   ... not all clients will implement all ways
   ... and I assume the client decides which expression will be
   the right way
   ... depending on the context
   ... do you want to require that for test X the client gives a
   CSS selector, and for test Y a text range selector?

   ivan: that doesn't sound right

   ShaneM: another way would be: here's a sample document, select
   these 5 words
   ... the server side should check: which expression does it use,
   and is that correct?
   ... that way, you simplify the test matrix, without testing
   every possible combination
   ... you can say: the text selector works
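
   [For illustration: ShaneM's "select these 5 words" idea on the
   server side, sketched in Python: accept whichever selector
   expression the client chose, and validate just that one. The
   sample text and expected words are hypothetical.]

      SAMPLE_TEXT = "The quick brown fox jumps over the lazy dog."
      EXPECTED = "quick brown fox jumps over"  # the 5 words

      def check_selection(selector):
          """Validate only the expression the client produced."""
          kind = selector.get("type")
          if kind == "TextQuoteSelector":
              return selector.get("exact") == EXPECTED
          if kind == "TextPositionSelector":
              start, end = selector.get("start"), selector.get("end")
              if start is None or end is None:
                  return False
              return SAMPLE_TEXT[start:end] == EXPECTED
          # other selector types (CSS, XPath, ...) would get their
          # own branch; a client need only pass the branch for the
          # type it emitted
          return False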

   ivan: it can happen that one of the selectors is never used
   ... because another combination of selectors always works for
   the given scenarios
   ... does that mean that selector shouldn't be in the spec?

   ShaneM: you could say it becomes a feature at risk
   ... or, it could be part of the testing cycle
   ... e.g., are there ways to require the feature?

   TimCole: one extra thing before the end of the call
   ... who on the WG could help to build the list of features?
   ... someone (preferably not the editors) should identify the
   features in the docs
   ... certainly the MUSTs, maybe the SHOULDs

   ivan: we have to be careful to minimize demands on people's time
   ... we have to know in what format to make these scenarios

   <Zakim> ShaneM, you wanted to say that you should also identify
   if requirements are on a repository vs. a generator vs. a
   consumer

   ShaneM: it's useful, I've been doing it; I think one of the
   critical pieces is checking whether MUSTs and SHOULDs are
   really MUSTs and SHOULDs
   ... and also, what parts of the system these features are on
   ... we need to compartmentalize by
   ... repository vs. generator vs. consumer
   ... in terms of interoperability, we should make sure any
   generator can talk to any repository
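
   [For illustration: the generator-to-repository direction could
   eventually be exercised through the protocol draft's create
   operation. A hedged Python sketch using the requests library;
   the container URL is hypothetical, and the media type follows
   the protocol draft.]

      import requests

      CONTAINER = "http://example.org/annotations/"  # hypothetical
      MEDIA_TYPE = ('application/ld+json; '
                    'profile="http://www.w3.org/ns/anno.jsonld"')

      def create_annotation(anno_json):
          """A generator posts an annotation to a repository's
          container; 201 Created signals success."""
          resp = requests.post(CONTAINER,
                               data=anno_json.encode("utf-8"),
                               headers={"Content-Type": MEDIA_TYPE})
          return resp.status_code == 201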

   ivan: you have implementations not bound to a specific server
   ... there are 2 directions of testing:
   ... scenario-based
   ... and from annotation-structure to interpretation of the
   implementation

   TimCole: I don't know whether the compartments are already
   named very well

   PaoloCiccarese: I don't have specific testing ideas
   ... I didn't have to test for different servers or clients
   ... we changed the specs over time, in my case, we used SPARQL
   validators for the model
   ... and adapted over time

   TimCole: The Annotation Community Group identified 55(?) MUSTs
   and SHOULDs, and validated them semantically using a set of
   SPARQL queries
   ... two for every MUST and SHOULD
   ... one to check whether a feature applied, a second to
   validate it
   ... but the spec has changed since then
   ... and it's not only semantic anymore
   ... there must be three components, but they're not defined yet
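
   [For illustration: the Community Group approach TimCole
   recalls, sketched with Python and rdflib. The two queries are
   an illustrative stand-in for one requirement ("an Annotation
   must have a target"); the real set reportedly had such a pair
   per MUST and SHOULD.]

      from rdflib import Graph

      APPLIES = """
          PREFIX oa: <http://www.w3.org/ns/oa#>
          ASK { ?a a oa:Annotation }
      """
      VIOLATIONS = """
          PREFIX oa: <http://www.w3.org/ns/oa#>
          SELECT ?a WHERE {
              ?a a oa:Annotation .
              FILTER NOT EXISTS { ?a oa:hasTarget ?t }
          }
      """

      def requirement_passes(g: Graph) -> bool:
          if not g.query(APPLIES).askAnswer:
              return True  # feature not used: nothing to validate
          return len(list(g.query(VIOLATIONS))) == 0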

   <PaoloCiccarese> +1

   ShaneM: I agree, and I would structure the tests that way

   ivan: it would be very helpful if Paolo could give a small
   description of how his implementation was tested
   ... it would help me, but maybe also Shane
   ... maybe I could ask Takeshi to do something similar

   takeshi: I have some, but I was thinking about modifying the
   testing files to test for internationalization

   <ShaneM> Every little bit helps

   ivan: having a good feeling of what is currently being done
   would be very helpful

   ShaneM: every little piece will come together in the whole
   ... tests should be as discrete as possible
   ... big (i.e., system integration) tests exist
   ... but the small tests show where something breaks down
   ... e.g., the same scenario needs to be tested for 11 different
   variables
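
   [For illustration: discrete, per-variable tests of the kind
   ShaneM advocates map naturally onto parametrized unit tests. A
   pytest sketch; the selector list and the generator stub are
   hypothetical stand-ins for the "11 different variables".]

      import pytest

      SELECTOR_TYPES = ["TextQuoteSelector", "TextPositionSelector",
                        "CssSelector", "XPathSelector",
                        "FragmentSelector", "SvgSelector"]

      def make_annotation(selector_type):
          """Stub for the implementation under test."""
          return {"type": "Annotation",
                  "target": {"source": "http://example.org/doc",
                             "selector": {"type": selector_type}}}

      @pytest.mark.parametrize("selector_type", SELECTOR_TYPES)
      def test_scenario_one(selector_type):
          # one small test per variable, so a failure pinpoints
          # exactly which piece broke
          anno = make_annotation(selector_type)
          assert anno["target"]["selector"]["type"] == selector_type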

   PaoloCiccarese: a critical point would be the republishing of
   annotations
   ... I'm not sure we have the solution to that
   ... it will be interesting to test
   ... it will be cross-system testing
   ... test system a, then system b, then send from a to b, and
   have certain expectations
   ... it's one of the most delicate points
   ... duplication of annotations will make things get out of
   control
   ... but it's the web, so it will happen
   ... about my current testing: I mostly do code testing, very
   tailored, lots of RDF
   ... so testing is many roundtrips to the triple store

   TimCole: There's a need to talk about what we think are the
   compartments (generator, consumer, repository)
   ... then, we need to talk about scenarios and features
   ... to make some progress before the F2F; next week we might
   also talk about this testing topic

   ivan: email discussion is fine at the moment

   ShaneM: I'll put issues or send questions to the mailing list

   ivan: will you join the WG?

   ShaneM: I'll ask, I know the drill

   TimCole: [adjourn]
   ... next week: F2F, conformance
   ... bye!

   <ivan> trackbot, end telcon

Summary of Action Items

Summary of Resolutions

    1. [17]Minutes of the previous call are approved:
       https://www.w3.org/2016/03/18-annotation-minutes.html
    2. [18]CFC was sent out last week
    3. [19]the CFC was under the assumption that the minor
       editorial issues would be addressed before publishing
    4. [20]publish all 3 as Working Draft, with previous draft for
       Vocabulary being the earlier Data Model draft

   [End of minutes]
     __________________________________________________________


    Minutes formatted by David Booth's [21]scribe.perl version
    1.144 ([22]CVS log)
    $Date: 2016/03/25 16:13:20 $

     [21] http://dev.w3.org/cvsweb/%7Echeckout%7E/2002/scribe/scribedoc.htm
     [22] http://dev.w3.org/cvsweb/2002/scribe/
