- From: Ivan Herman <ivan@w3.org>
- Date: Fri, 26 Aug 2016 18:12:10 +0200
- To: W3C Public Annotation List <public-annotation@w3.org>
- Message-Id: <A558C5DD-9376-4609-9D56-99C7CEC22946@w3.org>
Minutes are here:
https://www.w3.org/2016/08/26-annotation-minutes.html
text version below.
Cheers
Ivan
----
Ivan Herman, W3C
Digital Publishing Lead
Home: http://www.w3.org/People/Ivan/
mobile: +31-641044153
ORCID ID: http://orcid.org/0000-0003-0782-2704
[1]W3C
[1] http://www.w3.org/
Web Annotation Working Group Teleconference
26 Aug 2016
[2]Agenda
[2] http://www.w3.org/mid/010201d1ff15$8813a540$983aefc0$@illinois.edu
See also: [3]IRC log
[3] http://www.w3.org/2016/08/26-annotation-irc
Attendees
Present
Shane McCarron, Jacob Jett, Rob Sanderson, Ivan Herman,
TB Dinesh, Benjamin Young, Dan Whaley, Tim Cole, Paolo
Ciccarese, Takeshi Kanai
Regrets
Ben_De_Meester
Chair
Rob, Tim
Scribe
Jacob
Contents
* [4]Topics
1. [5]Last week's minutes
2. [6]Exit Criteria
3. [7]Implementers doing testing
* [8]Summary of Action Items
* [9]Summary of Resolutions
__________________________________________________________
Last week's minutes
<TimCole> PROPOSED RESOLUTION: Minutes of the previous call are
approved:
[10]https://www.w3.org/2016/08/19-annotation-minutes.html
[10] https://www.w3.org/2016/08/19-annotation-minutes.html
<ivan> +1
<TimCole> +1
<ivan> scribenick: Jacob
<azaroth> +1
<ShaneM> +1
RESOLUTION: Minutes of the previous call are approved:
[11]https://www.w3.org/2016/08/19-annotation-minutes.html
[11] https://www.w3.org/2016/08/19-annotation-minutes.html
Exit Criteria
<azaroth> Rob's proposed text:
[12]https://rawgit.com/w3c/web-annotation/c5f2fdeeb7faad37af534
e5b057ce03d92aada44/model/wd2/index.html#candidate-recommendati
on-exit-criteria
[12] https://rawgit.com/w3c/web-annotation/c5f2fdeeb7faad37af534e5b057ce03d92aada44/model/wd2/index.html#candidate-recommendation-exit-criteria
<azaroth> And:
[13]https://rawgit.com/w3c/web-annotation/3af99d57ec3ec645b17e4
961cb63974f07a22feb/protocol/wd/index.html#candidate-recommenda
tion-exit-criteria
[13] https://rawgit.com/w3c/web-annotation/3af99d57ec3ec645b17e4961cb63974f07a22feb/protocol/wd/index.html#candidate-recommendation-exit-criteria
ivan: will make an editorial change to put the exit criteria in
all three documents
TimCole: what needs to go into that summary? do we need some
ancillary documentation to clarify the text that goes in?
ivan: essential part would be the only part in the documents,
can link to other documents for greater details
TimCole: once a test is up and running, may be worthwhile to
get the summary of assertions, e.g., what are we testing?
... what things are we claiming are features [of the model]?
<ShaneM> I have put a PR into the test results tree:
[14]https://github.com/w3c/test-results/pull/32
[14] https://github.com/w3c/test-results/pull/32
TimCole: fairly easy to summarize at a high-level but as has
been pointed out, we reuse properties on different objects
... is the feature the property or the combination of property
on a particular object?
<Zakim> ShaneM, you wanted to ask "feature of what?"
azaroth: had previously decided to treat the combination of
properties on objects to be the feature, e.g., the name on an
agent
ShaneM: feature of model? feature of vocabulary? feature of
what?
... need more context
azaroth: need exit criteria for both; if they are not the same
then not sure what the criteria for vocab would be
ShaneM: focus on model for now
... need to test the properties in their contexts, e.g., target
at top level means something different than target at a deeper
level
TimCole: would treat agent as creator of annotation as a
feature, agent as creator of a specific resource as a feature,
etc.
<ShaneM> can't we just say "2 independent implementations of
each feature" ?
<ShaneM> Long discussion of draft text for CR exit criteria.
Rob had proposed text in github:
<ivan> [15]proposal for the protocol
[15] https://rawgit.com/w3c/web-annotation/3af99d57ec3ec645b17e4961cb63974f07a22feb/protocol/wd/index.html#candidate-recommendation-exit-criteria
<ivan> [16]proposal for the model
[16] https://rawgit.com/w3c/web-annotation/c5f2fdeeb7faad37af534e5b057ce03d92aada44/model/wd2/index.html#candidate-recommendation-exit-criteria
ivan: what to do about the vocabulary?
azaroth: could reuse the same proposal as the one for the
model; hard to test the vocab separately from serializations
ivan: need to test that the model, when expressed in ttl, is valid rdf
<Zakim> ShaneM, you wanted to note that we delivered a
serialization of the vocab. testing its implementation is
testing to make sure that multiple processors can parse it.
ShaneM: our implementation is the json-ld context
azaroth: context + ontology, those two together
<ShaneM> need to pull the context document into various JSON-LD
processors
ivan: did we systematically check for validation of the
context, check for production of valid rdf, etc.?
<azaroth>
[17]https://github.com/w3c/web-annotation/blob/gh-pages/model/w
d2/check_egs.py
[17] https://github.com/w3c/web-annotation/blob/gh-pages/model/wd2/check_egs.py
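The systematic check ivan asks about might look something like the sketch below. This is a simplified, illustrative stand-in (the group's actual script is check_egs.py, linked above, which does more): it verifies that an embedded example parses as JSON and declares the expected anno.jsonld @context before it would be handed to a JSON-LD processor. The function name and example are hypothetical.

```python
import json

# Hypothetical, simplified stand-in for the kind of example-checking
# discussed on the call; the real tooling is check_egs.py in the
# web-annotation repo.

ANNO_CONTEXT = "http://www.w3.org/ns/anno.jsonld"

def check_example(text):
    """Return a list of problems found in one serialized example."""
    problems = []
    try:
        doc = json.loads(text)
    except ValueError as err:
        return ["not valid JSON: %s" % err]
    ctx = doc.get("@context")
    # The context may be a single IRI or a list that includes it.
    if ctx != ANNO_CONTEXT and not (isinstance(ctx, list) and ANNO_CONTEXT in ctx):
        problems.append("missing or unexpected @context")
    if "type" not in doc:
        problems.append("missing type")
    return problems

example = """{
  "@context": "http://www.w3.org/ns/anno.jsonld",
  "id": "http://example.org/anno1",
  "type": "Annotation",
  "target": "http://example.org/page1"
}"""

print(check_example(example))  # -> []
```

A real pass over the documents would then feed each surviving example to a JSON-LD processor to confirm it produces the intended RDF.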
TimCole: hasn't been done systematically yet
... do we need to include annotations submitted by
implementers?
<ShaneM> transitivity supports the validity
ivan: no, if the context doc is okay then their annotations
will be ok
<TimCole> PROPOSAL: Go forward with Rob's drafts and our
discussion about Vocabulary Exit Criteria (the last will be
reviewed via email).
<ivan> +1
+1
<azaroth> +1
<TimCole> +1
<ShaneM> +1
<ShaneM> Should it be in a branch?
<TimCole> Ivan will make sure these get published once ready in
gitHub
<azaroth> It's in exit-criteria at the moment
RESOLUTION: Go forward with Rob's drafts and our discussion
about Vocabulary Exit Criteria (the last will be reviewed via
email).
Implementers doing testing
<ShaneM>
[18]http://w3c-test.org/tools/runner/index.html?path=/annotatio
n-model
[18] http://w3c-test.org/tools/runner/index.html?path=/annotation-model
TimCole: discussed emailing implementers last week, have yet to
move on that
... (@implementers on call) the first set of tests is up,
with more going up later today, along with a report
... want implementers to start using tests
... who should we prod to use these?
<ShaneM> Anyone can run tests here:
[19]http://w3c-test.org/tools/runner/index.html?path=/annotatio
n-model
[19] http://w3c-test.org/tools/runner/index.html?path=/annotation-model
dwhly: building a universal client architecture
... will take the summary here and ask tech team what they need
to move forward
... want to test interoperability
TimCole: Nick's feedback on whether or not the testing process
makes sense will be helpful, even if a bit of a distraction
<ShaneM> For example, is this page usable?
[20]http://w3c-test.org/annotation-model/annotations/annotation
AgentOptionals-manual.html
[20] http://w3c-test.org/annotation-model/annotations/annotationAgentOptionals-manual.html
<tbdinesh> I can use that too so we can also start with tests
PCiccarese: still updating the client aspects of domeo and
annotea, so not doing useful implementations atm
azaroth: no distinction of where the implementation is
(client-side or server-side)
PCiccarese: so if old server could produce a new annotation,
that would count as an implementation?
<Zakim> ShaneM, you wanted to point out that the protocol tests
just speak the protocol.
<tbdinesh> but to read it back might be harder (paolo)
<azaroth> Protocol implementations won't be a problem, I think.
<ShaneM> azaroth: yay!
<azaroth> Benjamin and I each have one, plus I know of two
others
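The point that protocol tests "just speak the protocol", regardless of where the implementation lives, can be sketched as a toy exchange: POST an annotation to a container and expect 201 Created with a Location header. The stub server below is hypothetical and echoes only the minimum the exchange needs; real servers implement the full Web Annotation Protocol.

```python
import http.server
import threading
import urllib.request

# Hypothetical stub container: accepts a POSTed annotation and replies
# 201 Created with a Location header, as the protocol's create
# operation requires. Not a real implementation.
class StubContainer(http.server.BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        self.send_response(201)
        self.send_header(
            "Location",
            "http://localhost:%d/anno/1" % self.server.server_port)
        self.send_header(
            "Content-Type",
            'application/ld+json; profile="http://www.w3.org/ns/anno.jsonld"')
        self.end_headers()
        self.wfile.write(body)  # echo the stored annotation back

    def log_message(self, *args):  # keep the example quiet
        pass

server = http.server.HTTPServer(("localhost", 0), StubContainer)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "client" side of the test just speaks HTTP at the container.
req = urllib.request.Request(
    "http://localhost:%d/" % server.server_port,
    data=b'{"type": "Annotation", "target": "http://example.org/page1"}',
    headers={"Content-Type":
             'application/ld+json; profile="http://www.w3.org/ns/anno.jsonld"'},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    status = resp.status
    location = resp.headers["Location"]
server.shutdown()
print(status, location)
```

Because the test only observes the HTTP exchange, an old server that can produce a conforming response counts the same as a new one.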
TimCole: if tests are in good shape next week (which seems to
be the case), will contact several other implementers to engage
with the tests
<Zakim> ShaneM, you wanted to note that I think that the
protocol tests include a server implementation.
TimCole: also adding to the web repo readme some info about how
to use the schemas locally
... Rob helping with the python
... Jacob already gave some text describing the ajv / node.js
process
... want to have some experience with implementers by the end
of next week so that our extension request has some basis
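The local schema validation Jacob described runs ajv under node.js; as a rough stand-in, the sketch below shows in Python the shape of what such a check asserts, covering only the JSON Schema "required" and "type" keywords. The schema fragment is hypothetical, not one of the working group's actual schemas.

```python
import json

# Minimal stand-in for the ajv/node.js validation step: a hand-rolled
# checker for two JSON Schema keywords. The schema below is a
# hypothetical fragment, not a real annotation-model schema.
SCHEMA = {
    "required": ["@context", "type", "target"],
    "properties": {
        "type": {"type": "string"},
    },
}

TYPE_MAP = {"string": str, "object": dict, "array": list}

def validate(doc, schema):
    """Collect JSON-Schema-style errors for the two keywords modeled."""
    errors = []
    for key in schema.get("required", []):
        if key not in doc:
            errors.append("missing required property: %s" % key)
    for key, rule in schema.get("properties", {}).items():
        if key in doc and "type" in rule:
            if not isinstance(doc[key], TYPE_MAP[rule["type"]]):
                errors.append("property %s is not a %s" % (key, rule["type"]))
    return errors

anno = json.loads("""{
  "@context": "http://www.w3.org/ns/anno.jsonld",
  "type": "Annotation",
  "target": "http://example.org/page1"
}""")

print(validate(anno, SCHEMA))  # -> []
```

The real process hands the same JSON and the published schemas to ajv, which implements the full JSON Schema vocabulary.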
ShaneM: protocol test q: submitting the server wpt first, is
it okay if only the server tests go up initially?
... e.g., tests exercising the server are nearly ready to
submit, but the ones testing a client are farther from
readiness; is it okay to push on without the other?
ivan: need to get the extension request to w3c sooner rather
than later, need to demonstrate that we have things that can be
relied on
... would be great if by 2 weeks from now the
testing/implementation repo is not empty
... even incomplete results are good
... so whatever we have
ShaneM: so we could begin generating reports later today
... concerned that we haven't actually looked at the tests
... need to make sure the results match our expectations
TimCole: model tests, excepting the annotation collections,
should go up over the weekend
... do need to look at the validation (pass/fail), how to
generate the report to make clear the differences between the
examples from the documentation, e.g., ex. 1 (no substantial
features) vs. ex. 42 (an annotation collection)
... vs. ex. 44 (many substantial features)
<ivan> trackbot, end telcon
Summary of Action Items
Summary of Resolutions
1. [21]Minutes of the previous call are approved:
https://www.w3.org/2016/08/19-annotation-minutes.html
2. [22]Go forward with Rob's drafts and our discussion about
Vocabulary Exit Criteria (the last will be reviewed via
email).
[End of minutes]
__________________________________________________________
Minutes formatted by David Booth's [23]scribe.perl version
1.143 ([24]CVS log)
$Date: 2016/08/26 16:09:55 $
[23] http://dev.w3.org/cvsweb/~checkout~/2002/scribe/scribedoc.htm
[24] http://dev.w3.org/cvsweb/2002/scribe/
Received on Friday, 26 August 2016 16:12:20 UTC