- From: Ivan Herman <ivan@w3.org>
- Date: Fri, 18 Mar 2016 17:23:43 +0100
- To: W3C Public Annotation List <public-annotation@w3.org>
- Message-Id: <8FB3E434-D891-4A3D-AF4D-C12A43126504@w3.org>
Minutes are here:
https://www.w3.org/2016/03/18-annotation-minutes.html
Text version below.
Ivan
----
Ivan Herman, W3C
Digital Publishing Lead
Home: http://www.w3.org/People/Ivan/
mobile: +31-641044153
ORCID ID: http://orcid.org/0000-0003-0782-2704
[1]W3C
[1] http://www.w3.org/
Web Annotation Working Group Teleconference
18 Mar 2016
See also: [2]IRC log
[2] http://www.w3.org/2016/03/18-annotation-irc
Attendees
Present
Ivan Herman, Rob Sanderson (azaroth), Tim Cole, Doug
Schepers (shepazu), Ben De Meester, Takeshi Kanai, Dan
Whaley, Nick Stenning
Regrets
Davis Salisbury, Paolo Ciccarese, Benjamin Young,
Frederick Hirsch, Randall Leeds
Chair
Rob_Sanderson, Tim_Cole
Scribe
bjdmeest
Contents
* [3]Topics
1. [4]1. Scribe selection, Agenda review, Announcements?
2. [5]Minutes approval
3. [6]Issue CFC for publication of Model, Vocab and
Protocol
4. [7]Testing
* [8]Summary of Action Items
* [9]Summary of Resolutions
__________________________________________________________
1. Scribe selection, Agenda review, Announcements?
azaroth: we'll also talk (quickly) about the CFC
... if there are no large outstanding issues, let's issue a CFC
within one week
... any announcements?
timcole: meeting next week?
<TimCole> I will be present
<dwhly> I will be
I can be present
<takeshi> I will be present
TimCole: is there any specific topic to work on?
azaroth: testing would be the big one
TimCole: I'm happy to host a meeting about that next week
... how about I'll do an e-mail about that call?
azaroth: perfect
Minutes approval
<azaroth> PROPOSED RESOLUTION: Minutes of the previous call are
approved:
[10]https://www.w3.org/2016/03/11-annotation-minutes.html
[10] https://www.w3.org/2016/03/11-annotation-minutes.html
RESOLUTION: Minutes of the previous call are approved:
[11]https://www.w3.org/2016/03/11-annotation-minutes.html
[11] https://www.w3.org/2016/03/11-annotation-minutes.html
Issue CFC for publication of Model, Vocab and Protocol
azaroth: we would like to have a CFC to publish the docs
... especially the vocab doc needs more text, and diagrams
still need to be done
... but to get feedback about the technicalities, more
visibility would be good
... we can do a one-week CFC via email
... any concerns about that?
... any issues that need to be addressed beforehand, apart from
Ivan's comments?
timcole: we don't have a vocab doc on W3C yet, I think
ivan: that was an index mistake, I changed that
<ivan> [12]http://w3c.github.io/web-annotation/model/wd2/
[12] http://w3c.github.io/web-annotation/model/wd2/
<ivan> [13]http://w3c.github.io/web-annotation/vocab/wd/
[13] http://w3c.github.io/web-annotation/vocab/wd/
<ivan> [14]http://w3c.github.io/web-annotation/protocol/wd/
[14] http://w3c.github.io/web-annotation/protocol/wd/
ivan: (these are the URLs used for the CFC)
... the dates on these drafts are close to today's date
... these three documents would become the next versions of the
/TR WDs
timcole: draft should be updated quickly
ivan: it's very timely that we have these things published
azaroth: we don't have a shortname for the vocab yet
ivan: we need to have that
... in the CFC, best, we propose the shortname as well
... the final resolution e-mail should also have that shortname
azaroth: is annotation-vocab ok?
<TimCole> +1
<nickstenn__> +1
azaroth: seems consistent with the other shortnames
<ivan> +1
+1
ivan: on timing: the restriction I have is that I am around the
week of the 28th of March; the week after that, I am away for
two consecutive weeks
... I propose we try to get this published on Thursday the 31st
<azaroth> +1 to 2016-03-31
ivan: the editor should prepare by going through all the
checkers (e.g., link checkers)
... so there won't be any last-minute editorial problems
azaroth: I did it before and I'll do it again
... the model and protocol docs point to the vocab spec, but
there is no publication yet
ivan: you can have a local bibliography in respec
... the local bibliography should include all three documents
... I'll send you an example
azaroth: by the time we get through CR, we don't need a local
bib?
ivan: I think these versions should always be dated URIs
... it always puts the date of the publication there, but that
would be wrong
... we have to update that until the REC version
azaroth: ok
... any other thoughts?
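(For reference, ReSpec supports a localBiblio entry in its
respecConfig for exactly this purpose. A minimal sketch of what
such entries might look like, using the shortnames discussed
above; the dated URLs and titles below are illustrative
placeholders, not the published references.)

    const respecConfig = {
      // ... existing configuration ...
      localBiblio: {
        // Placeholder entries for the three companion drafts
        "annotation-model": {
          title: "Web Annotation Data Model",
          href: "https://www.w3.org/TR/2016/WD-annotation-model-20160331/",
          publisher: "W3C"
        },
        "annotation-vocab": {
          title: "Web Annotation Vocabulary",
          href: "https://www.w3.org/TR/2016/WD-annotation-vocab-20160331/",
          publisher: "W3C"
        },
        "annotation-protocol": {
          title: "Web Annotation Protocol",
          href: "https://www.w3.org/TR/2016/WD-annotation-protocol-20160331/",
          publisher: "W3C"
        }
      }
    };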
Testing
azaroth: last week, we talked about what we could ask the
clients and servers to conform to
... i.e., core model vs. support for different types of
selectors
shepazu: so the profiles-thing?
azaroth: yes, also syntactic vs semantic vs UI testing
... also about the tools to test with
... and about the W3C rules about what needs to be tested
shepazu: usually testing depends on conformance
ivan: the question is: for the three things (vocab, model,
protocol), what do we want to test?
... profiles etc. is separate
... how was the LDP protocol tested?
azaroth: LDP had a test-kit that you could download and test
against your implementation
... it would do the various methods
... and generate a report
ivan: so a separate program (client-side) that tested the
server?
azaroth: yes, rather than a module incorporated into the server
... I can ask our developer who used it about his experience
ivan: for the protocol, that seems reasonable to me
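(To make the LDP-style approach concrete: a client-side test kit
for the annotation protocol might run checks along these lines.
This is only a sketch; the container URL and annotation content
are made up, and the exact status codes and headers to check are
whatever the protocol draft ends up requiring.)

    // Sketch: create an annotation on a (hypothetical) server,
    // read it back, and check a few basics.
    const container = "https://example.org/annotations/"; // assumed endpoint

    async function roundTrip(): Promise<void> {
      const annotation = {
        "@context": "http://www.w3.org/ns/anno.jsonld",
        "type": "Annotation",
        "body": { "type": "TextualBody", "value": "A test comment" },
        "target": "https://example.org/page.html"
      };

      // POST to the container to create the annotation
      const created = await fetch(container, {
        method: "POST",
        headers: { "Content-Type": "application/ld+json" },
        body: JSON.stringify(annotation)
      });
      console.assert(created.status === 201, "expected 201 Created");
      const location = created.headers.get("Location");
      console.assert(location !== null, "expected a Location header");

      // GET it back and check it still looks like an annotation
      const fetched = await fetch(location!, {
        headers: { Accept: "application/ld+json" }
      });
      const body = await fetched.json();
      const types = Array.isArray(body.type) ? body.type : [body.type];
      console.assert(types.includes("Annotation"),
        "expected the retrieved resource to be an Annotation");
    }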
shepazu: Chris and I talked about testing the model
... a validator would not be sufficient
... simply saying 'this incarnation validates against the
model' is not enough; that's only testing some content
... we talked about creating a test page, i.e., the target of
the annotation
... the annotation client does the steps to create an
annotation (manual or automatic)
... that annotation is sent to a toy annotation server
... downloaded again into the client, and validated there, to
check whether the structure fits with the model
... that would be 1) reusable, and 2) actually test the client,
not only validate
timcole: that makes a lot of sense
... but it conflates the testing of the protocol and the model
shepazu: it doesn't matter how the annotation is published to
the website; it does matter that it is published
... e.g., in CSS, they make a baseline
... if the client could simply give the annotation to the other
client-side code, that would also work, but it would violate the
principle that the client is inaccessible to other clients
<azaroth> (LDP Implementation Report:
[15]https://dvcs.w3.org/hg/ldpwg/raw-file/default/tests/reports
/ldp.html )
[15] https://dvcs.w3.org/hg/ldpwg/raw-file/default/tests/reports/ldp.html
timcole: do we need something like a report on how a server
reacts to the protocol?
... for the client: can it create and consume correct
annotations and use them?
... so, can the client also recognize incorrect annotations?
<azaroth> And the test suite:
[16]http://w3c.github.io/ldp-testsuite/
[16] http://w3c.github.io/ldp-testsuite/
timcole: I'm worried about not having a method specified for
checking the generated annotations
... the process of sending it somewhere could miss errors
ivan: two things:
... first: not all implementations that we may want to test are
such that you can set up a web page that easily
... if the goal is to annotate data on the web, then I don't
see what the web page is
... I wouldn't restrict this to only web-page-based
implementations
... second: let's say we have that, the annotation system
produces the relevant structure, and we have to test whether
the structure is right, or makes mistakes
... what we did for RDFa is that something happens and produces
a clear structure
... for each of the tasks, we have the pattern that must be
generated
... and an automatic procedure that could compare results
... that requires that the annotation system can dump the
structure into an external file
... if we can't do automatic testing, we give the annotation
system some tasks and compare the output structure with what we
expect should happen
... I don't know whether we can do that automatically
shepazu: manual testing is more time-consuming, and doesn't
work well within our testing framework, but it might be
inevitable
... i.e., the W3C testing framework
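(A small sketch of the dump-and-compare approach Ivan describes:
assuming an implementation can dump its annotation as a JSON
file for a given task, and that the group publishes an expected
pattern per task, a naive comparison could look like this. File
names and the matching rule are invented for illustration.)

    import { readFileSync } from "fs";

    // The structure dumped by the implementation, and the expected pattern
    const produced = JSON.parse(readFileSync("task-01-output.json", "utf8"));
    const expected = JSON.parse(readFileSync("task-01-expected.json", "utf8"));

    // Naive structural match: everything in the expected pattern must
    // appear in the produced annotation; extra properties are tolerated.
    function matches(expected: any, actual: any): boolean {
      if (typeof expected !== "object" || expected === null) {
        return expected === actual;
      }
      if (typeof actual !== "object" || actual === null) {
        return false;
      }
      return Object.keys(expected).every(
        (key) => key in actual && matches(expected[key], actual[key])
      );
    }

    console.log(matches(expected, produced) ? "PASS" : "FAIL");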
<Zakim> azaroth, you wanted to ask Dan about AAK as potential
audience
ivan: that's not for all types of working groups
azaroth: Dan, about the AAK as a potential audience: what would
be valuable as testing implementations?
dwhly: it's a little premature for now
... the coalition is currently a group of publishers, not
software writers
... I think that's most useful for the upcoming F2F: what are
the use cases for annotation, and how do those use cases
articulate in an interoperable annotation layer?
... the technical people can triage the use cases to see what
works with what the W3C is doing, and what does not
shepazu: in similar situations, the W3C has seen that validators
are useful if other people want to use the W3C annotation work
... to see that they are producing the output correctly
... a validator would be a necessary component
... if other people want to use the Web Annotation model
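(As a rough illustration of the kind of checks such a validator
might make on a single annotation: the checks below are guesses
at the sort of constraints involved, not the actual conformance
requirements of the drafts.)

    const ANNO_CONTEXT = "http://www.w3.org/ns/anno.jsonld";

    // Treat single values and arrays uniformly
    function asArray<T>(value: T | T[] | undefined): T[] {
      return value === undefined ? [] : Array.isArray(value) ? value : [value];
    }

    // Returns a list of problems; an empty list means the rough
    // checks all passed
    function validateAnnotation(anno: any): string[] {
      const errors: string[] = [];
      if (!asArray(anno["@context"]).includes(ANNO_CONTEXT)) {
        errors.push("missing the Web Annotation JSON-LD context");
      }
      if (!asArray(anno.type).includes("Annotation")) {
        errors.push("type does not include 'Annotation'");
      }
      if (asArray(anno.target).length === 0) {
        errors.push("annotation has no target");
      }
      return errors;
    }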
timcole: so, a single page won't be enough: shouldn't we
identify a set of annotation tests
... and see whether a client can generate a file which can be
validated and checked for errors?
... I would like to see whether we can identify all test cases
that are in the model
shepazu: each test would have its own page, and that page would
also contain the passing criteria
timcole: is it feasible, given that we have different kinds of
objects and different kinds of implementations, and some
implementations would only pass a part of the annotation tests?
shepazu: that's about different conformance classes
... if your annotation client only works with, e.g., image
objects, we test that client only for the relevant test cases
... W3C doesn't actually test implementations, it tests the
implementability of the specifications
... i.e., this feature of this spec was implemented
interoperably by two or more user agents
... if that feature does not have two passing implementations,
it is marked at risk and possibly removed
... until we have two passing implementations, or we move the
spec forward without that feature
timcole: I expect not to find two implementations that
implement all features
shepazu: you don't need that, it could be some kind of
combination of different clients
timcole: good, my first question was: can we find all test
cases
ivan: [about the CSV working group]: each test had a scenario
and a data file
... the implementation had to produce something (JSON, metadata,
etc.)
... each of these tests was run separately
... each implementation had to validate itself and return the
results in some accepted format
... about 350 different use cases
... to cover the various features
... if we have a structure like that, we need a certain number
of scenarios
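(One commonly accepted format for such self-reported results at
W3C is EARL; a single reported result might look roughly like
the entry below. The test URL and implementation name are
invented, and whether this group would use EARL at all is an
open question.)

    // Sketch of one EARL-style assertion (JSON-LD expressed as an
    // object literal); identifiers are placeholders.
    const assertion = {
      "@context": { "earl": "http://www.w3.org/ns/earl#" },
      "@type": "earl:Assertion",
      "earl:assertedBy": { "@id": "https://example.org/my-annotation-client" },
      "earl:subject": { "@id": "https://example.org/my-annotation-client" },
      "earl:test": { "@id": "https://example.org/tests/model/text-quote-selector-01" },
      "earl:result": {
        "@type": "earl:TestResult",
        "earl:outcome": { "@id": "earl:passed" }
      }
    };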
<Zakim> azaroth, you wanted to +1 tasks on sample resources
azaroth: I also like the idea of setting up a series of defined
tasks
... and possibly downloaded, tested, and uploaded again, or
done online if possible
... the question is that we would need a group of people to
implement the testing framework
nickstenn: what would the tests for the model look like?
... actually, does this mean processing annotation data like
clients would do it in the real world?
... but the model is about the semantics of the model, not
about implementations
<azaroth> +1 to not testing UA behavior on consumption of an
annotation
nickstenn: how could we possibly create a testing framework
that would work for all user agents?
... I'm thinking about giving a set of annotations and asking
implementers: can your client interpret them correctly?
timcole: the thing is that someone somewhere could implement a
tool that could generate, e.g., a selector
... secondly, could our testing framework distinguish between
correct and incorrect usage of a feature
... can a tool recognize the difference between a correct and
an incorrect annotation?
ivan: let's say we set up an HTML page
... and describe in human terms: this is the annotation the
user is supposed to do: select and comment
... the implementer would have to perform this task, and
internally, you would have to build up the annotation via the
annotation model
... and the implementation needs to dump the model into, e.g., a
JSON file
... then, the implementation shows that the model can describe
that action
... the other direction is that we provide annotation
structures, and see whether these annotation structures can be
understood by implementations
... it would be interesting to understand how current
annotation clients are tested
nickstenn: Hypothesis tests at a granular level
ivan: this is a question for other implementers as well
... we need a feeling of what is realistic
azaroth: we only test syntax, not semantics
... we don't test the interaction
... if there are any comments about testing, put them on the
mailing list
... and continue next week
<nickstenn__> sounds good to me
azaroth: adjourn
<azaroth> Thanks to bjdmeest for scribing! :)
<ivan> trackbot, end telcon
Summary of Action Items
Summary of Resolutions
1. [17]Minutes of the previous call are approved:
https://www.w3.org/2016/03/11-annotation-minutes.html
[End of minutes]
__________________________________________________________
Minutes formatted by David Booth's [18]scribe.perl version
1.143 ([19]CVS log)
$Date: 2016/03/18 16:20:26 $
[18] http://dev.w3.org/cvsweb/%7Echeckout%7E/2002/scribe/scribedoc.htm
[19] http://dev.w3.org/cvsweb/2002/scribe/
Received on Friday, 18 March 2016 16:23:53 UTC