Draft Minutes from the 16 Sep 2016 Meeting

Draft minutes from the meeting are at
https://www.w3.org/2016/09/16-annotation-minutes.html

A textual version is below:

- DRAFT -
Web Annotation Working Group Teleconference
16 Sep 2016

Agenda <http://www.w3.org/mid/033001d20f58$8f91ee50$aeb5caf0$@illinois.edu>

See also: IRC log <http://www.w3.org/2016/09/16-annotation-irc>
Attendees
Present: ShaneM, Tim_Cole, TB_Dinesh, DAN_WHALEY, Takeshi_Kanai,
Paolo_Ciccarese
Regrets: Rob, Benjamin
Chair: Tim
Scribe: ShaneM
Contents

   - Topics <https://www.w3.org/2016/09/16-annotation-minutes.html#agenda>
      1. Minutes
      <https://www.w3.org/2016/09/16-annotation-minutes.html#item01>
      2. Minutes
      <https://www.w3.org/2016/09/16-annotation-minutes.html#item02>
      3. Announcements
      <https://www.w3.org/2016/09/16-annotation-minutes.html#item03>
      4. Issue Updates
      <https://www.w3.org/2016/09/16-annotation-minutes.html#item04>
      5. Testing Updates
      <https://www.w3.org/2016/09/16-annotation-minutes.html#item05>
      6. Testing
      <https://www.w3.org/2016/09/16-annotation-minutes.html#item06>
   - Summary of Action Items
   <https://www.w3.org/2016/09/16-annotation-minutes.html#ActionSummary>
   - Summary of Resolutions
   <https://www.w3.org/2016/09/16-annotation-minutes.html#ResolutionSummary>

------------------------------

<scribe> Scribe: ShaneM
Minutes

<TimCole> PROPOSED RESOLUTION: Minutes of the previous call are approved:
https://www.w3.org/2016/09/09-annotation-minutes.html
Minutes

RESOLUTION: Minutes of the previous call are approved:
https://www.w3.org/2016/09/09-annotation-minutes.html
Announcements

<TimCole> See
https://lists.w3.org/Archives/Public/public-annotation/2016Sep/0058.html

Charter extended through February.
Issue Updates

<TimCole> None

*crickets*
Testing Updates
Testing

There is an issue where the link into w3c-test.org with a deep path
doesn't seem to work right.

TimCole: there might be an issue in the future when there are other test
collections

ShaneM: it works for me as well

dwhly: I will try to sort it out

TimCole: Janina mentioned that refreshing the window index.html sometimes
didn't work.
... Nick reported that, because of the timing of the text box, pasting
before the page populates will cause the information in the window to get
cleared out.

ShaneM: That doesn't surprise me

TimCole: We did consolidate the tests. My impression from everyone is that
it is better.

ShaneM: Don't change the names of any tests in the future.

TimCole: I think it is order sensitive too. I will not change names nor
order.

<tbdinesh> update: test links seem to work ok now. both. maybe it's to do
with server load?

ivan: At the end when we are finished we will need to reconcile things. We
have different columns for the same implementation now at different times.

tbdinesh: oh, that's possible.

ShaneM: The various columns represent tests run with different input.

TimCole: If you have an implementation that generates different kinds of
annotations... for example Janina's can link images, annotate, or
transcribe... you end up with different annotations that use different
features of the model.
... so in order to see the complete set of features, you need to test
multiple annotations and then collapse the columns together to show the
various features supported.
... I still think that we should collapse the columns together.

ivan: I didn't know that. I am not really sure how we do that at the end.

TimCole: I think that means if there are any from the same implementation,
you OR them together.
... you may not want to do that with the mandatory tests.
... you might want to eliminate columns with failed mandatory tests.

ivan: from the CR point of view, it is the spec that is being tested. If an
implementation doesn't pass a required feature, that is a failure; I would
expect essentially the three columns to be identical for a correct
implementation.

TimCole: It might be that an implementation hasn't implemented a new kind
of selector. We have an assertion that checks that if you have a
SpecificResource in your annotation and it has a selector, the selector is
one of the six or seven types defined in the model.
... if I define a new selector type, one of my annotations may not pass
that test.
... have I failed from a requirements standpoint? That's something we would
need to interpret. If we leave it as a MUST test we need to decide if we
can ignore that kind of failure or not.

ivan: if it is an extension then it cannot be a must

TimCole: We have a section that defines how you extend the model.

ivan: if it is not a normative section then it is not a must
... so in this case the extension is not something we want to test. It
feels esoteric.
... for MUST tests all the columns should show a pass or fail. So we can
throw out the aberrant results if some runs fail a mandatory test, as long
as some runs PASS the mandatory tests for the same implementation.
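
(For illustration: a minimal Python sketch of the reconciliation being
discussed here, OR-ing an implementation's result columns together and
discarding runs that fail a mandatory test when other runs for the same
implementation pass it. The names runs, mandatory, and the "PASS"/"FAIL"
strings are assumptions for the sketch, not anything from the actual
report generator.)

    def reconcile(runs, mandatory):
        # Each run maps an assertion id to "PASS", "FAIL", or None (no data).
        # Discard runs that fail a mandatory assertion, as long as at least
        # one run for this implementation passes all the MUSTs.
        clean = [r for r in runs
                 if all(r.get(a) != "FAIL" for a in mandatory)]
        if not clean:
            clean = runs  # nothing passed the MUSTs: keep the failures visible
        merged = {}
        for run in clean:
            for assertion, result in run.items():
                if result == "PASS":
                    merged[assertion] = "PASS"  # any PASS wins (logical OR)
                else:
                    merged.setdefault(assertion, result)
        return merged

    # e.g. reconcile([{"must-1": "PASS", "should-2": "FAIL"},
    #                 {"must-1": "PASS", "should-2": "PASS"}], ["must-1"])
    # -> {"must-1": "PASS", "should-2": "PASS"}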

TimCole: We should talk to Rob about this. If we try to make the sections
about extensions normative then we need to update the SHOULDs to MUSTs.

(some discussion about extending the vocabulary...)

TimCole: let's not finish this here...

ivan: no. it is important. We need to be sure that we have not made
editorial errors.
... for example I see that the JSON-LD Frame is normative and it should not
be.
... the boilerplate for the spec says appendices are normative unless they
explicitly say they are informative

TimCole: Discussion of optional "fails". Should we report them at all?

ivan: I am a little bit mixed up... what happens right now?

TimCole: for example we have 36 checks related to agents.

Agents are optional. You SHOULD have an annotationCreator but it is not
required.

scribe: if you don't have one, then there are a bunch of related tests that
will fail
... if you don't report those you will get 8 yellow boxes. If you report
the SHOULDs you will get some red and some yellow.

ivan: what are our options?

ShaneM: We don't really have any control. There is pass, or fail, or no
data (yellow)

ivan: Since it is not a MUST it is not really a fail.

TimCole: So how are we looking at the MAY requirements? If no one
implements text direction as a result of testing, does that mean we should
mark it at risk?

ivan: that seems crazy to me.

TimCole: It is neither a SHOULD nor a MUST. It is in because people think
it might be useful.
... there might not be anyone who uses it.

ShaneM: can we actually mark things at risk now?

ivan: No, we cannot.
... If a SHOULD is not implemented, then it can be yellow... but I feel a
little uneasy about it.
... it is not a FAIL in the sense of failing to meet the specification.

ShaneM: The features need to be called out.

TimCole: The SHOULDs seem to map to features.
... MAYs are not really features.

<Zakim> ShaneM, you wanted to ask about what we do to recreate the results?

ivan: the problem is that the reporting mechanism is too strict.

ShaneM: we can change it... but how?

ivan: On the report... if there is a SHOULD or MAY and if an implementation
doesn't do it then I would like to see that as a different entry that means
not implemented. Something that indicates it is not an error.
... FAILs should only appear for the MUST features.
... for optional features there should be a way to indicate that an
implementation supports or does not support an optional feature.

ShaneM: What happens if an optional feature is supported but does not pass
the tests?

ivan: then that is an error.

TimCole: We have that in the mandatory tests now.
... reviews what the "features" are in the data model
... no options are listed as features.

ivan: then they should never be listed as a fail.

TimCole: The only fuzzy thing is the one agent class related to an
annotation, with the predicates creator and generator.
... they are currently listed as SHOULDs

ivan: I believe the agentClass is an option and has requirements

TimCole: In actuality the agents do not have this requirement

ivan: then those two entries do not mean anything
... hard to discuss without Rob, but there doesn't seem to be anything
relevant here to the exit criteria

TimCole: so in terms of the tests, we have a couple options.
... we could delete the tests. We could NOT report them failing. But we
could still gather the data (in case there is any debate later). Or we can
leave them as they are and they show red.

ivan: we sort of left it open. Is there a logic we can follow?

<Zakim> ShaneM, you wanted to make a proposal

<TimCole> Proposal: Shane has a version of the test library that will not
report when an optional assertion fails

<TimCole> ShaneM: If an assertion is a should or may and it fails, the test
does not report a result

<TimCole> ... the effect is a yellow box in the final report

<TimCole> ... suppose there is an assertion that the sky is pink. No
implementation passes, so no row will appear for this assertion

<TimCole> ... it's yellow for all the others if at least one implementation
uses the feature and the others don't
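
(For illustration: a minimal Python sketch of the rule Shane proposes; the
function and parameter names are made up for the sketch, not taken from
the actual test library.)

    def report_result(level, passed):
        # "level" is the RFC 2119 strength of the assertion.
        if passed:
            return "PASS"
        if level == "MUST":
            return "FAIL"  # real failures only for mandatory assertions
        return None  # a SHOULD/MAY failure reports nothing -> yellow box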

TimCole: I have a reference implementation that supports every optional
feature.

ivan: I don't mind that being in the report. It is a real implementation.
... I don't like it to look like a FAIL on a SHOULD is the same as a FAIL
on a MUST

ShaneM: Isn't it okay to have that indicated in the name?

ivan: No - it isn't. People don't look closely.

TimCole: If we have the implementation that ignores SHOULDs and MAYs,
let's try it.
... For example in one implementation there are three real fails. I hope
they choose to fix those before the end of CR. If fixed they would have
green for everything mandatory and some optionals
... they would have yellow for everything else.

<Zakim> ivan, you wanted to comment on something else

ivan: Ralph and I were looking at the reports. A minor comment. We use in
the text "Reference Implementation". We have not defined it. We should use
a different term.

TimCole: I didn't realize that

ShaneM: I did that - I called it the RI.

TimCole: No meeting next week because of TPAC
... meeting the week after that. Need to prioritize how to reach out to OA
implementors to illustrate changes to the current model.
... changes of key names and a few other things. We have not collated those
all in one place.

ivan: wait... I thought that
... in the model document there is appendix F.3. It details the changes.

TimCole: You're right. Good!

ivan: There might not be anything in the vocab, but I think this is done.
We may need more explanation.

TimCole: I will talk to Rob about it for next week.

Next meeting in two weeks
Summary of Action Items
Summary of Resolutions

   1. Minutes of the previous call are approved:
   https://www.w3.org/2016/09/09-annotation-minutes.html
   <https://www.w3.org/2016/09/16-annotation-minutes.html#resolution01>

[End of minutes]
------------------------------
Minutes formatted by David Booth's scribe.perl
<http://dev.w3.org/cvsweb/~checkout~/2002/scribe/scribedoc.htm> version
1.144 (CVS log <http://dev.w3.org/cvsweb/2002/scribe/>)
$Date: 2016/09/16 16:01:17 $
------------------------------

-- 
Shane McCarron
Projects Manager, Spec-Ops

Received on Friday, 16 September 2016 16:05:31 UTC