- From: Tim Cole <t-cole3@illinois.edu>
- Date: Wed, 14 Sep 2016 14:30:34 -0500
- To: "'Shane McCarron'" <shane@spec-ops.io>, "'Ivan Herman'" <ivan@w3.org>
- CC: "'W3C Public Annotation List'" <public-annotation@w3.org>
- Message-ID: <022e01d20ebe$7f65aa70$7e30ff50$@illinois.edu>
Shane-

Speaking for myself personally, I think this would be a very good idea for all MAY tests. But I would prefer to report all SHOULD fails, even though this means we will see as red a few SHOULD test fails that only fail because the parent feature is absent -- you can't have a creator agent type if you don't have a creator, theology aside. (Note: while we could make the JSON schemas pass creator agent type in the absence of a creator, this is not what we want, since it would read in the results as an annotation that failed to implement creator (a separate test) but did implement creator agent type. Not true.)

If you want to make this behavior configurable by adding a 'skipOptionals' flag, I would have no objection. It would provide some flexibility to change our minds later, or to report fails on selected MAY assertions that we decide should be reported in all cases. But I would propose as a starting position that, if such a flag becomes available, we set it on all MAY assertions and not set it on any SHOULD assertions.

Out of 179 total test assertions for sections 1 through 4 of the model:

* 54 are MUST,
* 33 are SHOULD,
* 92 are MAY.

How do others in the WG feel about reporting only success for MAY tests? Any objections? How about with regard to SHOULD tests?

Thanks,
Tim Cole

From: Shane McCarron [mailto:shane@spec-ops.io]
Sent: Wednesday, September 14, 2016 11:18 AM
To: Ivan Herman <ivan@w3.org>
Cc: W3C Public Annotation List <public-annotation@w3.org>
Subject: Re: Concept about non-mandatory tests

Hilarious... You can see an example at
http://pandora.aptest.com/specops/test-results/annotation-model/all.html

I ran tests in "SM01" against a patched version.

On Wed, Sep 14, 2016 at 10:54 AM, Ivan Herman <ivan@w3.org> wrote:

> On 14 Sep 2016, at 17:27, Shane McCarron <shane@spec-ops.io> wrote:
>
> I just had a thought. I know that we have the fail-and-skip option (and pass-and-skip), but it is a problem for nested tests. I might even know why it is a problem. But instead of adding weird flow control, what if we just don't "run" tests that are SHOULD or MAY tests and don't pass? Just don't run the assertion at all. The code actually does the "test" of the assertion before it calls the WPT "test" function. We can just not call that function at all if it is a SHOULD or MAY test and it doesn't match the success criteria. I think this would just result in "yellow" cells on the report for options that are not supported.

Visually, if this works, it looks perfect. Red is scary :-)

Ivan

> What do people think?
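(Purely as an illustration of the skip approach described above -- not the actual annotation-model harness code: a minimal JavaScript sketch against the WPT testharness.js API, where `assertion.level`, `assertion.title`, and the `checkAssertion()` helper are hypothetical names standing in for however the real harness evaluates its JSON schemas.)

    // Sketch only: evaluate the assertion first; for SHOULD/MAY assertions
    // that do not match their success criteria, never call the WPT test()
    // function, so the report cell shows "not run" (yellow) instead of
    // "fail" (red).
    function runAssertion(assertion, annotation) {
      var passed = checkAssertion(assertion, annotation);   // hypothetical helper
      var optional = assertion.level === "SHOULD" || assertion.level === "MAY";

      if (optional && !passed) {
        return;   // skip: the assertion is never registered with the harness
      }

      // MUST assertions, and optional assertions that pass, report normally.
      test(function () {
        assert_true(passed, assertion.title);
      }, assertion.title);
    }

A 'skipOptionals' flag of the kind Tim suggests could then gate the early return per assertion, rather than applying it uniformly to every SHOULD and MAY test.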
--
Shane McCarron
Projects Manager, Spec-Ops

----
Ivan Herman, W3C
Digital Publishing Lead
Home: http://www.w3.org/People/Ivan/
mobile: +31-641044153
ORCID ID: http://orcid.org/0000-0003-0782-2704

--
Shane McCarron
Projects Manager, Spec-Ops
Received on Wednesday, 14 September 2016 19:31:25 UTC