Re: [for review] ACT Review Process

Hey ACT'ers,
In advance of our meeting in a little while, here is some of the feedback I
got from Deque:

1) We should break each submission down to a single rule - mixing too many
rules into a single request makes the feedback and decision making
difficult, even when tools like GitHub are used.
2) What about best practice rules? E.g. you don't have to have an H1, but
you really should try.
3) X should be 2 under implementations. Also, there should be a condition
that two implementations are shown to work before a rule makes it into a CR.
4) What about false positives? How do we ensure that rules that generate
false positives do not make it into the fully automated portion of this
spec?
5) Contributions from Deque staff that could impact axe-core would require
an internal review before posting.

Wilco


On Thu, Jun 22, 2017 at 4:47 PM, Detlev Fischer <detlev.fischer@testkreis.de> wrote:

> > I think these test cases may become very useful when related test rules
> > are contributed. Do you plan to make this fully public at a later time?
>
> Yes. The COMPARE repository will be read-only for the general public and
> read/write for experts/testers (I repeat my invitation to get signed up
> right now!)
>
> --
> Detlev Fischer
> testkreis c/o feld.wald.wiese
> Thedestr. 2, 22767 Hamburg
>
> Mobil +49 (0)157 57 57 57 45
> Fax +49 (0)40 439 10 68-5
>
> http://www.testkreis.de
> Consulting, testing and training for accessible websites
>
> Shadi Abou-Zahra wrote on 22.06.2017 14:40:
>
> > Hi Detlev,
> >
> > On 20/06/2017 11:00, Detlev Fischer wrote:
> >> Hi Shadi, ACT TF
> >> I had a look at the Review Process document. The basic problem for me
> >> is to understand how the process (submission of a rule backed by
> >> supporting test cases) would work in practice, so I would think it is
> >> worthwhile taking one or a few non-trivial examples of real web content
> >> and looking at what the rule(s) might look like that would support a
> >> decision about conformance. This exercise would show the uninitiated
> >> how it's going to play out.
> >
> > Agree. I believe sample rules that comply with the current Rules Format
> > specification are being developed. We can also use them for trying out
> > and refining this review process.
> >
> >
> >> A complex and at the same time very frequent example might be something
> >> like drop-down navigation menus (take for example a recent discussion
> >> between Matt King and Mallory on the Webaim list - the tail end is here
> >> http://webaim.org/discussion/mail_message?id=34968 )
> >>
> >> What would rules look like that help me establish whether some menu
> >> conforms to 1.3.1, 2.1.1, 4.1.2, 2.4.3, etc.? How can the rule be
> >> isolated from content aspects that may co-determine whether we think of
> >> some solution as acceptable or not (take the length of the submenus in
> >> cases where they are opened automatically when focused)? When does the
> >> ARIA menu pattern apply, and what deviations from the pattern are OK
> >> (conform) in what contexts?
> >>
> >> We all know the difficulty of attributing an issue to the right SC -
> >> when an element does not get tab focus but you CAN activate it when
> >> arrowing there, does it violate 2.1.1? Or only 2.4.3? If a main menu
> >> item opens the submenu and a second activation does not close it but
> >> goes to a section page, is that a usability issue or necessarily a fail
> >> of some SC? Etc, etc...
> >>
> >> So I believe working through a few practical real-world implementations
> >> and showing how the ACT framework would support developers / testers in
> >> assessing real-world implementations would really help make the ACT
> >> activity a lot more tangible (it often feels quite abstract to me).
> >
> > Agree. Though this is slightly orthogonal to the review process itself.
> >
> >
> >> Finally, an invitation: ACT TF members wanting a test login to our
> >> COMPARE repository ( http://www.funka.com/en/projekt/compare/ ) are
> >> welcome - just give me a shout. The repository is in its early days, not
> >> yet in its final shape, and not yet public, but it already has a few
> >> real-world cases with accessibility ratings. Should you want to add your
> >> rating, the comment field would give scope to outline the rules
> >> according to which someone has arrived at a PASS or FAIL conclusion. As
> >> a contributor of ratings you will be picking the SC (or multiple SCs)
> >> that you think should fail (or pass with comment).
> >
> > I think these test cases may become very useful when related test rules
> > are contributed. Do you plan to make this fully public at a later time?
> >
> > Best,
> >   Shadi
> >
> >
> >> Best,
> >> Detlev
> >>
> >>
> >>
> >> --
> >> Detlev Fischer
> >> testkreis c/o feld.wald.wiese
> >> Thedestr. 2, 22767 Hamburg
> >>
> >> Mobil +49 (0)157 57 57 57 45
> >> Fax +49 (0)40 439 10 68-5
> >>
> >> http://www.testkreis.de
> >> Consulting, testing and training for accessible websites
> >>
> >> Shadi Abou-Zahra wrote on 19.06.2017 19:54:
> >>
> >>> Dear ACT TF,
> >>>
> >>> Ref:
> >>> https://www.w3.org/WAI/GL/task-forces/conformance-testing/wiki/ACT_Review_Process
> >>>
> >>> As discussed during the call today, please review the outline for the
> >>> proposed ACT Review Process. Feel free to add your feedback to the wiki
> >>> discussion tab or by email.
> >>>
> >>> Regards,
> >>>    Shadi
> >>>
> >>> --
> >>> Shadi Abou-Zahra - http://www.w3.org/People/shadi/
> >>> Accessibility Strategy and Technology Specialist
> >>> Web Accessibility Initiative (WAI)
> >>> World Wide Web Consortium (W3C)
> >>>
> >>
> >>
> >
> > --
> > Shadi Abou-Zahra - http://www.w3.org/People/shadi/
> > Accessibility Strategy and Technology Specialist
> > Web Accessibility Initiative (WAI)
> > World Wide Web Consortium (W3C)
> >
>
>
>


-- 
*Wilco Fiers*
Senior Accessibility Engineer - Co-facilitator WCAG-ACT - Chair Auto-WCAG

Received on Monday, 26 June 2017 13:40:49 UTC