Re: How many WCAG 2.1 SCs are testable with automated tests only?

I was pondering something along the same lines not so long ago. I'd say 
that for Group B there are at least some cases where automated tools 
can (and currently do) check for common markup patterns that are almost 
always guaranteed to be failures. Depending on how thorough/complex the 
test is, you could for instance say that an <img> with no alt attribute 
at all (neither alt="" nor alt="..."), which is not hidden via 
display:none or aria-hidden="true" on itself or any of its ancestors, 
and which has neither an aria-label nor something like 
role="presentation", is most likely a failure of 1.1.1: either it's 
decorative but not suppressed, or it's contentful but lacks an 
alternative - and if the alternative is provided in some other form, 
like a visually-hidden span, then the <img> itself should be hidden, etc.
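
To make that concrete, here's a rough sketch of that kind of heuristic 
in TypeScript, run against the live DOM (e.g. from a bookmarklet or a 
test harness). The function names and the exact set of exclusions are 
my own illustration, not any particular tool's actual rule:

// Heuristic 1.1.1 check: flag <img> elements that have no text
// alternative and are not marked as decorative/hidden.
// Assumptions: runs in a browser; names and exclusions are illustrative.

function isHidden(el: Element): boolean {
  // Walk up the tree: display:none, visibility:hidden or
  // aria-hidden="true" on the element or an ancestor removes it
  // from the accessibility tree.
  for (let node: Element | null = el; node; node = node.parentElement) {
    if (node.getAttribute('aria-hidden') === 'true') return true;
    const style = window.getComputedStyle(node);
    if (style.display === 'none' || style.visibility === 'hidden') return true;
  }
  return false;
}

function findSuspectImages(doc: Document = document): HTMLImageElement[] {
  return Array.from(doc.querySelectorAll('img')).filter(img => {
    if (isHidden(img)) return false;                        // not exposed at all
    if (img.hasAttribute('alt')) return false;              // alt="" or alt="..." present
    if (img.getAttribute('aria-label')) return false;       // named via aria-label
    if (img.hasAttribute('aria-labelledby')) return false;  // named via reference
    const role = img.getAttribute('role');
    if (role === 'presentation' || role === 'none') return false; // explicitly decorative
    return true; // likely 1.1.1 problem: neither named nor suppressed
  });
}

// e.g. from the console:
console.log(findSuspectImages().map(img => img.src));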

But overall I agree that for a really solid pass/fail assessment, most 
of these definitely need a human to at least give things a once-over: 
to verify automatically-detected problems that "smell" like failures, 
and to look for things a tool wouldn't be able to check, such as very 
odd/obtuse markup/CSS/ARIA constructs.

P

> 1.1.1 Non-text Content (needs check if alternative text is meaningful)
> 1.2.2 Captions (needs check that captions are indeed needed, and that 
> they are not "craptions")
> 1.3.1 Info and Relationships (headings hierarchy, correct id references 
> etc - other aspects not covered)
> 1.3.5 Identify Input Purpose (needs human check that input is about the 
> user)
> 1.4.2 Audio Control (not sure from looking at ACT rules if this can work 
> fully automatically)
> 1.4.11 Non-Text Contrast (only for elements with CSS-applied colors)
> 2.1.4 Character Key Shortcuts (currently via bookmarklet)
> 2.2.1 Timing adjustable (covers meta refresh but not time-outs without 
> warning)
> 2.4.2 Page Titled (needs check if title is meaningful)
> 2.4.3 Focus order (may discover focus stops in hidden content? but 
> probably needs add. check)
> 2.4.4 Link purpose (can detect duplicate link names, needs add. check if 
> link name meaningful)
> 3.1.2 Language of parts (may detect words in other languages, probably 
> not exhaustive)
> 2.5.3 Label in name (works only for labels that can be programmatically 
> determined)
> 2.5.4 Motion Actuation (may detect motion actuation events but would 
> need verification if alternatives exist)
> 3.3.2 Labels or Instructions (can detect inputs without linked labels 
> but not if labels are meaningful)
> 4.1.2 Name, Role, Value (detects inconsistencies such as parent/child 
> errors, but probably not cases where roles / attributes should be 
> used but are missing?)
> 
> I am investigating this in the context of determining to what extent the 
> "simplified monitoring" method of the EU Web Directive can rely on 
> fully-automated tests for validly demonstrating non-conformance - see 
> the corresponding article 
> https://team-usability.de/en/teamu-blog-post/simplified-monitoring.html
> 
> Are there any fully-automated tests beyond 1.4.3, 3.1.1 and 4.1.1 that I 
> have missed?
> 
> Best,
> Detlev
> 


-- 
Patrick H. Lauke

www.splintered.co.uk | https://github.com/patrickhlauke
http://flickr.com/photos/redux/ | http://redux.deviantart.com
twitter: @patrick_h_lauke | skype: patrick_h_lauke

Received on Tuesday, 20 August 2019 13:55:32 UTC