
How many WCAG 2.1 SCs are testable with automated tests only?

From: Detlev Fischer <detlev.fischer@testkreis.de>
Date: Tue, 20 Aug 2019 15:45:41 +0200
To: WCAG group <w3c-wai-gl@w3.org>
Message-ID: <40c6e980-a99b-d707-744d-b1f59a6fd68f@testkreis.de>
Hi,
as an exercise, I am going through the WCAG 2.1 AA Success 
Criteria, trying to establish, by looking at the results of tools like 
axe and at the ACT rules, how many SCs can be tested with 
automatic checks alone (Group A) and how many can be partially 
automatically checked, requiring an additional human check (Group B).

This is my list so far for Group A (automated check only):

1.4.3 Contrast (Minimum) (with the exception of text in images and edge 
cases - absolutely positioned elements?)
3.1.1 Language of Page (provided that the main language of the page can 
be inferred)
4.1.1 Parsing (W3C validation check after applying the Validate 
bookmarklet)

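As a side note on why 1.4.3 lends itself to full automation: the contrast ratio is a pure function of the two colours, so once a tool has resolved foreground and background colours it needs no human judgement. A minimal sketch of the WCAG 2.x computation (my own illustration, not taken from axe or any ACT rule):

```python
# Sketch of the WCAG 2.x contrast-ratio computation behind SC 1.4.3.
# Colours are given as (R, G, B) tuples of 0-255 integers.

def relative_luminance(rgb):
    """Relative luminance per the WCAG 2.x definition (sRGB linearisation)."""
    def linearise(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearise(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """(L1 + 0.05) / (L2 + 0.05), with L1 the lighter of the two luminances."""
    l1, l2 = relative_luminance(fg), relative_luminance(bg)
    lighter, darker = max(l1, l2), min(l1, l2)
    return (lighter + 0.05) / (darker + 0.05)

# Black on white gives the maximum possible ratio of 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```

The hard part for tools is of course not this formula but determining the effective background colour in the first place (stacked semi-transparent layers, background images), which is where the "edge cases" above come in.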

This is my list so far for Group B (automated check followed by human check):

1.1.1 Non-text Content (needs a check whether the alternative text is 
meaningful)
1.2.2 Captions (Prerecorded) (needs a check that captions are indeed 
needed, and that they are not "craptions")
1.3.1 Info and Relationships (heading hierarchy, correct id references, 
etc.; other aspects not covered)
1.3.5 Identify Input Purpose (needs a human check that the input refers 
to the user)
1.4.2 Audio Control (not sure from looking at ACT rules if this can work 
fully automatically)
1.4.11 Non-text Contrast (only for elements with CSS-applied colors)
2.1.4 Character Key Shortcuts (currently via bookmarklet)
2.2.1 Timing Adjustable (covers meta refresh but not time-outs without 
warning)
2.4.2 Page Titled (needs a check whether the title is meaningful)
2.4.3 Focus Order (may discover focus stops in hidden content, but 
probably needs an additional check)
2.4.4 Link Purpose (In Context) (can detect duplicate link names; needs 
an additional check whether the link name is meaningful)
2.5.3 Label in Name (works only for labels that can be programmatically 
determined)
2.5.4 Motion Actuation (may detect motion actuation events, but would 
need verification whether alternatives exist)
3.1.2 Language of Parts (may detect words in other languages, probably 
not exhaustive)
3.3.2 Labels or Instructions (can detect inputs without linked labels, 
but not whether labels are meaningful)
4.1.2 Name, Role, Value (detects inconsistencies such as parent/child 
errors, but probably not cases where roles / attributes should be 
used but are missing?)
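To illustrate what the automated half of a Group B check can look like, here is a minimal sketch (my own illustration, not taken from axe or any ACT rule) that flags img elements lacking an alt attribute for SC 1.1.1. It finds the mechanical failures; whether an existing alt text is actually meaningful remains the human half of the check:

```python
# Sketch: flag <img> elements without an alt attribute (automated part
# of SC 1.1.1). Judging whether a present alt text is meaningful is the
# human part and cannot be automated.
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []  # src values of images lacking alt

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            if "alt" not in attr_map:  # alt="" is present, hence allowed
                self.missing.append(attr_map.get("src", "(no src)"))

checker = MissingAltChecker()
checker.feed('<img src="a.png"><img src="b.png" alt="Company logo">')
print(checker.missing)  # ['a.png']
```

Note that the check deliberately accepts alt="" - an empty alt marks an image as decorative, and only a human can decide whether that classification is correct.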

I am investigating this in the context of determining to what extent the 
"simplified monitoring" method of the EU Web Directive can rely on 
fully-automated tests for validly demonstrating non-conformance - see 
the corresponding article 
https://team-usability.de/en/teamu-blog-post/simplified-monitoring.html

Are there any fully-automated tests beyond 1.4.3, 3.1.1 and 4.1.1 that I 
have missed?

Best,
Detlev

-- 
Detlev Fischer
Testkreis
Werderstr. 34, 20144 Hamburg

Mobil +49 (0)157 57 57 57 45

http://www.testkreis.de
Beratung, Tests und Schulungen für barrierefreie Websites
Received on Tuesday, 20 August 2019 13:46:13 UTC
