Auto-WCAG - Expert system approach?

Hi Wilco, All,

In the July 2015 Auto-WCAG blog post - https://www.w3.org/community/auto-wcag/2015/07/24/introducing-the-auto-wcag-user-input-template/ - under “Next steps”, I read that:

“Some participants of the auto-wcag community group are currently implementing the prototype of a User Testing Tool based on the questions developed in the structured approach described in this post. The tool runs in the user’s web browser and connects to a database storing the user input.”

Out of interest, could I ask which participants are working on this “expert-system” tool, and whether the work is still under way?
I too developed an interview-based expert system ages ago – for testing the accessibility of a web page (thankfully pages were more static back then).

With all such systems you call your tests “rules”, and you follow a grammar very similar to the one proposed in Auto-WCAG (see the sketch after the example below).  I used Jess formatting initially (http://herzberg.ca.sandia.gov/), then developed my own system…
I finalised my expert system some years ago – it looked at WCAG 1.0 AA.  I demoed it to several organisations, and got some good reviews!
The issue was that, although it was an interesting way to proceed, only when you actually used it for commercial audits did you realise how slow such a process is.  The same questions have to be asked of the user again and again – for example, once for each img node – which is overkill if you are only looking to find enough faults to show something is an issue.

For example, http://wilcofiers.github.io/auto-wcag/rules/SC1-1-1-text-alternative.html contains questions you need to ask the user about each image – “Is this element solely for decorative purposes?”
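
To make the cost concrete, here is a minimal Python sketch of the pattern (the rule structure, prompts and names are my own illustration, not the actual User Testing Tool): one rule, expressed as a selector plus a question, still means one prompt per matched node on every page assessed.

    from html.parser import HTMLParser

    # A hypothetical rule in the interview style: a selector plus a
    # question, loosely mirroring the Auto-WCAG user-input template.
    RULE = {
        "selector": "img",
        "question": "Is this element solely for decorative purposes? (y/n) ",
    }

    class NodeCollector(HTMLParser):
        """Collects every element matched by the rule's selector."""
        def __init__(self, tag):
            super().__init__()
            self.tag = tag
            self.nodes = []

        def handle_starttag(self, tag, attrs):
            if tag == self.tag:
                self.nodes.append(dict(attrs))

    def interview(html, rule):
        collector = NodeCollector(rule["selector"])
        collector.feed(html)
        answers = {}
        # The bottleneck: the same question is put to the tester once
        # per matched node, on every single page assessed.
        for node in collector.nodes:
            src = node.get("src", "(no src)")
            answers[src] = input("%s - %s" % (src, rule["question"]))
        return answers

    # Two images on one page already means two prompts for this one rule.
    print(interview('<img src="logo.png"><img src="deco.png">', RULE))

A page with fifty images means fifty prompts for this one rule alone; multiply that by the number of manual rules and the number of pages, and the audit time balloons.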

With actual implementation knowledge, it is certainly not an approach I would suggest for large-scale monitoring purposes: it simply takes too long to assess each page, and it requires human judgement, which can vary wildly between testers.  Auto-WCAG tests, being formatted in a very specific way, also will not slip easily into other testing platforms.
My understanding was that we were concentrating on developing fully automatic tests that could be plugged into any testing platform, and whose output could easily be compared.

With manual steps in a number of the current tests, which also include design constraints such as “Presented item - Web page (with title either highlighted or in a seperate textbox)”, I think we are making it hard for ourselves to achieve the comparability goal, or even to create tests that achieve Auto-WCAG’s desired aims.

It would only take a short amount of time to re-assemble the current “rules” into sets of atomic, fully automated tests by leaving the manual testing steps aside, and I wonder if this isn’t the direction we should be moving in instead – it may prove significantly quicker.  I should also mention that this seems to have been the approach of the EIII project, from which Auto-WCAG was initially born (http://checkers.eiii.eu/en/tests/).
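
For contrast, here is a minimal sketch of what I mean by an atomic, fully automated test (the rule id and the result shape below are my own assumptions for illustration, not an agreed format): it needs no human judgement, and it emits output that any platform could consume and compare.

    from html.parser import HTMLParser

    class MissingAltCheck(HTMLParser):
        """Atomic test: every img element must carry an alt attribute.
        No user interaction - the verdict is decided from markup alone."""
        def __init__(self):
            super().__init__()
            self.failures = []

        def handle_starttag(self, tag, attrs):
            if tag == "img" and "alt" not in dict(attrs):
                self.failures.append(dict(attrs).get("src", "(no src)"))

    def run_test(html):
        check = MissingAltCheck()
        check.feed(html)
        # A platform-neutral result: rule id, verdict, offending nodes -
        # easy to plug in anywhere and easy to compare across tools.
        return {"rule": "SC1-1-1-img-has-alt",  # hypothetical rule id
                "result": "fail" if check.failures else "pass",
                "nodes": check.failures}

    # The first img fails; the second passes (empty alt marks decoration).
    print(run_test('<img src="logo.png"><img src="deco.png" alt="">'))
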

My question to the group is: are we developing Auto-WCAG rules for an expert-system tool?  And, if yes – why exactly?

I’d be very interested to discuss the above, and hear comments from the whole group.
All the best
Alistair
---
Alistair Garrison
Senior Accessibility Engineer
SSB Bart Group
