Re: Automated and manual testing process

Michael wrote:

> If we exclude new SCs related to how well people understand content, just because understandability is difficult to automatically test



That wasn’t the question (originally or in follow-ups); the question is whether ‘understandability’ can be tested reliably in any form, manual or otherwise.



Unless we can provide clear enough material within each guideline and understanding document that ‘regular’ designers, developers and testers can conduct the testing and get similar results, I don’t think it fits the WCAG 2.x framework.



Also, given that the guidelines are supposed to be applicable to any website, how do I assess that for our client that publishes journal articles about cutting-edge physics?



Context is a key factor in usability, so for sites that do not have a general public audience (or public service responsibilities), applying understandability guidelines aimed at the general public would make those sites demonstrably worse for their target users, without benefiting people with disabilities.





> then cognitive accessibility will continue to be poorly represented in WCAG.



That is to some degree inevitable in 2.1. In the current framework, each SC needs to be testable with some reliability: not 100%, but with a good expectation of getting similar results across testers. I think some improvements can be made, but bringing usability/understandability into SCs makes objectivity very difficult.



This came up in a previous discussion [1], where I said (slightly edited):

> The way to make something usable is to go through a certain process, it isn’t something you can write (effective) rules or guidelines for. The more you learn about it, the more ‘usability guidelines’ don’t fit many scenarios. It is like the physics ‘universal equation’ – so general it doesn’t help you build a bridge or a trebuchet.

> For me the question is whether WCAG can or should mandate a UCD process, or come up with a more usability-testing based approach.



That, I think, is an excellent question for Silver, but sliding usability testing/review into a few 2.1 SCs isn’t going to work.





> Whereas today we may have to rely on subjective testing to assess these softer concepts, with the advances in machine learning it is probable that more ways of automatically assessing these concepts will emerge. It would be good to avoid the situation where we have efficient ways of testing these concepts but have nothing in WCAG that relates to them.



The recent advances in AI are interesting, but from what I understand they are not trying to recreate human intelligence; they are trying to come up with new forms of intelligence that are better at solving particular problems.



In that case, you will still need humans to interpret how humans will interpret things. I’m sure they’ll be able to pick off some low-hanging fruit, but until we all ‘jack in’, I think humans will be needed to create humane interfaces.



Cheers,



-Alastair



[1] https://lists.w3.org/Archives/Public/w3c-wai-gl/2015JulSep/0037.html

Received on Monday, 30 January 2017 15:18:22 UTC