Re: Measurability in Silver

Hi everyone,

Gosh, I go away for a weekend and the discussion gets really interesting!

I’d like to make three relatively brief points:


  1.  To echo Mark, it would be really useful to have some examples where a team pulled a good feature due to WCAG conformance. Even a generalised, non-company-specific example would help, just what the principle was, so we know what to look out for.
I personally haven’t come across that, except perhaps in cases where a feature was good for a specific audience but made things worse for another audience.

  2.  From talking to Jeanne and Shawn: Where they have been discussing “usability testing”, that has NOT always meant bums-on-seats testing with participants. This confused me, as user/usability testing to me has always meant testing with people.
Apparently things like Cognitive Walkthrough are also on the table as potential methods.
http://www.usabilityfirst.com/usability-methods/cognitive-walkthroughs/


Can we call these “user-centred design methods” if that is what is intended?

  3.  Charles’ question: Can a qualitative result be accepted as a measurable and non-binary “pass”?

Qualitative, maybe; we’d need to define the method. For usability testing, I’m very sceptical that it could be used directly, for the reasons John & David mentioned.

However, I do see a place for it as a process point. I think Stein Erik and Leonie made similar points, where it could be a case of: for this Gold criterion you need to do usability testing / IA testing on your navigation. A ‘pass’ is evidence that you used the method and acted on the results.

That’s the top-line, a few details for David, Jeanne & Charles below.

-Alastair


David wrote:
> If we are going to make a major change to the way we create a standard,

Not the question! The creation process is not different (i.e. it is still the W3C process); the requirements for the standard are different.

> the reasons we made decisions and problems we had with specific directions … or at least know which paths don't work, with view to find paths that do work.

The thing is that if you change the ground rules (e.g. measurable rather than true/false, task-based rather than page based, guidelines focused on user-need rather than content) then things which were dead-ends before may not be dead ends now.

> I think the AG team should have access to the research details and have the ability to dig as deep as necessary to find out what they real issue was …

There is no "AG team"; we have an AG working group, and the Silver TF is a part of that group.
Broadly, the AGWG should and does have access to the research:
https://www.w3.org/WAI/GL/task-forces/silver/wiki/Main_Page#Silver_Research


Obviously there are very specific examples which may not be reported, because you can't simply report everything.
Where there are specific and useful things, let's ask the questions, but be prepared for a generalised answer.

Jeanne wrote:
> The question we are discussing is: when an automated or manual test from an auditor says that something fails, and testing with people with disabilities say that it is accessible, would the result from testing with people with disabilities be sufficient to say that it passes?

Allowing for that approach would indicate that the criterion was not right in the first place; we should avoid that! Also, where guidelines are most helpful (compared to general UX) is in getting past optimizing for average users.

Usability testing is great for optimizing for a task, and including people with disabilities is really helpful for finding more ways of looking at the same problem. However, it is an awful way of working out that something is barrier-free unless you have a medical-trial-scale participant group.

I could see an argument for that for a large organisation. For example, a Government that has its own design standards could test something with thousands of users over time, and say “We’ll have an exception for this”. However, it feels like the kind of thing that should then be baked into the standard as part of the guidelines rather than allowed for as an exception.

Received on Tuesday, 13 November 2018 12:52:25 UTC