RE: Heuristic testing

From: Alastair Campbell <acampbell@nomensa.com>
Date: Thu, 25 Apr 2019 09:30:57 +0000
To: "Hall, Charles (DET-MRM)" <Charles.Hall@mrm-mccann.com>
CC: Silver Task Force <public-silver@w3.org>
Message-ID: <DBBPR09MB3045BF8E34C1962DA5A97463B93D0@DBBPR09MB3045.eurprd09.prod.outlook.com>

Hi Charles,

Thank you for the extra context, and I hope it didn’t come across as negative. I do appreciate the thought process; I was just worried about how it might be taken.

Taking a little step back to consider the various UX/User Centered Design methods, I’ve long been of the opinion that:

  *   UCD is good for optimising for the majority of people within a particular context / domain.
  *   Accessibility guidelines (so far) have been good for ensuring that the interface works for as many people as possible.

In our work UX tends to lead accessibility, so you define a good solution for the task, then make sure it is as robust & accessible as possible. (They aren’t separate, but iterative. Oh, and obviously people with disabilities are part of the user-research, but we work out the task first, then the interface.)

Where the UCD methods shine is dealing with the context of the problem, and getting out of your own mindset & assumptions.

That means they are a method for getting to a more optimal solution, but not a way to compare solutions. That’s a really tough problem, as the context matters hugely, which is something that world-wide guidelines cannot take account of.

As a quick example of ‘context’ differences, the main UX problems you work on in e-commerce are based on Information Architecture, such as how to display 10,000 products in a way that people can navigate to what they want. Whereas something like web-based email is much more of an interface problem.

> The general idea would simply be to encourage practices that go beyond the minimum, but not require them.

In that context I can see at least one way forward: a set of guidelines oriented around usability/IA that are process-based.

For example, the guideline could be (quick hypothetical example):

  *   Users can understand and use navigation elements which have more than 10 options.

The method(s) would be process-based, like ISO 27001, where you essentially self-mark but have to show improvement each year.

For example:

  *   Conduct a card-sorting exercise to establish the best groupings and terms for the navigation.
  *   Conduct a menu test to optimise the terms used in the navigation.
  *   Conduct a heuristic evaluation of the navigation’s placement and design.

The ‘conformance’ for each of these is that you record that the method has been used, and perhaps what changes you made as a result (or simply that you made changes as a result).

Then Silver is not trying to define a ‘good’ or replicable result across the multitude of different websites, but to provide a way of scoring higher for organisations following best-practice UCD. In the context of ‘going above the baseline’, that makes sense to me.

I think it also helps to have these tasks as methods under particular guidelines, rather than as an overall methodology for testing all the guidelines. Then they could mix with some baseline methods from WCAG 2.x as well, with these methods there for higher scoring.



Received on Thursday, 25 April 2019 09:31:24 UTC

This archive was generated by hypermail 2.4.0 : Thursday, 24 March 2022 20:31:45 UTC