Re: Heuristic testing

All great input.

Alastair,

If the UCD process were a method, do you see any way in which that could include all the functional needs?

Detlev / Alastair,

I did not mean to conflate these evaluation methods in a way that implies the same method would be used for both design and conformance. My general idea is that minimum conformance can be achieved without requiring any specific human evaluation method that includes people with specific functional needs. But any higher level of conformance would require some method – probably task based – that could identify gaps that the guideline > method(s) > test(s) missed for a given functional need. Encouraging practice improvements at the design stage is also important, but may not involve the same evaluation / UCD methods.

I think it is always easier to start with an example problem.

Let’s say the author has created a tab bar navigation. It passes all tests in the guideline. It has clear, simple-language labels with icons and accessible names. It is semantically a list of links inside a nav. It is navigable by keyboard and has a clear focus order and clear focus states. Each link has sufficient contrast and a sufficient touch target. Everything the guidelines said. But then it gets tested by and/or for the functional needs of fine motor control, and half of the tests fail: even though the target size was sufficient, there were too many accidental clicks due to the proximity of the targets and the insufficient space between them. So the author modifies the already accessible pattern to close that extra gap. This is the goal of recommending such evaluation methods. Better human consideration should receive a higher score than just ‘passes tests’.
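
To make the fine-motor gap concrete, here is a minimal sketch in TypeScript (browser context) of the kind of check a task-based evaluation might motivate: every link can pass a target-size test individually while adjacent targets still sit too close together. The selector, the 8px threshold, and the horizontal-layout assumption are all illustrative, not drawn from any guideline.

  // Minimal sketch: flag adjacent tab bar links whose edge-to-edge gap is
  // smaller than a chosen threshold, even when each target on its own is
  // large enough. MIN_GAP_PX is an assumed value, not a WCAG number.
  const MIN_GAP_PX = 8;

  function findCrowdedTargets(nav: Element): Array<[Element, Element]> {
    const links = Array.from(nav.querySelectorAll("a"));
    const crowded: Array<[Element, Element]> = [];
    for (let i = 0; i < links.length - 1; i++) {
      const a = links[i].getBoundingClientRect();
      const b = links[i + 1].getBoundingClientRect();
      const gap = b.left - a.right; // assumes a horizontal tab bar
      if (gap < MIN_GAP_PX) {
        crowded.push([links[i], links[i + 1]]);
      }
    }
    return crowded;
  }

  // Hypothetical usage: findCrowdedTargets(document.querySelector("nav")!);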

I anticipate that everyone is going to comment that “tested by and/or for the functional needs of fine motor control” represents a significant challenge and cost for most organizations. Given that this is a plausible constraint, let’s solve for that as well – but secondary to solving for humans.


Charles Hall // Senior UX Architect

charles.hall@mrm-mccann.com
w 248.203.8723
m 248.225.8179
360 W Maple Ave, Birmingham MI 48009
mrm-mccann.com

MRM//McCann
Relationship Is Our Middle Name

Ad Age Agency A-List 2016, 2017, 2019
Ad Age Creativity Innovators 2016, 2017
Ad Age B-to-B Agency of the Year 2018
North American Agency of the Year, Cannes 2016
Leader in Gartner Magic Quadrant 2017, 2018, 2019
Most Creatively Effective Agency Network in the World, Effie 2018, 2019



From: Detlev Fischer <detlev.fischer@testkreis.de>
Date: Thursday, April 25, 2019 at 6:09 AM
To: Silver Task Force <public-silver@w3.org>
Subject: [EXTERNAL] Re: Heuristic testing
Resent-From: Silver Task Force <public-silver@w3.org>
Resent-Date: Thursday, April 25, 2019 at 6:08 AM

Hi Alastair, Charles, list,

I still feel uneasy about including methods used by an organisation at the design (or re-design) stage in a conformance evaluation. There are several problems with that:

  1.  Evaluators will often not have the domain knowledge to assess, say, whether a grouping of navigation items (e.g. products) works well for the target audience (think a chemicals supplier)
  2.  An expert might arrive at as good a navigation structure as a group that went through a card-sorting exercise (if one were to carry out user testing to assess the quality of the result) - why should the fact that the structure was arrived at through card sorting lead to a higher score, if what counts is the accessibility/usability of the site for the end user?
  3.  The fact that changes were made as a result of testing is the back story of the site being conformance-evaluated - for the user it has no impact on the actual experience of the content. So why should it appear in a conformance result? (I do not mind - actually, I welcome it - if those measures appear in another kind of rubric that might be labeled "accessible organisational processes", "proactive organisation", "company digs accessibility" or whatever.)

I think there must be a clear separation between a conformance score that can ideally be arrived at by any external evaluator, based on published techniques and common tools, without the need for domain knowledge and without access to company internals, and something that, for want of a better word, I will call a 'proactivity score', which is derived from insight into the organisation's internal processes. These may be shown as "stacking up" - conformance leading to "bronze" and the 'proactivity score' adding points for "silver" and finally "gold" - but I would personally prefer a side-by-side presentation to make it clear that this addresses different aspects: site properties on the one hand, organisational properties on the other.
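
As a rough sketch of that side-by-side idea (the names below are placeholders, not a proposal), the two scores would simply be reported as separate properties rather than folded into a single number:

  // Illustrative only - both field names are assumptions.
  interface EvaluationResult {
    conformanceScore: number;  // site properties: externally testable via published techniques
    proactivityScore: number;  // organisational properties: insight into internal processes
  }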

Just to be clear: I welcome extending the scope of conformance testing to things that are 'hard(er) to measure' but still do not rely on knowledge of company internals. These aspects might include things such as "proximity of related information" or "concise navigation structure" (e.g., not more than x elements per level, consistent display of hierarchy in nested structures and of process steps in processes, etc.) - things that may often enter assessments under criteria like 3.2.3 Consistent Navigation already, but may not be explicitly measured. For these, the challenge will be to find a way to integrate measurement scales with the current PASS/FAIL approach.
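
As a rough sketch of what such a measurement scale might look like (the thresholds are invented purely for illustration), a measurable property like "elements per navigation level" could map to a graded score instead of a binary PASS/FAIL:

  // Invented thresholds, for illustration only.
  type Grade = 0 | 1 | 2 | 3;

  function gradeNavigationBreadth(itemsPerLevel: number): Grade {
    if (itemsPerLevel <= 7) return 3;   // concise
    if (itemsPerLevel <= 10) return 2;
    if (itemsPerLevel <= 15) return 1;
    return 0;                           // would fail even a lenient scale
  }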

Detlev
On 25.04.2019 at 11:30, Alastair Campbell wrote:
Hi Charles,

Thank you for the extra context, and I hope it didn’t come across as negative. I do appreciate the thought-process, I was just worried about how it might be taken.

Taking a little step back to consider the various UX/User Centered Design methods, I’ve long been of the opinion that:

  *   UCD is good for optimising for the majority of people within a particular context / domain.
  *   Accessibility guidelines (so far) have been good for ensuring that an interface works for as many people as possible.

In our work UX tends to lead accessibility, so you define a good solution for the task, then make sure it is as robust & accessible as possible. (They aren’t separate, but iterative. Oh, and obviously people with disabilities are part of the user research, but we work out the task first, then the interface.)

Where the UCD methods shine is dealing with the context of the problem, and getting out of your own mindset & assumptions.

That means they are a method to get to a more optimal solution, but not a way to compare solutions. That’s a really tough problem, as the context matters hugely, which is something that world-wide guidelines cannot take account of.

As a quick example of ‘context’ differences, the main UX problems you work on in e-commerce are Information Architecture based, such as how to display 10,000 products in a way that people can navigate to what they want. Whereas something like web-based email is much more of an interface problem.


> The general idea would simply be to encourage practices that go beyond the minimum, but not require them.

In that context I can see at least one way forward: a set of guidelines oriented around usability/IA that are process-based.

For example, the guideline could be (quick hypothetical example):

  *   Users can understand and use navigation elements which have more than 10 options.

The method(s) would be process based, like ISO 27001 where you essentially self-mark but have to show improvement each year.

For example:

  *   Conduct a card-sorting exercise to establish the best groupings and terms for the navigation.
  *   Conduct a menu test to optimise the terms used in the navigation.
  *   Conduct a heuristic evaluation of the navigation’s placement and design.

The ‘conformance’ for each of these is that you record that this method has been used, and perhaps what changes you made as a result, or even that you made changes as a result.
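
As a very rough sketch of what such a record might contain (every field name here is an assumption, not a proposal):

  // Hypothetical shape for recording that a method was used and what changed.
  interface MethodRecord {
    guideline: string;       // e.g. the navigation guideline above
    method: "card sorting" | "menu test" | "heuristic evaluation";
    conducted: string;       // ISO 8601 date
    changesMade: string[];   // may be empty if the method confirmed the design
  }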

Then Silver is not trying to define a ‘good’ or replicable result across the multitude of different websites, but to provide a way for organisations following best-practice UCD to score higher. In the context of ‘going above the baseline’, that makes sense to me.

I think it also helps to have these tasks as methods under particular guidelines, rather than as an overall methodology for testing all the guidelines. Then they could mix with some baseline methods from WCAG 2.x as well, with these methods there for higher scoring.

Cheers,

Alastair




--

Detlev Fischer

Testkreis

Werderstr. 34, 20144 Hamburg

Mobile +49 (0)157 57 57 57 45

http://www.testkreis.de

Consulting, testing and training for accessible websites


Received on Thursday, 25 April 2019 12:07:56 UTC