
Re: "Ineffective", PhD thesis Claims

From: Alastair Campbell <alastc@gmail.com>
Date: Mon, 3 Jun 2013 09:55:41 +0100
Message-ID: <CAC5+KCGpsX8k6j4wTX_=3rinwgVcgtexOoVr-MY1=HYQxnKVWw@mail.gmail.com>
To: Steve Green <steve.green@testpartners.co.uk>
Cc: WAI Interest Group <w3c-wai-ig@w3.org>

Hi Steve,

I think in general we are in vociferous agreement! ;-)

Steve Green wrote:
> In general it is more costly and logistically more difficult to conduct user testing with PWD than it is for fully-able users, and also it is more difficult to interpret the results because there are more factors involved.

Generally true, but having done so much of it now, I find the increase
in cost is relatively small in the scheme of things. The extra costs
tend to be transport and perhaps a BSL interpreter, but it is not that
much more. We like to have regular user-research people facilitate the
testing and create the list of issues, but then work with an
accessibility expert (a developer) to come up with the
recommendations. That keeps the results honest and the recommendations
useful.

> Whether it is a new build or an audit of an existing website (as all those in the study were) it is most efficient to conduct user testing with fully able people, fix any issues arising from that and then conduct user testing with PWD.

Taking a step back, testing is not the only way. UCD projects should
include many aspects of user research before you ever get to testing:
surveys, card sorts, etc., depending on the needs of the project.

Also, we sometimes have clients who have already conducted lots of
usability testing and are looking for the next level of improvement,
and PWD often uncover issues that do affect most people to some
degree but are easy to miss in regular testing.

I'm sure you agree that it depends on the state of the site and the
needs of the project.

> It is worth bearing in mind that user testing only assesses the usability and accessibility of the selected scenarios and the paths the participants choose to take through the website. For this reason it is essential to conduct WCAG audits and expert reviews that methodically go through the whole website (or as much as is practical). This is why I believe the study's conclusion is incorrect.

Sure, but if those scenarios are key ones (e.g. "buy something" on an
ecommerce site) they provide critical data and help prioritise the
changes. As with an audit that takes a sample of pages, it is then up
to the researcher/developer to generalise the findings and make the
best use of resources to improve the site.


Received on Monday, 3 June 2013 08:56:12 UTC

This archive was generated by hypermail 2.4.0 : Friday, 17 January 2020 20:36:44 UTC