RE: International Web Access Guidelines “Ineffective”, PhD thesis Claims

Hi all,

I think we're all in agreement about the importance of user testing.  I also agree that user testing is by its very nature different from technical WCAG 2.0 compliance testing.  Both best practice and the research indicate that a combination of automated, manual expert (technical) and user testing gives the best picture of the accessibility and usability of a website.  That's how we do it, and we have found that the user testing complements the technical testing.
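To illustrate the automated end of that combination, here is a minimal sketch of one point-by-point check of the kind automated tools perform: flagging <img> elements that lack an alt attribute (WCAG 2.0 SC 1.1.1).  This is not any particular tool's implementation (real audits use dedicated checkers such as axe-core or pa11y); it only shows the sort of mechanical issue automation can catch, and why it cannot replace expert or user testing - it can tell you an alt attribute is missing, but not whether an alt text that is present is actually meaningful.

```python
# Sketch of one automated WCAG-style check: report <img> tags with no
# alt attribute at all.  Note that alt="" is deliberately allowed, since
# an empty alt is the correct markup for purely decorative images.
from html.parser import HTMLParser


class MissingAltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.violations = []  # (line, column) of each <img> lacking alt

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.violations.append(self.getpos())


def check_missing_alt(html: str):
    """Return the positions of <img> elements with no alt attribute."""
    checker = MissingAltChecker()
    checker.feed(html)
    return checker.violations


page = '<p><img src="logo.png" alt="Company logo"><img src="decor.png"></p>'
print(check_missing_alt(page))  # flags only the second <img>
```

A check like this runs in milliseconds across a whole site, which is exactly why the technical pass is the right place for it - and why judging the quality of the alt texts it leaves alone is left to human review.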

When we compare the coverage of the combined methods, we find that technical evaluation finds just what it should - the specific, point-by-point areas of a website or page that don't meet the guidelines.  The user testing shows the website owner just how those violations affect people with disabilities.  Our user testing partner (the Digital Accessibility Centre, http://www.digitalaccessibilitycentre.org/) provides this service for us and others in an excellent manner, covering a host of different disability needs.

If the whole website is not being tested page by page, a robust sample of pages is critical.  We select the page sample very carefully, and the user testing group concentrates on the most-used pages and critical paths.  Again, this implies that the testing team understands the purpose of the website and how people will use it.
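The sampling idea described above can be sketched as follows.  This is a hypothetical illustration, not Web Key IT's actual procedure: the function, its parameters, and the example data are all invented for the sketch.  It combines pages on critical user journeys (which go in first, regardless of traffic) with the most-visited pages from analytics, de-duplicated, up to a fixed sample size.

```python
# Hypothetical page-sampling sketch: critical-path pages first, then
# top pages by visit count, with duplicates skipped.
def select_sample(page_visits, critical_paths, sample_size=10):
    """page_visits: dict of url -> visit count; critical_paths: urls
    on essential user journeys (e.g. a checkout flow)."""
    by_traffic = sorted(page_visits, key=page_visits.get, reverse=True)
    sample = []
    for url in critical_paths + by_traffic:
        if url not in sample:
            sample.append(url)
        if len(sample) == sample_size:
            break
    return sample


visits = {"/": 1000, "/products": 800, "/about": 50, "/checkout": 300}
print(select_sample(visits, ["/checkout", "/signup"], sample_size=4))
# -> ['/checkout', '/signup', '/', '/products']
```

Putting critical paths ahead of raw traffic reflects the point in the paragraph above: a rarely visited checkout confirmation page can still be essential to test, because it sits on a path every purchasing user must complete.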

Regards

Vivienne L. Conway, B.IT(Hons), MACS CT, AALIA(CS)
PhD Candidate & Sessional Lecturer, Edith Cowan University, Perth, W.A.
Director, Web Key IT Pty Ltd.
v.conway@ecu.edu.au
v.conway@webkeyit.com
Mob: 0415 383 673

This email is confidential and intended only for the use of the individual or entity named above. If you are not the intended recipient, you are notified that any dissemination, distribution or copying of this email is strictly prohibited. If you have received this email in error, please notify me immediately by return email or telephone and destroy the original message.
________________________________________
From: Alastair Campbell [alastc@gmail.com]
Sent: Monday, 3 June 2013 4:55 PM
To: Steve Green
Cc: WAI Interest Group
Subject: Re: International Web Access Guidelines “Ineffective”, PhD thesis Claims

Hi Steve,

I think in general we are in vociferous agreement! ;-)

Steve Green wrote:
> In general it is more costly and logistically more difficult to conduct user testing with PWD than it is for fully-able users, and also it is more difficult to interpret the results because there are more factors involved.

Generally true, but having done so much of it now, we find the increase
in cost is relatively small in the scheme of things. The extra costs
tend to be transport and perhaps a BSL interpreter, but it is not that
much extra. We like to have our regular user-research people facilitate
the testing and create the list of issues, but then work with an
accessibility expert (developer) to come up with recommendations. That
keeps the results honest and the recommendations useful.


> Whether it is a new build or an audit of an existing website (as all those in the study were) it is most efficient to conduct user testing with fully able people, fix any issues arising from that and then conduct user testing with PWD.

Taking a step back, testing is not the only way. UCD projects should
include many aspects of user-research before you ever get to testing.
Surveys, card sorts, etc, depending on the needs of the project.

Also, we sometimes have clients who have already conducted lots of
usability testing and are looking for the next level of improvements.
PWD often uncover issues that do affect most people to some degree,
but are easy to miss in regular testing.

I'm sure you agree that it depends on the state of the site and the
needs of the project.


> It is worth bearing in mind that user testing only assesses the usability and accessibility of the selected scenarios and the paths the participants choose to take through the website. For this reason it is essential to conduct WCAG audits and expert reviews that methodically go through the whole website (or as much as is practical). This is why I believe the study's conclusion is incorrect.

Sure, but if those scenarios are key ones (e.g. "buy something" on an
ecommerce site) they provide critical data, and are helpful to
prioritise the changes. Like an audit which takes a sample of pages,
it is then up to the researcher / developer to generalise the findings
and make best use of resources to improve the site.

Cheers,

-Alastair

Received on Thursday, 13 June 2013 23:54:34 UTC