- From: Michael S Elledge <elledge@msu.edu>
- Date: Thu, 08 Sep 2011 14:54:56 -0400
- To: public-wai-evaltf@w3.org
Hi Everyone--

I'm new to the group and haven't gone through all the posts yet, so I hope I'm not being redundant.

I think Denis' comments are well-taken. We find that doing a combination of tests helps us identify accessibility issues. We start by going through a page using the keyboard only, then test its functionality using JAWS. One of my colleagues in the Big Ten uses NVDA instead, suggesting that JAWS' greater sophistication enables it to figure out how a site should perform, whereas NVDA is a better litmus test for how it will perform in most circumstances. I haven't tested this, but it's worth considering. Our protocol for keyboard and JAWS testing is straightforward and shouldn't require much training.

Once we perform the keyboard and JAWS reviews, we go through a protocol using the MSIE Web Accessibility Toolbar (WAT) and/or Firefox with the Developer and Accessibility extensions. This was pretty cut-and-dried under WCAG 1.0; WCAG 2.0, unfortunately, requires a greater level of manual testing and judgment, so it will be more difficult to develop a protocol for lay persons.

The final step is to run the page through an automated checker (generally WAVE) to see if anything comes up that we haven't previously identified. (Sketches of automating the keyboard-only pass and one typical automated check appear after the quoted discussion below.)

Our inclusion of persons with disabilities in formal usability testing is contingent on our clients. It is significantly more expensive, so it is rarely requested.

I hope this provides some additional context.

Best regards,

Mike Elledge
Usability/Accessibility Research and Consulting
Michigan State University

On 9/8/2011 12:51 PM, Denis Boudreau wrote:
> Hi Detlev, all,
>
> On 2011-09-08, at 11:24 AM, Detlev Fischer wrote:
>
>> <snip />
>> The one thing where we have taken a different approach is screen reader / AT tests, both for conceptual and practical reasons.
>>
>> Conceptual: AT output and accessibility support varies a lot across AT, which limits the relevance of any specific test. Of course one could define a reference installation of UA/AT, but results will often not carry over to the majority of the installed base, especially regarding all the dynamic stuff and the less well supported WAI-ARIA properties.
>
> This is true. The best case scenario would be to include user testing with various people who have various disabilities and use different tools with different levels of expertise, but we all know this is unrealistic in most projects. This is why we only "bother" doing testing with screen readers (mostly JAWS, NVDA and VoiceOver, but also sometimes Orca), and sometimes ZoomText as well.
>
> When doing user testing or functional evaluation using assistive technologies (in general), what we really want to know boils down to two things:
>
> * is the interface keyboard accessible, and
> * are the content/functionalities picked up by the ATs and usable as they would be for other, non-disabled users.
>
> I believe we can get a reasonably good idea on those using only a variety of screen readers, which is why we don't go any further (of course, not knowing braille and not having access to the other technologies motor-impaired users rely on also plays a critical role in this decision).
>
> So I tend to believe that while this is far from perfect, it is still acceptable. In no way do I mean to minimize the relevance of testing for other disabilities or to imply that web accessibility is all about blind people, because of course it's not. Coverage really should be broader, but given the context and the limited resources, I believe it's sufficient.
>
>> Practical: Our test just requires accessibility knowledge and HTML/CSS skills, and uses free tools. No JAWS licence or JAWS skills are needed. Extending the test to require a working knowledge of AT raises the bar considerably and/or limits the pool of testers qualified enough to do both the expert test and the screen reader test. Or you have two separate tests, one carried out by the expert and one by a screen reader user, and a somewhat tricky mapping is needed to reconcile and combine the results. Doable, but quite complex & expensive.
>
> We came up with a "formula" that allows us to have non-WCAG experts do evaluation work too, but it's never the same as when real experts do it.
>
> Running a test is simple, and anybody using the Accessibility Toolbar in MSIE can go a long way, but understanding how to run it properly and insightfully is another matter entirely.
>
> In our case, when the person doing the evaluation is less proficient in accessibility, it means more work for a second expert, who has to validate the findings and finalize the reports.
>
> And that usually means at least as much work, if not more...
>
>> And cost is an important aspect: just looking at our own test, a full tandem test costs 1200 Euro or more (and that barely covers the actual effort at acceptable rates). Many organisations interested in testing do not have big budgets, so making the test more complex by mandating AT tests has the drawback of increasing cost. The effect would be that even more organisations that would like a test don't do it simply because they cannot afford it. This already happens at the current cost level.
>
> Right. But then again, testing takes time and very rare expertise on the market, so it's definitely worth something. I try not to undercut myself, and we usually run at about $300 CDN per page, because otherwise it sends the wrong message again.
>
> Maybe we can come up with something in this TF that will allow us to be much more productive and, therefore, make this expertise more cost-effective and affordable.
>
> /Denis
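As an illustration of the keyboard-only first pass described above, here is a minimal sketch, assuming Python with Selenium and a local Firefox; the URL and the cap of 100 tab stops are placeholders, not part of any protocol named in this thread. It walks the page's tab order and records each focus stop, which is the raw material a reviewer inspects for skipped or trapped controls.

```python
# Sketch: record a page's keyboard tab order with Selenium.
# Assumptions: Python 3, selenium installed, Firefox + geckodriver available.
from selenium import webdriver
from selenium.webdriver.common.keys import Keys

driver = webdriver.Firefox()
driver.get("https://example.org/")   # hypothetical page under test

stops = []
for _ in range(100):                 # cap the walk so it always terminates
    # Send TAB to whatever currently has focus, then read the new focus.
    driver.switch_to.active_element.send_keys(Keys.TAB)
    el = driver.switch_to.active_element
    stop = (el.tag_name, el.get_attribute("id") or "", (el.text or "")[:40])
    if stop in stops:                # focus cycled back to an earlier stop
        break
    stops.append(stop)

for tag, el_id, text in stops:       # the recorded tab order, for review
    print(f"{tag:10} {el_id:20} {text}")

driver.quit()
```

Stopping when a stop repeats is a crude cycle check (two distinct controls can yield identical tuples), but it keeps the sketch short; a manual pass with JAWS or NVDA is still needed to judge what is actually announced at each stop.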
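The final automated pass in the protocol uses WAVE; as a stand-in sketch of the kind of check such tools automate, and assuming Python with requests and BeautifulSoup, the following flags images whose alt attribute is missing entirely (an empty alt="" is left alone, since that is correct markup for decorative images):

```python
# Sketch: one automated WCAG check, images without an alt attribute.
# Assumptions: Python 3, requests and beautifulsoup4 installed.
import requests
from bs4 import BeautifulSoup

url = "https://example.org/"                  # hypothetical page under test
soup = BeautifulSoup(requests.get(url).text, "html.parser")

for img in soup.find_all("img"):
    if img.get("alt") is None:                # attribute absent, not just empty
        print("img missing alt:", img.get("src"))
```

Tools like WAVE run many such checks at once; as the protocol above notes, the point of this last step is only to catch anything the keyboard and screen reader passes missed.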
Received on Thursday, 8 September 2011 18:55:36 UTC