Re: possible use of test assertions in defining/expressing requirements?

Hi Detlev, all,


On 2011-09-08, at 11:24 AM, Detlev Fischer wrote:

> <snip />
> The one thing where we have taken a different approach is screen reader / AT tests, both for conceptual and practical reasons.
> 
> Conceptual: AT output and accessibility support vary a lot across ATs, which limits the relevance of any specific test. Of course one could define a reference installation of UA/AT, but results will often not carry over to the majority of the installed base, especially regarding all the dynamic stuff / and less well supported WAI-ARIA properties.

This is true. The best-case scenario would be to include user testing with a range of people who have different disabilities and use different tools with different levels of expertise, but we all know this is unrealistic in most projects. This is why we only "bother" doing testing with screen readers (mostly JAWS, NVDA and VoiceOver, but sometimes Orca), and occasionally ZoomText as well.

When doing user testing or functional evaluation using assistive technologies (in general), what we really want to know boils down to two things: 

* is the interface keyboard accessible, and 
* is the content (and its functionality) picked up by the ATs and usable, as it would be for non-disabled users?
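Those two checks can be illustrated with a minimal markup sketch (hypothetical example, not from the original message; `save()` is an assumed handler). A custom control built from a div fails both tests unless focusability, a role, and keyboard handling are added back, whereas a native button gets all of that for free:

```html
<!-- Fails both checks: not reachable by keyboard, no role or name exposed to ATs -->
<div onclick="save()">Save</div>

<!-- Native element: keyboard-focusable and exposed to ATs by default -->
<button type="submit">Save</button>

<!-- If a custom control is unavoidable, everything must be added back by hand -->
<div role="button" tabindex="0" onclick="save()"
     onkeydown="if (event.key === 'Enter' || event.key === ' ') save()">Save</div>
```

A screen reader test quickly shows the difference: the first div is announced as plain text, if at all, while the other two are announced as a "Save" button and can be activated from the keyboard.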

I believe we can get a reasonably good idea of both using only a variety of screen readers, which is why we don't go any further (of course, not knowing braille and not having access to the other technologies that motor-impaired users rely on also play a critical role in this decision). 

So I tend to believe that while this is far from perfect, it is still acceptable. In no way do I mean to minimize the relevance of testing for other disabilities, or to imply that web accessibility is all about blind people, because of course it's not. Testing really should cover disabilities more broadly, but given the context and our limited resources, I believe this approach is sufficient.


> Practical: Our test just requires accessibility knowledge and HTML/CSS skills and uses free tools. No JAWS licence and skills needed. Extending a test to require a working knowledge of AT raises the bar considerably and/or limits the pool of testers that will be qualified enough to do both expert test and screen reader test. Or you have two separate tests, one carried out by the expert and one by a screen reader user, and a somewhat tricky mapping is needed to reconcile and combine the results. Doable, but quite complex & expensive.

We came up with a "formula" that allows us to have people who are not WCAG experts do evaluation work too, but it's never the same as when real experts do it. 

Running a test is simple and anybody using the Accessibility toolbar in MSIE can go a long way, but understanding how to run it properly and insightfully is another matter entirely. 

In our case, when the person doing the evaluation is less proficient in accessibility, it means more work for a second expert to validate the findings and finalize the reports. 

And that usually means at least as much work, if not more...


> And cost is an important aspect: just looking at our own test, a full tandem test costs 1200 Euro or more (and that barely covers the actual effort at acceptable rates). Many organisations interested in testing do not have big budgets, so making the test more complex by mandating AT tests has the drawback of increasing cost. The effect would be that even more orgs that would like a test don't do it simply because they cannot afford it. This already happens at the current cost level.

Right. But then again, testing takes time and an expertise that is rare on the market, so it's definitely worth something. I try not to undercut myself, and we usually run at about $300 CAD per page, because otherwise it sends the wrong message again.

Maybe we can come up with something in this TF that will allow us to be much more productive and therefore make this expertise more cost-effective and affordable.

/Denis

Received on Thursday, 8 September 2011 16:52:00 UTC