Re: AW: AW: evaluating web applications (was Re: Canadian Treasury Board accessibility assessment methodology)

Hi Everyone--

A couple of thoughts with respect to rating accessibility.

I think one of the problems we're having is creating objective measures 
for an occasionally subjective evaluation. I use the term "occasionally" 
because some criteria can deliver binary (yes/no, true/false) results.

That said, it seems to me that adding further levels of subjectivity 
to such an evaluation (for example, a Likert scale) only compounds the 
problem. As much as I find a 100-point scale appealing, especially one 
that weights items according to their significance, I believe such a 
system will be even more subject to varying interpretation.
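
To make that concern concrete, here is a minimal sketch (Python, with 
invented criteria and weights) of how two evaluators could take the 
very same binary findings and still produce different 100-point scores:

    # Illustrative only: the criteria and weights below are invented.
    # Each criterion result is binary: True = passes, False = fails.
    results = {"alt_text": True, "contrast": False, "keyboard": True}

    # Two evaluators assign different significance weights (each sums to 100).
    weights_a = {"alt_text": 50, "contrast": 30, "keyboard": 20}
    weights_b = {"alt_text": 20, "contrast": 60, "keyboard": 20}

    def score(results, weights):
        """Sum the weights of all passing criteria."""
        return sum(w for c, w in weights.items() if results[c])

    print(score(results, weights_a))  # 70
    print(score(results, weights_b))  # 40 -- same findings, different score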

On the other hand, I'm not sure that the role of the EVALTF is to decide 
which particular approach(es) should be used, but instead to ensure 
that whichever approach is used meets basic evaluation criteria (such as 
replicability and transparency). This will encourage people to develop 
differing, but equally valid approaches to measuring accessibility 
compliance, which is ultimately a benefit to everyone.

The most important outcome of an evaluation, it seems to me, is not to 
create a score per se, but to identify where there are accessibility 
issues so they can be repaired during development or, after release, 
taken into account by users, particularly persons with disabilities.

Thoughts?

Mike

On 5/23/2012 5:09 AM, Aurélien Levy wrote:
> Hi,
>
> There is another thing to consider. Maybe we will someday achieve a 
> perfect way to measure accessibility, but in the end, if the time 
> needed to get that score is three, five, or ten times longer than with 
> the basic conformance metrics, I'm not sure it's really useful.
> Yes, you will get a more precise score for the "real" accessibility of 
> your website, but then so what? The time needed to improve it is still 
> the same regardless of the quality of your metrics.
> Most people already see accessibility as a cost; I would rather they 
> spent their money and time on improving their website than on an 
> in-depth audit just to get the most accurate metrics.
>
> What we really need is:
> - a shared methodology
> - a shared, cost-efficient metric
> - shared test cases
> - shared tests
>
> With all that, we can start making comparisons between tools, experts, 
> etc. to improve ourselves.
>
> Regards,
>
> Aurélien Levy
> ----
> Temesis CEO
>> Hi Kerstin,
>>
>> As expressed in the paper, the statistics function has only recently 
>> been added. So at the moment, this is an informal assessment which we 
>> will need to back up once we have more data.
>>
>> But this is what we hope to get out of the stats function:
>>
>> 1. Tester reliability over time: How much are individual evaluators 
>> 'off the mark' compared to the final quality-assured result? This 
>> could show an improvement over time, an interesting metrics to assess 
>> the level of qualification especially of new and less experienced 
>> evaluators.
>>
>> 2. Inter-evaluator reliability: How close are the results of 
>> different evaluators assessing the same site / page sample?
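>>
>> (A minimal sketch, assuming per-criterion pass/fail verdicts; the 
>> ratings, criteria, and simple percent-agreement measure below are 
>> illustrative, not the tool's actual statistics function:)
>>
>>     # Hypothetical ratings, keyed by WCAG success criterion.
>>     evaluator_a = {"1.1.1": "pass", "1.4.3": "fail", "2.1.1": "pass"}
>>     evaluator_b = {"1.1.1": "pass", "1.4.3": "pass", "2.1.1": "pass"}
>>     final_qa    = {"1.1.1": "pass", "1.4.3": "fail", "2.1.1": "fail"}
>>
>>     def agreement(r1, r2):
>>         """Fraction of shared criteria on which two ratings agree."""
>>         shared = r1.keys() & r2.keys()
>>         return sum(r1[c] == r2[c] for c in shared) / len(shared)
>>
>>     # 1. Tester reliability: each evaluator vs. the final QA result.
>>     print(agreement(evaluator_a, final_qa))  # ~0.67
>>     print(agreement(evaluator_b, final_qa))  # ~0.33
>>
>>     # 2. Inter-evaluator reliability: evaluator vs. evaluator.
>>     print(agreement(evaluator_a, evaluator_b))  # ~0.67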
>>
>> There is likely to be little test-retest reliability data since the 
>> sites tested are usually a moving target, improved based on test 
>> results. Only rarely is the same site re-tested in a tandem test; 
>> this usually happens only after a re-launch.
>>
>> A fundamental problem in all these statistics is that there is no 
>> objective benchmark to compare individual rating results against, 
>> just the arbitrated and quality-assured final evaluation result. 
>> Given the scope for interpretation in accessibility evaluation, we 
>> think this lack of objectivity is inevitable and, in the end, down 
>> to the complexity of the field under investigation and the degree of 
>> human error in all evaluation.
>>
>>
>> -- 
>> Detlev Fischer
>> testkreis c/o feld.wald.wiese
>> Borselstraße 3-7 (im Hof), 22765 Hamburg
>>
>> Mobil +49 (0)1577 170 73 84
>> Tel +49 (0)40 439 10 68-3
>> Fax +49 (0)40 439 10 68-5
>>
>> http://www.testkreis.de
>> Beratung, Tests und Schulungen für barrierefreie Websites
>>

-- 
Michael S. Elledge
Associate Director
Usability/Accessibility Research and Consulting
Michigan State University
Kellogg Center
219 S. Harrison Rd Room 93
East Lansing, MI  48824
517-353-8977

Received on Wednesday, 23 May 2012 14:38:30 UTC