Re: AW: AW: evaluating web applications (was Re: Canadian Treasury Board accessibility assessment methodology)

Hi,

there is also another thing to consider. Maybe we will someday achieve 
a perfect way to measure accessibility, but in the end, if the time 
needed to get that score is three, five, or ten times longer than with 
basic conformance metrics, I'm not sure it's really useful.
Yes, you will get a more precise score for the "real" accessibility of 
your website, but then so what? The time needed to improve it is still 
the same regardless of the quality of your metrics.
Most people already see accessibility as a cost; I would rather they 
spend their money and time on improving their website than on an 
in-depth audit just to get the most accurate metrics.

What we really need is:
- a shared methodology
- a shared, cost-efficient metric
- shared test cases
- shared tests

With all that, we can start making comparisons between tools, experts, 
etc., and improve ourselves.

Regards,

Aurélien Levy
----
Temesis CEO
> Hi Kerstin,
>
> As explained in the paper, the statistics function has only recently been added, so at the moment this is an informal assessment, which we will need to back up once we have more data.
>
> But this is what we hope to get out of the stats function:
>
> 1. Tester reliability over time: How much are individual evaluators 'off the mark' compared to the final quality-assured result? This could show an improvement over time, an interesting metric for assessing the level of qualification, especially of new and less experienced evaluators.
>
> 2. Inter-evaluator reliability: How close are the results of different evaluators assessing the same site / page sample?
>
> There is likely to be little test-retest reliability data since the sites tested are usually a moving target, improved based on test results. Only rarely is the same site re-tested in a tandem test; this usually happens only after a re-launch.
>
> A fundamental problem with all these statistics is that there is no objective benchmark to compare individual rating results against, just the arbitrated and quality-assured final evaluation result. Given the scope of interpretation in accessibility evaluation, we think this lack of objectivity is inevitable and, in the end, down to the complexity of the field under investigation and the degree of human error in all evaluation.
>
>
> --
> Detlev Fischer
> testkreis c/o feld.wald.wiese
> Borselstraße 3-7 (im Hof), 22765 Hamburg
>
> Mobil +49 (0)1577 170 73 84
> Tel +49 (0)40 439 10 68-3
> Fax +49 (0)40 439 10 68-5
>
> http://www.testkreis.de
> Consulting, testing, and training for accessible websites
>
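
As an aside, both statistics described above are cheap to compute once evaluators record per-criterion pass/fail verdicts, so they need not conflict with the cost concern raised at the top of this thread. Below is a minimal Python sketch, not taken from any existing tool: the criterion IDs, verdicts, and function names are made up for illustration. It measures point 1 as the share of an evaluator's verdicts that differ from the quality-assured final result, and point 2 with Cohen's kappa, which corrects raw agreement for the agreement expected by chance.

def cohen_kappa(a, b):
    """Cohen's kappa for two equal-length lists of boolean verdicts."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n   # observed agreement
    p_a, p_b = sum(a) / n, sum(b) / n             # each rater's "pass" rate
    p_e = p_a * p_b + (1 - p_a) * (1 - p_b)       # chance agreement
    return 1.0 if p_e == 1 else (p_o - p_e) / (1 - p_e)

def deviation_from_final(verdicts, final):
    """Share of verdicts that differ from the quality-assured result."""
    return sum(v != f for v, f in zip(verdicts, final)) / len(final)

# Hypothetical pass/fail verdicts for five WCAG success criteria
criteria    = ["1.1.1", "1.3.1", "1.4.3", "2.4.4", "4.1.2"]
evaluator_a = [True, False, True, True,  False]
evaluator_b = [True, True,  True, False, False]
final_qa    = [True, False, True, False, False]  # arbitrated result

print("A vs final:", deviation_from_final(evaluator_a, final_qa))       # 0.2
print("B vs final:", deviation_from_final(evaluator_b, final_qa))       # 0.2
print("kappa(A, B):", round(cohen_kappa(evaluator_a, evaluator_b), 2))  # 0.17
print("disagreements:", [c for c, x, y in
                         zip(criteria, evaluator_a, evaluator_b) if x != y])

Recording such verdicts per criterion, rather than only an aggregate score, is what makes both reliability measures possible in the first place.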

Received on Wednesday, 23 May 2012 09:09:51 UTC