Re: Performance score per instance

Hello,

(I'm part of the French team led by Sylvie Duchateau from BrailleNet, 
which gave feedback this spring/summer. I'm not used to W3C habits, so 
if this email is inappropriate, don't hesitate to tell me.)

I think the 3 performance scores are a tremendous concept. The 
combination of those 3 metrics gives insights we never had before. 
Moreover, it is the combination of the three (with those formulas) 
that makes them really accurate.

And as the open-source editors of an automation tool, we have tested 
many different formulas for performance scores, and I have to say the 
ones from WCAG-EM are the ones I like most! Especially the last 
performance score, based on instances. This particular metric allows us 
to leverage all the data an automated tool can gather. We found these 
metrics so interesting that we decided to implement them as a 
demonstration of their usefulness (hopefully by the end of the year).
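As an illustration of why an instance-based score is so easy to compute from automated results, and of the weighting concern Detlev raises below, here is a minimal sketch. The scoring function (passed instances divided by total applicable instances, with optional per-instance weights) is my own assumption for demonstration, not the actual WCAG-EM Step 5c formula:

```python
# Illustrative sketch only -- NOT the WCAG-EM Step 5c formula.
# Assumption: score = (weight of passed instances) / (total weight).

def per_instance_score(instances, weights=None):
    """instances: list of booleans (True = pass).
    weights: optional per-instance impact weights (defaults to 1.0 each)."""
    if weights is None:
        weights = [1.0] * len(instances)
    total = sum(weights)
    if total == 0:
        return None  # no applicable instances
    passed = sum(w for ok, w in zip(instances, weights) if ok)
    return passed / total

# Unweighted: one crucial fail is "drowned" in nine minor passes.
results = [True] * 9 + [False]
print(per_instance_score(results))               # 0.9

# Weighted: giving the crucial instance weight 10 makes the fail dominate.
print(per_instance_score(results, [1.0] * 9 + [10.0]))  # ~0.47
```

The weights would of course require human judgement, which is exactly the point under discussion.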

To connect with Detlev's post: to me, the performance score should be 
kept separate from a "progression" indicator (I mean from a 
methodology / priority point of view). The idea of giving clues for 
setting priorities in one's accessibility work is different from 
having a cold score that is only there to measure. The "Mipaw" 
project, presented at Web4All 2012, had this objective in mind (as far 
as I know, there is no more work on it).

In short, my feeling is that the performance scores are a really good 
and innovative concept. It would be a shame to break them.

Sincerely,
Matthieu

On 02/11/2013 10:14, Detlev Fischer wrote:
> Hi everyone,
>
> I guess the performance score approach is more or less a straight import from UWEM - something in which some of us may have a vested interest while others have never heard of it.
>
> In the third approach, per instance, there is no provision for the weighting or flagging of instances in relation to  their actual impact on accessibility (yes, something requiring human judgement). Without that, results can be completely misleading. I therefore suggest we drop the per instance calculation in Step 5c.
>
> Having said that, I think a more fine-grained assessment of performance is very useful - it just happens that it can neither be automated nor treated in this 'blind' per instance fashion. Examples we have discussed are images without alt text (instances range from absolutely crucial to negligible), non-semantic text headings (impact would depend on place / level of hierarchy), language of parts (critical in an online dictionary, less so for foreign terms in a longer text) etc. etc. One crucial fail will often be 'drowned' in a heap of less important passes. So in my view it is just not something the WCAG EM should advise at all.
>
> Best, Detlev
>
>
>   
>
>
> Detlev Fischer
> testkreis - das Accessibility-Team von feld.wald.wiese
> c/o feld.wald.wiese
> Thedestraße 2
> 22767 Hamburg
>
> Tel   +49 (0)40 439 10 68-3
> Mobil +49 (0)1577 170 73 84
> Fax   +49 (0)40 439 10 68-5
>
> http://www.testkreis.de
> Beratung, Tests und Schulungen für barrierefreie Websites
>


-- 
Phone: +33 9 72 11 26 06
Mobile: +33 6 73 96 79 59
Twitter: @mfaure <http://twitter.com/mfaure>

Tanaguru free-libre software for accessibility assessment 
<http://www.Tanaguru.org/>
KBAccess collaborative database of good and bad examples of Web 
Accessibility <http://www.kbaccess.org/>

Received on Thursday, 7 November 2013 06:33:35 UTC