RE: Performance score per instance

Hi Matthieu,

 

I am interested in your tables even if they are in French. (I like the
French very much.)

 

Best,

Emmanuelle

 

Emmanuelle Gutiérrez y Restrepo

Trustee and General Director

Fundación Sidar - Acceso Universal

Email: coordina@sidar.org

Personal: Emmanuelle@sidar.org

Web: http://sidar.org

 

From: Matthieu Faure [mailto:ml@Open-S.com]
Sent: Thursday, 7 November 2013 10:59
To: public-wai-evaltf@w3.org
Subject: Re: Performance score per instance

 

This is not yet implemented, so I don't have user feedback on it yet.

To me, the performance scores given by a tool do not reflect the
"accessibility level" (this way we avoid any trolling :)).

But the user feedback I actually have is: "once I've got all my results
(over my N thousand pages), could the tool give me some insights about how
to be efficient in fixing accessibility?"

To be precise, the scope I consider is only what a given tool can automate.
I know this is not the whole of the accessibility work to be done, but I
think it is a good starting point (and clients like it because it makes
them autonomous: they don't have to "call the expert" for tasks such as
fixing titles or headings).

Once we accept that reduced scope, providing the user with the three
performance scores gives them an instant overview of where they are and
where to act first. For instance, I can easily see whether correcting one
point would fix many instances of a given problem. This way I am more
efficient.
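
To make that concrete, here is a minimal Python sketch under two
assumptions: the per-instance score is simply passed instances divided by
tested instances, and the check names and results are invented (this is not
Tanaguru's actual code):

    # Minimal sketch (not Tanaguru's actual code): per-instance score taken
    # as passed instances / tested instances, over invented results.
    from collections import Counter

    # (check_id, outcome) for every instance the tool could test automatically
    results = [
        ("img-alt", "fail"), ("img-alt", "fail"), ("img-alt", "pass"),
        ("page-title", "pass"), ("heading-order", "fail"),
    ]

    passed = sum(1 for _, outcome in results if outcome == "pass")
    print(f"per-instance score: {passed / len(results):.0%}")  # 40%

    # Failures grouped by check: fixing the top one clears the most instances
    fails = Counter(check for check, outcome in results if outcome == "fail")
    for check, count in fails.most_common():
        print(f"{check}: {count} failing instance(s)")

Sorting failures by count is the "correct one point, fix many instances"
shortcut described above.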

If you want, I have written a document with tables and numeric examples to
demonstrate the usefulness of the scores to my French colleagues; the
drawback is that it is still in French and 3-4 pages long.

Cheers,
Matthieu



On 07/11/2013 10:37, Shadi Abou-Zahra wrote:

Thank you for your input, Matthieu; it is an open discussion. 

The issue is really: what does the "per instance" score actually mean? In
the example you provided you seem to count only the instances that the tool
can identify. In the best case these are the automatable ones. Seeing this
as a performance score could be yet more misleading. For example, a
performance score of "90%" could be very far from any reality, given that
it actually reflects only the automatable instances that a particular tool
was able to identify. While there is research indicating a potential
correlation between such scores and the actual level of accessibility
(which is why we proposed this score at all), to my knowledge we do not
have definitive evidence at this point in time. 
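
A hypothetical numeric illustration of this caution (all figures invented):
when the tool can test only part of the instances, its reported score and
the score over all instances can diverge widely.

    # Invented figures: the tool sees only the automatable instances.
    automatable_pass, automatable_fail = 90, 10   # tool reports 90/100 = 90%
    manual_pass, manual_fail = 50, 100            # found only by human review

    tool_score = automatable_pass / (automatable_pass + automatable_fail)
    true_score = (automatable_pass + manual_pass) / (
        automatable_pass + automatable_fail + manual_pass + manual_fail)
    print(f"tool-reported score: {tool_score:.0%}")       # 90%
    print(f"score over all instances: {true_score:.0%}")  # 56%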

Besides the fact that this score leverages the data from an automated tool,
how do you see the score being used in practice? Do the users of your tool
implementation really understand what the score represents? 

Best, 
  Shadi 


On 7.11.2013 07:33, Matthieu Faure wrote: 



Hello, 

(I'm part of the French team led by Sylvie Duchateau from Braillenet, 
which gave feedback this spring/summer. I'm not used to W3C conventions, 
so if this email is inappropriate, don't hesitate to tell me.) 

I think the 3 performance scores are a tremendous concept. The 
combination of those 3 metrics gives insights we never had before. 
Moreover, it is the combination of the three (with those formulas) 
that is really accurate. 

And as an open-source editor of an automation tool, we have tested many 
different formulas for performance scores, and I have to say the ones from 
WCAG-EM are the ones I like most, especially the last performance score, 
based on instances. This metric allows us to leverage all the data an 
automated tool can gather. We found these metrics so interesting that we 
decided to implement them as a demonstration of their usefulness 
(hopefully by the end of the year). 

To connect with Detlev's post: to me, the performance score should be taken 
separately from a "progression" indicator (I mean from a methodology / 
priority point of view). Giving clues about priorities for progressing in 
one's accessibility work is different from having a cold score that is only 
there to measure. The "Mipaw" project, presented at Web4All 2012, had this 
objective in mind (as far as I know, there is no further work on it). 

In short, my feeling is that the performance scores are a really good 
and innovative concept. It would be a shame to break them. 

Sincerely, 
Matthieu 

On 02/11/2013 10:14, Detlev Fischer wrote: 



Hi everyone, 

I guess the performance score approach is more or less a straight 
import from UWEM - something in which some of us may have a vested 
interest while others have never heard of it. 

In the third approach, per instance, there is no provision for the 
weighting or flagging of instances in relation to their actual impact 
on accessibility (yes, something requiring human judgement). Without 
that, results can be completely misleading. I therefore suggest we 
drop the per instance calculation in Step 5c. 
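
To illustrate, a minimal sketch with invented impact weights (assigning
them requires exactly the human judgement mentioned above): one crucial
failure is drowned by many negligible passes unless instances are weighted.

    # One crucial failure among forty negligible passes; weights invented.
    instances = [("fail", 10.0)]         # e.g. missing alt on a key image
    instances += [("pass", 0.5)] * 40    # forty decorative images

    unweighted = sum(1 for o, _ in instances if o == "pass") / len(instances)
    weighted = (sum(w for o, w in instances if o == "pass")
                / sum(w for _, w in instances))
    print(f"unweighted per-instance score: {unweighted:.0%}")  # 98%
    print(f"impact-weighted score:         {weighted:.0%}")    # 67%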

Having said that, I think a more fine-grained assessment of 
performance is very useful - it just happens that it can neither be 
automated nor treated in this 'blind' per instance fashion. Examples 
we have discussed are images without alt text (instances range from 
absolutely crucial to negligible), non-semantic text headings (impact 
would depend on place / level of hierarchy), language of parts 
(critical in an online dictionary, less so for foreign terms in a 
longer text) etc. etc. One crucial fail will often be 'drowned' in a 
heap of less important passes. So in my view it is just not something 
the WCAG EM should advise at all. 

Best, Detlev 




Detlev Fischer 
testkreis - the accessibility team of feld.wald.wiese 
c/o feld.wald.wiese 
Thedestraße 2 
22767 Hamburg 

Tel    +49 (0)40 439 10 68-3 
Mobile +49 (0)1577 170 73 84 
Fax    +49 (0)40 439 10 68-5 

http://www.testkreis.de 
Consulting, testing, and training for accessible websites 





-- 
Phone: +33 9 72 11 26 06
Mobile: +33 6 73 96 79 59
Twitter: @mfaure <http://twitter.com/mfaure> 

Tanaguru free-libre software for accessibility assessment
<http://www.Tanaguru.org/>  
KBAccess collaborative database of good and bad examples of Web
Accessibility <http://www.kbaccess.org/>  

Received on Thursday, 7 November 2013 12:25:57 UTC