
Re: Examples of tests in Silver

From: Audrey Maniez <amaniez@access42.net>
Date: Mon, 20 Aug 2018 14:02:00 +0200
To: Jeanne Spellman <jspellman@spellmanconsulting.com>, Silver Task Force <public-silver@w3.org>
Message-ID: <68310ad0-2657-6978-69f9-e15cc0b06425@access42.net>
Hello,

I'm new to participating; I have been following the group for months now and 
appreciate everything that happens on this list (you link to and create such 
relevant and interesting resources).

The conformance and testing subject is at the top of my list of 
interests, so I need to participate in that topic :-)

I have read the emails on the list and the document "examples on how silver 
conformance work", and I have some comments/suggestions/questions.

Sorry if I'm late in joining this topic, or if I bring up subjects 
you may have already covered...

*About the "Point System"*

In the section "Point System", we find some kind of unit tests (which 
refer to WCAG techniques, I assume), like checking whether the lang attribute 
exists... but *the Auto-WCAG CG is doing a big and great job* on exactly 
that point: detailed tests for conducting evaluations. Why not "wait" for 
(or participate in) that work to be completed, and "just" refer to it? 
Example of a rule created by the Auto-WCAG CG: 
https://auto-wcag.github.io/auto-wcag/rules/SC3-1-1-html-has-lang.html

*Harmonizing testing methods is the very objective of the ACT TF and 
Auto-WCAG.* Creating another testing method in the "Silver conformance" 
would introduce new differences in evaluation. Reusing or referring to the 
tests defined by the Auto-WCAG CG would save time, build on existing 
expertise, and avoid doing the same job twice. And since the objective of 
Silver is to create a new harmonized framework, I think basing Silver 
conformance on the Auto-WCAG work is important... Maybe it could be 
incorporated in some way, more like a "checklist", but I think it 
definitely should depend on that Auto-WCAG standard.

Then, I am not convinced by the first points system detailed (the 
example with images): we ask for a percentage of validity based on the 
number of images; if, say, 95% of images have good alternatives, then 
the site reaches the bronze level. But what if only one image on the 
page has no description, and that image is essential for accessing the 
information?

IMHO, a scale based on something like the *"severity of block scale"* 
described in the document you linked, 
https://ebay.gitbooks.io/oatmeal/priorities.html, is really much more 
appropriate. It would actually evaluate the impact of errors on accessing 
the information (not only the number of errors, which can indeed be 
relevant in some situations).
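To make the contrast concrete, here is a minimal sketch of the two scoring approaches. The weights, thresholds, and function names are purely illustrative assumptions on my part, not part of any Silver or Auto-WCAG proposal:

```python
# Illustrative sketch: percentage-based vs severity-weighted scoring.
# SEVERITY_WEIGHTS and the penalty formula are invented for this example.

# Each failure is tagged with a severity from an eBay-style
# "severity of block" scale.
SEVERITY_WEIGHTS = {"low": 1, "moderate": 3, "strong": 7, "major": 15}

def percentage_score(total_images, failing_images):
    """Naive percentage-of-valid-images score (the approach questioned above)."""
    if total_images == 0:
        return 100.0
    return 100.0 * (total_images - len(failing_images)) / total_images

def severity_score(failing_images):
    """Severity-weighted score: one 'major' failure outweighs many 'low' ones."""
    penalty = sum(SEVERITY_WEIGHTS[sev] for sev in failing_images)
    return max(0.0, 100.0 - penalty)

# A page with 100 images and a single failure -- but that one image
# blocks access to essential content.
failures = ["major"]
print(percentage_score(100, failures))  # 99.0 -- sails past a 95% threshold
print(severity_score(failures))         # 85.0 -- the block shows in the score
```

The point is only that a severity-weighted formula makes a single blocking failure visible, where a raw percentage hides it.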

*In France, the government has published similar documentation, 
"Accessibility failings: impacts on users"*: 
https://disic.github.io/guide-impacts_utilisateurs/ It describes and 
evaluates, by type of element (images, links, forms, etc.), the impact on 
information access, using a 4-point scale: low, moderate, strong, major. *If 
you are interested I can write an abstract in English (and a complete 
translation if some of you find it relevant).*

*About the "Silver Conformance"*

This is more of a general question. I understand the need for sharing with 
people who are not part of the accessibility audit process. We all 
have that kind of problem when communicating with managers, for example, or 
when giving developers an indicator of achievement, etc. But maybe it 
would be less confusing if we named it something other than "conformance". 
I mean, *"conformance" is a sacred word in the audit process*. Maybe 
something less formal, like "evaluation", "score", or "scale"?

Then, legally, sites must be 100% conformant (not "more than xx%"): "/To 
conform to WCAG 2.0, you need to satisfy the Success Criteria, that is, 
there is no content which violates the Success Criteria./" So it has to 
be clear that "Silver conformance" is a *tool*, and its purpose should be 
spelled out. The objective of "Silver conformance" must be detailed so 
that people do not think their sites are "conformant" just because they 
reach a minimum number of points.
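A small sketch of the distinction I am arguing for. The function names and the bronze threshold are illustrative assumptions, not anything defined by WCAG or Silver:

```python
# Illustrative sketch: WCAG conformance is binary (any violating content
# means non-conformance), while a points-based level is only an indicator.
# The 95-point bronze threshold below is an invented example value.

def wcag_conformant(violations):
    """WCAG 2.0 conformance: a single violation means non-conformance."""
    return len(violations) == 0

def silver_level(score, bronze_threshold=95):
    """A points-based 'level' is a communication tool, not conformance."""
    return "bronze" if score >= bronze_threshold else "below bronze"

print(wcag_conformant([]))           # True
print(wcag_conformant(["1.1.1"]))    # False -- one failure breaks conformance
print(silver_level(96))              # 'bronze' -- yet the site may not conform
```

A site can score "bronze" on a points scale while still failing the legal, binary notion of conformance, which is exactly why the two should not share a name.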


Many, many other thoughts to share; I really enjoy the project :-)

On 06/08/2018 at 19:04, Jeanne Spellman wrote:
> Please review and comment. These are some examples I have roughly 
> outlined of how testing could work for alternative text with the 
> Conformance points and levels. It still needs a lot of discussion and 
> details.
>
> https://docs.google.com/document/d/1aBoQ1HDindVnFk_7Ljp-whpK3zAiqAdgJxsgpqsNpgU/edit?usp=sharing 
>
>
> Comments are turned on in Google docs.  However, if you would prefer 
> to comment by email, please reply to the list.  We will be discussing 
> this in the Tuesday meeting (7 August).
>
> Thanks,
>
> jeanne
>
>
-- 
Access42 		

*Audrey MANIEZ*
Digital accessibility expert
06 22 11 29 62

Digital accessibility expertise and training

Website <https://access42.net/> — Twitter 
<https://twitter.com/access42net> — LinkedIn 
<https://www.linkedin.com/company/access42> — Newsletter 
<http://eepurl.com/dgHY2b>

Training organization listed in Datadock
Received on Monday, 20 August 2018 12:03:20 UTC

This archive was generated by hypermail 2.4.0 : Thursday, 24 March 2022 20:31:43 UTC