
Re: Examples of tests in Silver

From: Alastair Campbell <acampbell@nomensa.com>
Date: Mon, 20 Aug 2018 15:09:00 +0000
To: Jeanne Spellman <jspellman@spellmanconsulting.com>, Silver Task Force <public-silver@w3.org>
Message-ID: <0E7DA534-2C35-4F2A-8D40-43983AC255E8@nomensa.com>
Hi Jeanne & Silver TF,

Somewhat coincidentally, I've been going through ISO 9241-171 (software accessibility), which allows for:
- not-applicable 
- yes (fulfils recommendation)
- partially
- no
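For illustration, that four-level result scale could be modelled as a small enum. This is a hypothetical sketch; the class and member names are mine, not from ISO 9241-171:

```python
from enum import Enum

class Outcome(Enum):
    """Four-level result scale, as in ISO 9241-171 evaluations (names are illustrative)."""
    NOT_APPLICABLE = "not-applicable"
    YES = "yes"            # fulfils the recommendation
    PARTIALLY = "partially"
    NO = "no"

# Example: tallying results across a set of recommendations
results = [Outcome.YES, Outcome.PARTIALLY, Outcome.NOT_APPLICABLE]
fulfilled = sum(1 for r in results if r is Outcome.YES)
```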

Also (interestingly), many of the recommendations are quite usability-based, e.g.
"Provide understandable user notifications", which talks about notifications being short, simple, and written in clear language.

It helps that this ISO can normatively reference all the related UCD standards that it is part of. On the other hand, perhaps WCAG has been a more popular standard because it is self-contained and clearer to test?

There are a lot of different limitations we could look at for criteria; has anyone tried to list them all?

E.g. is the criterion:
- applicable to this interface? (e.g. captions not applicable if there is no multimedia).
- critical to the user's task? (e.g. alt text on graphical submit button)
- difficult for the type of site or content? (e.g. alt text on a social media site)
- a blocker for some users, or could they work around it? (e.g. heading structure doesn't block use, but reduced understanding)
- difficult to implement? 
- difficult for an independent evaluator to assess?
- dependent on the user's technology?
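Those questions could be captured per criterion as a simple checklist. A hypothetical sketch (the field names are my own, not from any Silver draft):

```python
from dataclasses import dataclass

@dataclass
class CriterionScope:
    """Scoping questions to ask of each criterion (field names are illustrative)."""
    applicable: bool             # e.g. captions don't apply if there is no multimedia
    critical_to_task: bool       # e.g. alt text on a graphical submit button
    hard_for_content_type: bool  # e.g. alt text on a social media site
    blocker: bool                # blocks use entirely vs. merely reduces understanding
    hard_to_implement: bool
    hard_to_evaluate: bool       # difficult for an independent evaluator to assess
    technology_dependent: bool   # depends on the user's technology

# Example: captions on a text-only page are simply not applicable
captions_on_text_page = CriterionScope(
    applicable=False, critical_to_task=False, hard_for_content_type=False,
    blocker=False, hard_to_implement=False, hard_to_evaluate=False,
    technology_dependent=False,
)
```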

I'm not convinced we should differentiate types of site (e.g. ecommerce), that's very tricky to do in practice. E.g. Google Play store is part ecommerce, part brochure, part music player... Can we scope it to the type of content / activity that makes that difficult or different instead?

I like the examples Audrey sent today; the eBay one ranks the issues purely by user impact.

With a different approach to showing the criteria for different technologies, perhaps Silver could rank/prioritise everything by user-impact and then it becomes more apparent which technologies are not as good for meeting accessibility goals?

Some criteria (e.g. alt text, plain language) might suit a points score, but I think that should then feed into a general pass / pass with minor issues / fail. Also, any instance of a fail that is critical to people's tasks should not be allowed a "pass with minor issues".
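That mapping from a points score to a coarse result, with the critical-failure override, might look like the following. This is a sketch under my own assumptions; the thresholds are illustrative, not proposed values:

```python
def conformance_result(score: float, has_critical_failure: bool,
                       pass_threshold: float = 0.95,
                       minor_threshold: float = 0.8) -> str:
    """Map a 0..1 points score to pass / pass with minor issues / fail.

    Any failure critical to the user's task forces an overall 'fail',
    regardless of the points score. Thresholds are illustrative only.
    """
    if has_critical_failure:
        return "fail"
    if score >= pass_threshold:
        return "pass"
    if score >= minor_threshold:
        return "pass with minor issues"
    return "fail"
```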

Overall, it seems like each criterion / recommendation / thing needs to define:
1. Applicability
2. Impact on user
3. Method of testing (that could be scoring)
4. Conformance result.

Given the variety of criteria we'll need, separating the test-method from conformance-result will be necessary to have understandable results.
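The four parts above, with the test method kept as a separate step from the conformance result, could be sketched like this (all names and structures are hypothetical, my own illustration):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Criterion:
    """One criterion, per the four parts above (names are illustrative)."""
    name: str
    applicability: Callable[[dict], bool]  # 1. does it apply to this interface?
    user_impact: str                       # 2. e.g. "blocker", "reduced understanding"
    test: Callable[[dict], float]          # 3. method of testing; may return a score

def evaluate(criterion: Criterion, page: dict) -> str:
    """4. Conformance result, derived from the test score as a separate step."""
    if not criterion.applicability(page):
        return "not applicable"
    return "pass" if criterion.test(page) >= 1.0 else "fail"

# Example: captions only apply when the page has multimedia
captions = Criterion(
    name="Captions",
    applicability=lambda page: page.get("has_multimedia", False),
    user_impact="blocker",
    test=lambda page: 1.0 if page.get("all_media_captioned") else 0.0,
)
```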

Kind regards,


On 06/08/2018, 18:08, "Jeanne Spellman" <jspellman@spellmanconsulting.com> wrote:

    Please review and comment. These are some examples I have roughly 
    outlined of how testing could work for alternative text with the 
    Conformance points and levels. It still needs a lot of discussion and 

    Comments are turned on in Google docs.  However, if you would prefer to 
    comment by email, please reply to the list.  We will be discussing this 
    in the Tuesday meeting (7 August).

Received on Monday, 20 August 2018 15:09:25 UTC
