
Re: What if Silver didn't have levels?

From: Alastair Campbell <acampbell@nomensa.com>
Date: Mon, 21 Oct 2019 15:51:11 +0000
To: "public-silver@w3.org" <public-silver@w3.org>
Message-ID: <C755E4EF-D2FA-41C7-8A1C-D264212C3843@nomensa.com>
Léonie wrote:

	> I don't think that setting a level somewhere on the scale is a good idea.

If there is a score (e.g. percentage) and the minimum level is 100%, that is what we have now. You have to be perfect to pass. 

Unless you were thinking that there would be within-guideline scoring first, then an overall score? 
E.g. for alt text you score over (say) 80% and that means you 'pass' that guideline, and passing every guideline then scores 100%.
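As a rough sketch of that idea (the guideline names, the 80% threshold, and the all-or-nothing roll-up are all illustrative assumptions, not a proposal):

```python
# Hypothetical within-guideline scoring: each guideline gets a percentage
# from its instances, "passes" above a threshold, and the overall score is
# the percentage of guidelines passed. All numbers here are invented.

PASS_THRESHOLD = 0.8  # assumed per-guideline pass mark (the "80%" above)

def guideline_score(passed_instances, total_instances):
    """Fraction of instances that pass for one guideline."""
    if total_instances == 0:
        return 1.0  # no applicable instances: treat as fully passing
    return passed_instances / total_instances

def overall_score(results):
    """results: {guideline: (passed, total)} -> % of guidelines passed."""
    passed = sum(
        1 for p, t in results.values()
        if guideline_score(p, t) >= PASS_THRESHOLD
    )
    return 100 * passed / len(results)

# Example: alt text scores 85% (passes), headings score 60% (fails),
# so the site as a whole scores 50.
site = {"alt-text": (17, 20), "headings": (6, 10)}
print(overall_score(site))  # 50.0
```

Under a model like this, "100% as the minimum" would mean every guideline clears its threshold, which is less brittle than every instance being perfect.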

In this context I see two main advantages to a score based approach:
1. You don't have to be 'perfect' to pass (John covered that on Friday).
2. It makes room for improvements above the current minimum.

We have a bunch of things that are either task-based, 'best practices', currently at AAA, or process-based that would be good for organisations to aim for. If the equivalent of WCAG 2.2 gets you to 60% (picked as a random-ish target), then you can score more for hitting the aspects that WCAG doesn't include at the moment.

NB: It could be that some of the things not included in WCAG 2.2 should be within the minimum level, I'm not trying to predict that.

It would be up to governments or other policy makers to set a higher target for themselves or for particular services. It would also allow organisations to pick the items that are appropriate to their industry/content. There isn't really a mechanism for that at the moment; we just have A/AA/AAA, and quite a few of the AAA SCs are hard to apply to some content.

	> I think it's time we stopped talking about accessibility like it's impossible. All we're doing is validating excuses for not doing it.

There are aspects that are very difficult, or impossible, for organisations of a certain scale, because there is a weakest-link problem: any fail is then seen as a complete fail. The current setup doesn't incentivise what you say to people in presentations: just keep making it better, one step at a time.

I'm in favour of a scoring method that improves the incentives for making digital products better. 

For example, both of these would be a 'fail' for a site currently:
- No headings usage at all across a content-heavy site.
- A list of 4 items on one content page using dashes instead of ul/li.

These are currently treated as the same: the site fails 1.3.1. However, I'd be jumping up and down about the first one, and marking the other as a QA issue, to be taken care of but not terrible. We should be able to make that distinction with the scoring method, and if 100% is the target I don't think that can work.
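One way to make that distinction concrete is to weight issues by severity rather than treating every 1.3.1 failure as a binary fail. A minimal sketch, where the severity labels and penalty values are invented purely for illustration:

```python
# Hypothetical severity-weighted scoring: both example sites "fail" 1.3.1
# under WCAG 2.x, but a site-wide absence of headings should cost far more
# than one list marked up with dashes. Penalty values are made up.

SEVERITY_PENALTY = {"blocker": 50, "major": 15, "minor": 2}

def task_score(issue_severities):
    """issue_severities: severities of the failures found on one user task.
    Returns a 0-100 score; 100 means no issues on that task."""
    return max(0, 100 - sum(SEVERITY_PENALTY[s] for s in issue_severities))

no_headings_site = task_score(["blocker"])  # site-wide headings failure
dash_list_page = task_score(["minor"])      # one dash-based list
print(no_headings_site, dash_list_page)     # 50 98
```

With something in this direction, the first site scores much worse than the second, instead of both simply "failing 1.3.1".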

Léonie wrote:
	> development teams who've asked me (or the company I was working for) to prioritise the issues identified in an audit. If the levels system was really working, that should be apparent from the A, AA, AAA assignments, but it isn't, and that's another reason I think it's a broken model.

We also get asked that sort of question, and I agree that A/AA/AAA doesn't answer it.

The problem is that those levels apply across instances and have to average out how 'bad' an issue is. Within any particular SC you will have instances that are terrible and instances that are inconsequential. It is a matter of how the issue affects your task.

If the scoring is more task-based and granular then you have a way of scoring what matters more effectively.

However, if 100% is the 'baseline', that won't work; there has to be some flexibility.

Also, in terms of migrating from WCAG 2.x, if we start that at 100% and then add lots of new guidelines that people haven't been able to apply before, that will make migration much more difficult.



Received on Monday, 21 October 2019 15:51:22 UTC
