RE: What if Silver didn't have levels?

A couple of points.

There is no such thing as a defect-free product.  The Minimum Viable Product (MVP) approach has survived the test of the marketplace, and if we were to hypothetically require that products be defect-free, we'd stop development and innovation (except perhaps for the largest companies, which can afford to pay for 'defect free', though even then I don't believe it would succeed).

If we use a single numeric value for a score (say 0%-100%), then we have two choices.  Either we let the score be the score and allow the marketplace and legal systems to decide what's allowable and what's not (W3C wouldn't provide any value judgement), or we provide a value judgement of what we call good enough, great, awesome, etc.

If we do in fact mix "score" and "value judgement", I'm not a proponent of making the only acceptable score a perfect score.  In my opinion that's not realistic.  It will stall development and innovation.  I also believe it will discourage adoption: many individual developers, small businesses, and perhaps large businesses will "give up", meaning they abandon attempts to conform to the standard for a given project, or shelve the project entirely.

I believe we must either abandon trying to articulate a value judgement and let the marketplace decide how to interpret the score (not my favorite option, though I prefer it far more than 100% being the only acceptable value judgement), or provide a value judgement that includes ranges of value below a perfect score.
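To illustrate the second option, here's a tiny sketch; the cut-offs are completely made up, and the labels are just borrowed from my examples above, not a proposal:

def value_judgement(score):
    """score: 0-100; the thresholds are placeholders, not a proposal."""
    if score >= 95:
        return "awesome"
    if score >= 80:
        return "great"
    if score >= 60:
        return "good enough"
    return "not yet acceptable"

print(value_judgement(85))  # "great"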

Regards,
Chuck

-----Original Message-----
From: Alastair Campbell <acampbell@nomensa.com> 
Sent: Monday, October 21, 2019 9:51 AM
To: public-silver@w3.org
Subject: Re: What if Silver didn't have levels?

Léonie wrote:

 > I don't think that setting a level somewhere on the scale is a good idea.

If there is a score (e.g. percentage) and the minimum level is 100%, that is what we have now. You have to be perfect to pass. 

Unless you were thinking that there would be within-guideline scoring first, then an overall score? 
E.g. for alt text you score over (say) 80%, which means you 'pass' that guideline, and passing every guideline then gives an overall 100%.
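For what it's worth, a minimal sketch of that model in Python; the 80% threshold, the boolean per-instance results, and the equal weighting of guidelines are all assumptions for illustration, not proposals:

PASS_THRESHOLD = 0.8  # assumed per-guideline cut-off

def guideline_score(instances):
    """instances: list of booleans, True where an instance conforms."""
    return sum(instances) / len(instances) if instances else 1.0

def overall_score(results):
    """results: dict of guideline name -> list of instance booleans."""
    passed = sum(1 for inst in results.values()
                 if guideline_score(inst) >= PASS_THRESHOLD)
    return passed / len(results)

site = {"alt-text": [True] * 9 + [False],      # 90%, passes the guideline
        "headings": [True] * 3 + [False] * 3}  # 50%, fails the guideline
print(overall_score(site))  # 0.5 -> half the guidelines pass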

In this context I see two main advantages to a score based approach:
1. You don't have to be 'perfect' to pass (John covered that on Friday).
2. Making room for improvements above the current minimum.

We have a bunch of things that are task-based, 'best practices', currently at AAA, or process-based, which would be good for organisations to aim for. If the equivalent of WCAG 2.2 gets you to 60% (picked as a random-ish target), then you can score more by hitting the aspects that WCAG doesn't include at the moment (see the sketch after the note below).

NB: It could be that some of the things not included in WCAG 2.2 should be within the minimum level; I'm not trying to predict that.
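To make the arithmetic concrete, a sketch; the 60% baseline is from above, but the extra items and their point values are invented placeholders:

BASELINE = 60  # points for meeting the WCAG 2.2 equivalent

extras = {  # hypothetical 'beyond WCAG 2.2' items and point values
    "task-based testing": 15,
    "current AAA items": 10,
    "organisational process": 15,
}

def score(meets_baseline, extras_met):
    points = BASELINE if meets_baseline else 0
    points += sum(extras[item] for item in extras_met)
    return min(points, 100)

print(score(True, {"task-based testing"}))  # 75: above the minimum without being 'perfect'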

It would be on governments or other policy makers to set a higher target for themselves, or for particular services. It would also allow organisations to pick particular items that are appropriate to their industry / content. There isn't really a mechanism for that at the moment; we just have A/AA/AAA, and quite a few of the AAA SCs are hard to apply to some content.


 > I think it's time we stopped talking about accessibility like it's impossible. All we're doing is validating excuses for not doing it.

There are aspects that are very difficult or impossible for organisations of a certain scale, as there is a weakest-link problem: any fail is then seen as a complete fail. The current setup doesn't incentivise what you say to people in presentations: just keep making it better, one step at a time. 

I'm in favour of a scoring method that improves the incentives for making digital products better. 

For example, both of these would be a 'fail' for a site currently:
- No headings usage at all across a content-heavy site.
- A list of 4 items on one content page using dashes instead of ul/li.

These are currently treated as the same: the site fails 1.3.1. However, I'd be jumping up and down about the first one, and marking the other as a QA issue, to be taken care of but not terrible. We should be able to make that distinction with the scoring method, and if 100% is the target I don't think that can work.
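A toy severity weighting shows the kind of distinction I mean; the severity and reach figures below are made up:

def issue_impact(severity, reach):
    """severity: 0 (cosmetic) to 1 (blocking); reach: fraction of pages/tasks affected."""
    return severity * reach

issues = [  # (description, severity, reach) - numbers invented
    ("no headings across a content-heavy site", 0.9, 1.0),
    ("one list using dashes instead of ul/li",  0.2, 0.01),
]

for desc, sev, reach in issues:
    print(f"{desc}: impact {issue_impact(sev, reach):.3f}")
# no headings across a content-heavy site: impact 0.900  (a huge hit)
# one list using dashes instead of ul/li:  impact 0.002  (a minor deduction, not a fail)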


Léonie wrote:
 > development teams who've asked me (or the company I was working for) to prioritise the issues identified in an audit. If the levels system was really working, that should be apparent from the A, AA, AAA assignments, but it isn't, and that's another reason I think it's a broken model.

We also get asked that sort of question, and I agree the A/AA/AAA levels don't answer it.

The problem is that those levels apply across instances and have to average out how 'bad' an issue is. Within any particular SC you will have instances that are terrible and instances that are inconsequential. It is a matter of how the issue affects your task.

If the scoring is more task-based and granular then you have a way of scoring what matters more effectively.
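As a sketch of the shape of that (the structure and the numbers are assumed, purely illustrative):

def task_score(impacts):
    """impacts: per-issue effect on this task, 0 (none) to 1 (blocks it)."""
    score = 1.0
    for impact in impacts:
        score *= 1.0 - impact  # each issue compounds the degradation
    return score

tasks = {  # hypothetical tasks; a similar failure weighs differently per task
    "complete checkout":   [0.8],   # unlabelled field on the payment step
    "read a news article": [0.05],  # similar markup issue, barely in the way
}

overall = sum(task_score(i) for i in tasks.values()) / len(tasks)
print(round(overall, 3))  # 0.575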

However, if 100% is the 'baseline', that won't work, there has to be some flexibility. 

Also, in terms of migrating from WCAG 2.x, if we start that at 100% and then add lots of new guidelines that people haven't been able to apply before, that will make migration much more difficult.

Cheers,

-Alastair
