Re: What if Silver didn't have levels?

As I said before, I like Léonie's idea of starting with 100% and then subtracting percentages for "less than perfect" stuff. I am not sure whether we need to set some MVP level. If we end up doing that, I would just note that several approaches (not just the German one) have landed on something around 90 out of 100 (percent, or points).
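To make the arithmetic concrete, here is a rough Python sketch of that kind of scheme. The deduction values and the 90-point cut-off are placeholders of mine, not anything proposed in this thread:

# A sketch of the "start at 100% and subtract" idea, just to show the
# arithmetic. Deduction values and the MVP cut-off are illustrative only.

DEDUCTIONS = {
    "missing captions": 10,
    "missing text alternatives": 5,
    "insufficient contrast": 3,
}

MVP_THRESHOLD = 90  # "around 90 of 100", as some approaches have used

def score(issues_found):
    # Subtract one deduction per issue type found; never go below zero.
    total = 100 - sum(DEDUCTIONS.get(issue, 0) for issue in issues_found)
    return max(total, 0)

s = score(["missing captions", "insufficient contrast"])
print(s, "meets MVP" if s >= MVP_THRESHOLD else "below MVP")  # 87 below MVP

Obviously the hard part is not the subtraction but deciding what each deduction should be worth, which is exactly the judgement problem discussed below.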

> On 18.10.2019 at 21:52, John Foliot <john.foliot@deque.com> wrote:
> 
> What if, instead of missing captions, it was a site missing multiple textual alternatives? What does "missing text alternatives" count for? 1%? 2%? 5%? More?  Is that score based upon each image on a page, a cumulative score of all images on the page, a grand total of all images on all pages, or a representative sampling? Text-heavy (or text-only) sites will likely get a positive bump there, whereas image-rich pages/sites may take a biased hit if we base it simply on the number of images on a page or site alone. Yet, depending on each actual image, the severity and impact on the end user of a missing text alternative could conceivably range from merely annoying to outright dangerous. 


We used to make that judgement on a page-based level (states like "property viewer opened" may be defined as separate pages) and rate each page per SC on a five-point Likert scale of "pass", "near pass", "partly pass", "near fail", "fail" (and "not applicable", of course). That judgement reflected both quantitative AND qualitative aspects. Going for a full Pass/Fail per SC (when we aligned with the current WCAG conformance approach) did not take the human-judgement problem away; it just shifted the focus onto the Pass/Fail flip-over point. In essence, it meant agonizing over whether or not something could still be called a "near pass", i.e., be within tolerances.
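For what it's worth, the calculation behind that kind of rating looks roughly like the sketch below. The numeric mapping is mine and purely illustrative; it is not the published methodology:

# A sketch of a page-based Likert rating per SC, with an illustrative
# numeric mapping, contrasted with the strict Pass/Fail collapse.

LIKERT = {
    "pass": 1.0,
    "near pass": 0.75,
    "partly pass": 0.5,
    "near fail": 0.25,
    "fail": 0.0,
}

def page_score(ratings):
    # Average the applicable SC ratings for one page (or page state).
    applicable = [LIKERT[r] for r in ratings.values() if r != "not applicable"]
    return sum(applicable) / len(applicable) if applicable else None

def strict_pass_fail(ratings):
    # The current WCAG-style collapse: anything short of "pass" fails.
    return all(r in ("pass", "not applicable") for r in ratings.values())

ratings = {"1.1.1": "near pass", "1.4.3": "pass", "2.4.7": "not applicable"}
print(page_score(ratings))        # 0.875 -- graded view keeps the nuance
print(strict_pass_fail(ratings))  # False -- the "near pass" becomes a fail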

I believe the agonizing problem of judging tolerances will not go away, whatever scheme we end up with.
