Re: Costs of testing with Silver

Alastair wrote:

I’m suggesting the structure starts with user-requirements, with general &
per-technology criteria underneath that. Each criterion could be assigned a
level, and certain criteria may not be technical requirements (i.e. not
like the binary content requirements we have now).
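
(A rough sketch of how I read that structure - every name below is purely
illustrative, nothing from an actual draft:)

// Hypothetical shape: user-requirements with general and
// per-technology criteria underneath, each assigned a level.
type Level = "bronze" | "silver" | "gold";

interface Criterion {
  id: string;
  level: Level;
  technology?: "html" | "pdf" | "native"; // absent = general criterion
  binary: boolean; // false = judged/graded, not a pass/fail requirement
}

interface UserRequirement {
  id: string;
  description: string;
  criteria: Criterion[];
}

// Example instance:
const nonTextContent: UserRequirement = {
  id: "ur-1",
  description: "Non-text content has a text alternative",
  criteria: [
    { id: "ur-1.general", level: "bronze", binary: true },
    { id: "ur-1.html", level: "silver", technology: "html", binary: true },
    { id: "ur-1.quality", level: "gold", binary: false },
  ],
};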


Sure, but with that type of granularity it will actually (probably?) add
to the cost of testing, not reduce it, as you will now have different
test runs based on content and platform. I'm not opposed to that in
principle, but it does throw a spanner into the cost discussion...


My larger concern is that we seem to be talking about a Good, Better, Best
scenario - but the flip side, in my experience and observation, is that
"good enough" is rarely either - it's neither good nor enough.

I do suspect, however, that what you are talking about is more like the
current WCAG at the Principles and Guidelines level, not the SC level. In
that respect, yes, I could envision individual SC continuing to be
targeted to content or platform. My overarching concern, however, is that
we don't start to perpetuate the idea that "70% Accessible is Good
Enough(TM)", or that "accessible" is somehow measured as an aggregate
score of how many check-boxes you've ticked off - we have far too many
examples of that failing in the wild today.


(Image: 70% accessible ramp - it stops roughly 4 steps short of the top of
the staircase)


(Image: Check-box accessibility - Braille signage/labels on buttons in an
elevator. The sign in braille reads "Door locked when lit")


(Image: Check-box accessibility, part 2 - Braille labels behind glass in a
vending machine)

...and yes, I have more... :)

JF


On Thu, Aug 30, 2018 at 4:44 PM, Alastair Campbell <acampbell@nomensa.com>
wrote:

> Hi John,
>
>
>
> > merging of those ideas is counter-productive (or at least confusing), as
> > Wilco started this thread specifically about the CO$T of testing, and not
> > about feasibility or testability.
>
>
>
> There is overlap: in the non-text contrast SC, the issue (that didn’t win
> the argument) was that it would take too long to test all the images
> because it wasn’t automatable. “Testability” has included whether it is
> possible to test in a reasonable time.
>
>
>
> An example for implementation was accessible authentication: there were
> several methods [1] to avoid having to rely on passwords. However, there
> wasn’t a browser-native method, so any organisation implementing it would
> have to do so independently, and currently that is relatively expensive.
>
> (Since then, WebAuthn has become a recommendation; we should revisit that.)
>
>
>
>
>
> Re plain language:
>
> > my concern then was around testability and scalability, which we could
> > not get agreement on.
>
>
>
> How is that different from cost? With infinite time and the creation of
> common dictionaries per topical domain, you could test it, but I agreed
> that it was not practical in a WCAG 2.x context.
>
>
>
>
>
> > we are here to create the right technical requirements and guidance to
> > benefit people with disabilities.
>
>
>
> Agreed, and we aren’t disagreeing very much. My point is we won’t *have*
> technical requirements to meet some of the user-requirements, so whatever
> conformance model we come up with will need to account for that.
>
>
>
> I’m suggesting the structure starts with user-requirements, with general &
> per-technology criteria underneath that. Each criterion could be assigned a
> level, and certain criteria may not be technical requirements (i.e. not
> like the binary content requirements we have now).
>
>
>
> Cheers,
>
>
>
> -Alastair
>
>
>
> [1] https://lists.w3.org/Archives/Public/w3c-wai-gl/2017OctDec/1037.html
>



-- 
*John Foliot* | Principal Accessibility Strategist

Deque Systems - Accessibility for Good

deque.com
