Re: Extension conflict/compatibility requirement

Actually that SC says the alternative text shall serve the same purpose as the non-text content. I know it might seem inappropriate to raise this question here, but this SC is not as easily tested as one might imagine. It is easy to have one person check the alt text for an image and tell whether the description matches the image, but it is difficult to automate this test, that is, to have it done by a computer program. There was a website we once tested in China where most of the images on its pages were captioned as "This is an image". If you think the semantics of images are important, that alt text should not pass the SC.
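To illustrate, here is a rough sketch of the kind of check a program can do (the function names and the placeholder list are made up for this example, not taken from any real tool). It can flag images whose alt text is missing or looks like a generic placeholder such as "This is an image", but it cannot tell whether a non-generic description actually matches the image:

from html.parser import HTMLParser

# Hypothetical list of placeholder phrases; a real tool would need a much
# richer, language-aware list.
GENERIC_ALT = {"image", "picture", "photo", "this is an image"}

class AltTextChecker(HTMLParser):
    """Collects img elements whose alt text is missing or looks generic."""

    def __init__(self):
        super().__init__()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        alt = attrs.get("alt")
        if alt is None:
            # No alt attribute at all.
            self.issues.append(("missing alt", attrs.get("src")))
        elif alt.strip() and alt.strip().lower() in GENERIC_ALT:
            # Alt text is present but apparently just a placeholder.
            # (Empty alt="" is left alone; it is the convention for
            # decorative images.)
            self.issues.append(("placeholder alt", attrs.get("src")))

def check_alt_text(html):
    """Return a list of (problem, src) pairs for suspicious img elements."""
    checker = AltTextChecker()
    checker.feed(html)
    return checker.issues

print(check_alt_text('<img src="chart.png" alt="This is an image">'))
# -> [('placeholder alt', 'chart.png')]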

We have encountered many similar issues in accessibility evaluations of websites in China. There is a huge number of pages in a site, and human inspection is prohibitively expensive, so the evaluation must be automated. But in this sense, you find that many SCs are not easily TESTABLE. Should we consider the ease of automating the tests in the future?
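For example, assuming a helper like the check_alt_text() sketch above and a plain list of page URLs, a batch audit could look roughly like this (again only a sketch, not a real evaluation tool):

from urllib.request import urlopen

def audit_pages(urls):
    """Run the alt-text check over many pages and collect the findings."""
    report = {}
    for url in urls:
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", errors="replace")
        except OSError as err:
            report[url] = [("fetch error", str(err))]
            continue
        report[url] = check_alt_text(html)
    return report

Even then, such a script only catches the obvious cases; judging whether a real description serves the same purpose as the image still needs a human.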

Can Wang

Sent from my iPhone

> On Oct 29, 2015, at 08:41, Gregg Vanderheiden RTF <gregg@raisingthefloor.org> wrote:
> 
> Hmmm
> not quite
> 
> 
> For example, an SC that says there must be alternate text (that is accessibility supported) is very testable.  There are many ways to do it, so no one thing is deterministic.  But it is very easy to test whether it is there in a manner that works with screen readers, by just using a screen reader.
> 
> The question isn't whether one particular technique is deterministic but whether the SC can be tested (automatically or by humans) in a reliable way.
> 
> 
> Another example is an SC that says contrast must be X.  There are many techniques for ensuring this, but if it is true, it is true.
> 
> 
> So the question of whether something can be ANY KIND of criterion is whether you can tell when you have met it.  Success criteria are no different (if we want to use the English definition of criteria).
> 
> To be a criterion, it must be possible to know if you have met it.  That is, it must be testable in a way that gives a reliable, repeatable, consistent result when tested by different people.
> 
> And if the criteria is to apply to all content — then it must be possible (and reasonable) and testable for all content. 
> 
> Bringing up techniques only muddies the water.    You can pass a technique and fail the SC (if there is other content on the page using another technology for example).  You can also fail a technique and pass the SC (if you met it another way). 
> 
> Don’t look to techniques to determine if something is a success criterion.
> Look to the criteria itself to see if it is 
> testable
> applicable to all types of content it is scoped to apply to 
> reasonable  
> (Requiring all web pages to be translated into sign language is not currently reasonable or even possible: there aren't enough people in the world who know sign language to convert all the pages made in a day into sign. And if there is an automatic text-to-sign-language capability, there is no need to make alternate sign language pages, because any page can be converted on the fly.)
> 
> 
> 
> Gregg
> 
> 
> 
> 
> 
> 
>> On Oct 28, 2015, at 11:15 AM, Detlev Fischer <detlev.fischer@testkreis.de> wrote:
>> 
>>> On 28.10.2015 at 16:17, Gregg Vanderheiden <gregg@raisingthefloor.org> wrote:
>>> And having testable techniques does not make up for a non-testable SC.  You need to be able to determine if the SC is met - not if a technique used for some content on the page passes.
>> 
>> The thing is that there is no single test to determine if an SC is met, nor a finite set of tests (because techniques are not required, and new techniques to account for may emerge at any time). So in my view, this implies that conformance to an SC can never be established in a deterministic, fully replicable way, because that would require a fully operationalized, completely documented test procedure that can be followed exactly by anyone.
>> 
>> I hope this does not come across as trolling. I think it is important to set realistic expectations regarding the outcome of a11y testing of complex content, and to realize that a conformance check is often not completely objective. It includes common sense judgments that take on board both quality (attributing "not ideal" content instances to either "pass" or "fail", and assessing the a11y impact of issues found) and quantity (number of issues on a particular page).
>> 
>> Sent from phone
>> 
> 

Received on Thursday, 29 October 2015 01:07:17 UTC