Re: Discussion 5.5

Hi Detlev, 

At this stage I was talking only about an approach to finding relevant content - defining a replicable evaluation methodology is the bigger picture into which it fits (and, I hope, our goal).

One approach to finding relevant content (as has been mentioned) might be simply to select a sample of top pages (let's say 20 - to include the home page, etc...) and then use an automated approach to find other pages which contain content relevant to each criterion being evaluated.  Just an example, but for a website this type of methodical approach would at least lead to a level of consistency - and a reduced margin of error when we say that content of type x does not exist, and that certain criteria are therefore not applicable.
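Purely as a sketch of my own, the automated step could be as simple as scanning each sampled page for markers of relevant content types and mapping those to the criteria they make applicable. The tag-to-criterion mapping below is an illustrative assumption, not an agreed list:

```python
# Illustrative sketch: scan sampled pages for content types and derive
# which success criteria are applicable. The mapping is hypothetical.
from html.parser import HTMLParser

# Hypothetical mapping of content markers to the criteria they trigger.
CONTENT_TYPE_CRITERIA = {
    "img": ["1.1.1 Non-text Content"],
    "video": ["1.2.2 Captions (Prerecorded)"],
    "form": ["3.3.2 Labels or Instructions"],
}

class ContentTypeScanner(HTMLParser):
    """Records which relevant content types occur in a page."""
    def __init__(self):
        super().__init__()
        self.found = set()

    def handle_starttag(self, tag, attrs):
        if tag in CONTENT_TYPE_CRITERIA:
            self.found.add(tag)

def applicable_criteria(pages):
    """Scan a sample of page sources; return the criteria that apply."""
    criteria = set()
    for html in pages:
        scanner = ContentTypeScanner()
        scanner.feed(html)
        for tag in scanner.found:
            criteria.update(CONTENT_TYPE_CRITERIA[tag])
    return criteria
```

A real tool would of course need to fetch pages and handle far more content types, but even this level of mechanical checking would make "no content of type x exists" a repeatable claim rather than a judgement call.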

Hope this clarifies things for you.

All the best 

Alistair

On 19 Jan 2012, at 17:48, Detlev Fischer wrote:

> On 19.01.2012 16:06, Alistair Garrison wrote:
>> Hi Eric, Eval TF,
>> 
>> If we define a methodical approach for finding relevant content -
>> two people using this approach on the same set of pages should
>> discover the same errors.
> 
> Hi Alistair,
> 
> I think that kind of replicability can only be achieved if there is a detailed description in the test procedure of what to check and in what context. This would need to include the definition of a benchmark test suite (hardware, browser(s) and version(s) used) - even, for some checks, viewport size - and checks would only be replicable (ideally) if another tester uses the same suite.
> If we abstain from that, fine, but I can't see how one might discover the same errors without being specific. Example: text zoom may work fine in one browser and viewport size, yet lead to overlaps in another setting, where all possible techniques for text resize may fail SC 1.4.4.
> 
> Or how would you go about achieving replicability? I am not sure I understand your approach.
> 
> Regards,
> Detlev
> 
>> If, after applying this methodical approach, no more relevant content can be found and there are no errors in the relevant content, then what is under test must have passed, allowing an evaluator to state that it conforms.
>> 
>> However, there is still uncertainty about any further undiscovered content - that doubt stems from how effective our methodical approach was in the first place, and how foolproof it was to implement.  Ensuring it is the best it can be is our responsibility.  I suppose an error margin might be expressed for our methodical approach - we could say that using this approach should find 95% of all content - or should it be 99%?...
>> 
>> However, an evaluator would still need to have some sort of disclaimer.
>> 
>> Thoughts to inject into the telecon.
>> 
>> Alistair
>> 
>> On 19 Jan 2012, at 15:24, Velleman, Eric wrote:
>> 
>>> This could mean that it is practically impossible to reach full conformance with WCAG 2.0... A good evaluator can always find an error somewhere, in my experience. We may have to accept that people make errors. Everything has an error margin. Even safety requirements have an error margin... Even the chip industry and LCD panels have error margins...
>>> Kindest regards,
>>> 
>>> Eric
>>> 
>>> 
>>> 
>>> ________________________________________
>>> From: Alistair Garrison [alistair.j.garrison@gmail.com]
>>> Sent: Thursday, 19 January 2012 14:19
>>> To: Velleman, Eric; Eval TF
>>> Subject: Re: Discussion 5.5
>>> 
>>> Dear Eric, Eval TF,
>>> 
>>> I vote not to allow error margins - for the reason I outlined in my previous email on this subject.
>>> 
>>> Instead, I would continue to support a simple disclaimer such as: "The evaluator has tried their hardest to minimise the margin for error by actively looking for all content, relevant to each technique being assessed, which might have caused a failure."
>>> 
>>> Occam's razor - simplest is best...
>>> 
>>> Alistair
>>> 
>>> On 19 Jan 2012, at 13:58, Velleman, Eric wrote:
>>> 
>>>> Dear all,
>>>> 
>>>> For the Telco today:
>>>> We have seen a lot of discussion on 5.5 Error Margin. As indicated in the discussion, it also depends on other things, like the size of the sample, the complexity of the website, the qualities of the evaluator, the use of tools (for collecting pages, making a first check), etc. But we need to agree on:
>>>> 
>>>> Do we allow errors or not?
>>>> 
>>>> If not, life is easy
>>>> If yes, we need to describe under what conditions
>>>> 
>>>> Kindest regards,
>>>> 
>>>> Eric
>>>> 
>>>> =========================
>>>> Eric Velleman
>>>> Technical director
>>>> Stichting Accessibility
>>>> Universiteit Twente
>>>> 
>>>> Oudenoord 325,
>>>> 3513EP Utrecht (The Netherlands);
>>>> Tel: +31 (0)30 - 2398270
>>>> www.accessibility.nl / www.wabcluster.org / www.econformance.eu /
>>>> www.game-accessibility.com/ www.eaccessplus.eu
>>>> 
>>>> Read our disclaimer: www.accessibility.nl/algemeen/disclaimer
>>>> Accessibility is a Member of the W3C
>>>> =========================
>>>> 
>>> 
>>> 
>> 
>> 
> 
> 

Received on Thursday, 19 January 2012 17:17:38 UTC