RE: Requirements draft - objectivity

Hi all,
Objectivity seems like a very difficult concept to define. Is it not already covered by R04, Reliable and replicable?

The Methodology should also cover the question: is one missing or broken ALT attribute enough to fail a webpage, or even a whole website? Or a heading structure that is not entirely correct on one webpage? I think this is covered by R14 Tolerance Metrics, but I am not sure if you mean something else.
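
Just to illustrate one possible reading of R14 (purely an illustration, not a proposal): a tolerance could be expressed as a per-SC failure ratio per page, rather than a strict fail on the first defect. In the little Python sketch below, the function name and the 5% threshold are placeholders I made up; they are not taken from the draft:

    # Hypothetical sketch of a tolerance rule: instead of failing a page on the
    # first defect, the failure ratio per Success Criterion is compared against
    # an agreed tolerance. Names and the 5% threshold are illustrative only.
    def sc_passes(total_instances: int, failed_instances: int,
                  tolerance: float = 0.05) -> bool:
        """True if the failure ratio for one SC on one page stays within tolerance."""
        if total_instances == 0:
            return True  # the SC is not applicable on this page
        return (failed_instances / total_instances) <= tolerance

    # Example: one missing alt attribute out of 40 images on a page
    print(sc_passes(total_instances=40, failed_instances=1))  # True  (2.5% <= 5%)
    print(sc_passes(total_instances=10, failed_instances=1))  # False (10% > 5%)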

We definitely also need a process for testing the Methodology, and I agree that a test design / controlled test would be a good way to do that. In the Wabcluster we had different organisations evaluate three websites (small, medium and very large). We did not create a formal test design, but used commonly available websites and synchronised the work (in time and content). Would that be a good approach?

Kindest regards and hope to speak to you all this afternoon,
Keep discussing!

Eric

________________________________________
From: public-wai-evaltf-request@w3.org [public-wai-evaltf-request@w3.org] on behalf of Kerstin Probiesch [k.probiesch@googlemail.com]
Sent: Thursday, 15 September 2011 10:24
To: Detlev Fischer
CC: public-wai-evaltf@w3.org
Subject: Re: Requirements draft - objectivity

Hi Detlev, all,

Because one cannot be sure of 100 percent objectivity, the test design should be a controlled one. In our case - we have not yet decided on the approach - this could be done, for example, via the number of pages or the number of pages per SC, and also via other descriptions of the testing procedures.

Best

Kerstin

Via Mobile

Am 15.09.2011 um 07:39 schrieb Detlev Fischer <fischer@dias.de>:

> Quoting Kerstin Probiesch <k.probiesch@googlemail.com>:
>
>> Central question:
>>
>> Do we want a tester to be able to manipulate the results?
>
> DF: Of course not, but this cannot be ensured by objectivity (whatever that would mean in practice), only by some measure of quality control: a second tester or independent verification of results (and also verification of the adequacy of the page sample).
>>
>> I don't mean the case where something was overlooked, but the case where something was deliberately overlooked. Or the other way round.
>
> DF: Well, if someone wants to distort results there will probably always be ways to do that; I would not start from that assumption. Is one imperfect or missing alt attribute TRUE or FALSE for SC 1.1.1 applied to the entire page? What about a less than perfect heading structure? Etc., etc. There is, "objectively", always leeway and room for interpretation, and I think we unfortunately DO need agreement with reference to cases / examples that set out a model for how they should be rated.
>>
>> If not, we need objectivity as a requirement. Mere agreement on something is not enough.
>
> DF: Can you explain what in your view the requirement of "objectivity" should entail *in practice*, as part of the test procedure the methodology defines?
>
>>
>> And again: No Objectivity - no standardized methodology.
>>
>> Kerstin
>>
>>
>>
>>
>>
>> Via Mobile
>>
>> Am 14.09.2011 um 12:09 schrieb Detlev Fischer <fischer@dias.de>:
>>
>>> DF: Just one point on objective, objectivity:
>>> This is not an easy concept - it relies on a proof protocol. For example, you would *map* a page instance tested to a documented inventory of model cases to establish how you should rate it against a particular SC. Often this is easy, but there are many "not ideal" cases to be dealt with.
>>> So "objective" sounds nice but it does not remove the problem that there will be cases that do not fit the protocol, at which point a human (or group, community) will have to make an informed mapping decision or extend the protocol to include the new instance. I think "agreed interpretation" hits it nicely because there is the community element in it which is quite central to WCAG 2.0 (think of defining accessibility support)
>>>
>>> Regards,
>>> Detlev
>>>
>>>>
>>>> Comment (KP): I understand Denis' arguments. The more I think about this:
>>>> neither "unique interpretation" nor "agreed interpretation" works very well.
>>>> I would like to suggest "Objective", for the following reason: it would be
>>>> one of the criteria for the quality of tests and includes execution
>>>> objectivity, analysis objectivity and interpretation objectivity. If we
>>>> reach 100 percent in some cases, fine; if not, we can discuss the "tolerance".
>>>> I would suggest:
>>>>
>>>> (VC)  I'm still contemplating this one.  I can see both arguments as plausible.
>>>> I'm okay with 'objectivity' but think it needs more explanation i.e. who defines
>>>> how objective it is?
>>>>
>>>
>>
>>
>
>
>
> --
> ---------------------------------------------------------------
> Detlev Fischer PhD
> DIAS GmbH - Daten, Informationssysteme und Analysen im Sozialen
> Geschäftsführung: Thomas Lilienthal, Michael Zapp
>
> Telefon: +49-40-43 18 75-25
> Mobile: +49-157 7-170 73 84
> Fax: +49-40-43 18 75-19
> E-Mail: fischer@dias.de
>
> Anschrift: Schulterblatt 36, D-20357 Hamburg
> Amtsgericht Hamburg HRB 58 167
> Geschäftsführer: Thomas Lilienthal, Michael Zapp
> ---------------------------------------------------------------
>


