- From: Velleman, Eric <evelleman@bartimeus.nl>
- Date: Tue, 13 Sep 2011 12:46:43 +0000
- To: "public-wai-evaltf@w3.org" <public-wai-evaltf@w3.org>
Dear all,

I agree with Shadi: it is the wording that we have to work on to better explain what we mean, so there is no reason to hold back in the discussion. R03 (Unique interpretation) and R04 (Replicability) are not the easiest requirements for a methodology, but it would be good to agree on the wording before we start writing the Methodology itself.

In my opinion R03 is less important than R04. R03 dates from the time when there was much more room for interpretation inside WCAG 1.0. WCAG 2.0 leaves less room, and more and more work is being done to build up important knowledge about accessibility support.

For a single webpage R04 is not really a problem. But for complete websites it depends on different factors, including the size of the sample, the scope, tolerance levels for failures, etc. We should cover this in the Methodology, and it should be measurable, as Detlev proposes (by doing a test round on one or more sites).

Same results
In my experience it is entirely possible to get the same results from different evaluators looking at the same website using a sampled set of webpages (a random sample plus a targeted sample), even if the evaluators make their own sample. I notice that small differences start to occur when there is a given tolerance for failures; the discussion is then about the impact of the failure. There is also a need for more clarity on what is 'accessibility supported' and what is not (this probably differs per country). I think we should also cover this in the Methodology and thus make R04 feasible.

In my opinion it is not necessary to include the claim of replication, and to test for it, with every evaluation. In the WabCluster they tested the UWEM 1.2 Methodology (for WCAG 1.0) with a number of European evaluation organizations, and the checklists showed the same passes and fails even though the flexibility of that Methodology was enormous.

Kindest regards,

Eric

=========================
Eric Velleman
Technical Director
Stichting Accessibility
Universiteit Twente

Oudenoord 325, 3513EP Utrecht (The Netherlands); Tel: +31 (0)30 - 2398270

www.accessibility.nl / www.wabcluster.org / www.econformance.eu / www.game-accessibility.com / www.eaccessplus.eu

Read our disclaimer: www.accessibility.nl/algemeen/disclaimer
Accessibility is a Member of the W3C
=========================

________________________________________
From: public-wai-evaltf-request@w3.org [public-wai-evaltf-request@w3.org] on behalf of Shadi Abou-Zahra [shadi@w3.org]
Sent: Tuesday, 13 September 2011 9:50
To: public-wai-evaltf@w3.org
Subject: Re: Do we share an understanding of "requirement"?

Hi all,

This is a useful discussion that both Eric and I have been watching closely; I do not suggest anyone be shtum (quite the contrary).

I also do not think it is a matter of dropping or keeping R03 and R04 but of finding a wording that better explains what we essentially mean. It seems that there is general agreement that we want less ambiguity and a higher degree of replicability, but that there are no absolutes in this endeavor. I hope we can find a wording along these lines.

Best,
  Shadi

On 13.9.2011 09:35, Detlev Fischer wrote:
> Hi everyone,
>
> I am getting quite concerned myself now, so please forgive me if I break
> my promise to “stay shtum” to kick off a discussion about what we mean
> when we are using the term *requirement*.
>
> 1) Do we agree that we should not include requirements for
> attributes which we have not shown to be *feasible*?
>
> 2) Do we agree that a requirement identifies a *necessary* attribute,
> capability, characteristic, or quality of a system in order for
> it to have value and utility to a user?
>
> 3) Do we further agree that requirements should be *verifiable*, i.e.
> that tests can eventually prove that the thing built (our
> methodology, in this case) meets the requirements we have specified?
>
> If we agree on these three points (and I hope we do), then R03: Unique
> interpretation and R04: Replicability should first of all be feasible;
> they should be shown to be necessary (e.g., the methodology would have
> reduced credibility without them); finally, they should also be
> verifiable (e.g. replicability and uniqueness of interpretation can be
> proven in independent tests of real-world sites).
>
> If you agree so far, where do we stand on this?
>
> *Feasible:* I have not read a single statement on this mailing list so
> far that has offered any evidence that replicability and unique
> (unambiguous) interpretation are feasible - especially if the
> methodology stays on a fairly generic level (i.e., if it does not
> prescribe the tools to be used, a step-by-step procedure, and detailed
> instructions for evaluating test results).
>
> *Verifiable:* We do not know yet; we have not built anything so far that
> we could use to carry out tests independently and then compare results.
> So let's move on to the second-best option, the various methods we
> currently use. I would ask all of you to report on any tests that were
> carried out by two independent testers and arrived at the same result.
> No one has come forward and claimed it has happened, or even that it
> can be done.
>
> *Necessary:* Some of you may believe that replicability and uniqueness
> of interpretation are necessary because the methodology would be less
> credible without them. But unless the methodology mandates that tests
> are actually replicated, the claim of replicability is just a red
> herring. I think that any claims that cannot be verified in practical
> application seriously undermine the credibility of a methodology.
>
> Detlev

--
Shadi Abou-Zahra - http://www.w3.org/People/shadi/
Activity Lead, W3C/WAI International Program Office
Evaluation and Repair Tools Working Group (ERT WG)
Research and Development Working Group (RDWG)
Received on Tuesday, 13 September 2011 12:45:13 UTC