- From: Shadi Abou-Zahra <shadi@w3.org>
- Date: Thu, 22 May 2014 10:09:29 +0200
- To: Alistair Garrison <alistair.j.garrison@gmail.com>
- CC: Eval TF <public-wai-evaltf@w3.org>
Hi Alistair, On 21.5.2014 20:30, Alistair Garrison wrote: > Hi Shadi, > > Historically, I have been directly involved in 3 different groups all looking to create a methodology for WCAG. > > With regard to WCAG 1.0 the open interpretation of checkpoints led to lots of different things being assess by lots of different groups - which fragmented what people said was accessible. > > WCAG 2.0 was designed to be more testable, intrinsically I believe through the use of sufficient techniques / failure conditions (is this what you think is fundamentally incorrect?). If as you say we should be checking what we think we should each check under each success criteria, rather than being guided by the techniques (and associated checks) which developers implement - don't we run a risk of being back at square WCAG 1.0… One of the main improvements in WCAG 2.0 is the formulation of Success Criteria to be testable statements (among other important improvements such as being more technology-agnostic, device independent, etc.). For example, WCAG 1.0 required that "color combinations provide sufficient contrast" without defining what "sufficient" is. WCAG 2.0 provides an algorithm so that the success criteria can be either met or not met. The techniques and failures are incredibly useful and also important resources. Techniques document *some* of the known ways for meeting success criteria and failures document *some* of the common situations in which content does not meet success criteria. They are by design non-exhaustive and non-exclusive. That is, there is no requirement to use documented techniques. The requirement is to meet the criteria. > Next, from a recent proposal for a directive of the European Parliament and of the Council on the accessibility of public sector bodies' websites - I note that states will be responsible for - "Designating a competent authority in each Member State as the enforcement body would be an adequate way to ensure that the conformity with web accessibility requirements is monitored and rigorously enforced"… I note "competent authority" and "conformity with web accessibility requirements". The techniques and failures are not requirements but (non-normative) helpers. EN 301 549 does not include techniques. > My worry is that by not including a clause as I proposed developers will ultimately be constrained into developing web content which ticks the accessibility boxes of the monitors (who will most definitely rely on the use a limited set of tests in a tool to evaluate [e.g. Automated WCAG Monitoring Community Group], as they will not have time to assess websites on a page by page basis). If the monitors are not checking "conformity with web accessibility requirements" then they are not addressing the proposed directive. Also, I don't really see it happening that developers will publish statements that associate techniques with every piece of content for entire websites. It would also limit developers from using the latest web technologies, such as HTML5, that have significant accessibility improvements because they do not yet have documented techniques. Best, Shadi > Very best regards > > Alistair > > > On 21 May 2014, at 18:34, Shadi Abou-Zahra wrote: > >> Hi Alistair, >> >> A fundamental problem in your scenario is that you are checking for techniques rather than success criteria, which is the actual WCAG 2.0 requirement. There is no WCAG 2.0 requirement for "scope" or even "table headers" for that matter. The requirement is "Information, structure, and relationships conveyed through presentation can be programmatically determined or are available in text" (SC 1.3.1). >> >> An evaluator needs to have the expertise to assess that. We tried to reflect that in section "Required Expertise": >> - http://www.w3.org/TR/WCAG-EM/#expertise >> >> So, detailed evaluation statements like the one you describe should be extremely helpful for evaluators, even if just to show them how the developers worked. However, evaluators are responsible for their own determinations. So, unless they have some way of identifying how reliable the evaluation statement is, they would need to be cautious. >> >> I think this could be a helpful not directly under step 2 - something along the lines of "use whatever information you can get about the website, and evaluation statements can be gold mines where available". >> >> Best, >> Shadi >> >> >> On 19.5.2014 12:14, Alistair Garrison wrote: >>> Dear All, >>> >>> Let's say I'm a website owner and I follow specific techniques in order to create a website which, as far as I know, is compliant. I make a claim to this effect, after evaluating the website and creating an evaluation statement (which details which checks I have done, from the techniques I have followed). >>> >>> Now let's say a national monitoring project checks my website, but their conformance model is based on a different set of techniques, and of course checks. >>> >>> After their checks the monitoring project say that according to their evaluation my website fails - maybe one or two things. Really only due to the mismatch between techniques e.g. I used the headers attribute technique to make tables compliant, whilst they only checked for scope, etc… >>> >>> Would I have to change my website? This seems a little silly… >>> >>> My question is then - what happens when a proper evaluation statement already exists? >>> >>> I think we have spoken around this subject a number of times, but there is no clear advice that I can find in the methodology. >>> >>> My hope would be that if the evaluation statement has been made properly and is up-to-date, any further evaluations undertaken (with or without the knowledge of the website owner) should in some way respect the techniques followed and the way in which the page has been evaluated - especially if the further evaluations are to be used for monitoring, etc… >>> >>> Otherwise, I worry that the techniques and their checks selected in outside evaluations, especially monitoring evaluations, will start to constrain the techniques which developers can follow - which is at odds with WCAG 2.0. >>> >>> I would suggest that the most likely area for discussion on this subject is step 1.d. >>> >>> Anyway, interested to hear your thoughts and comments. >>> >>> All the best >>> >>> Alistair >>> >>> >>> >>> >> >> -- >> Shadi Abou-Zahra - http://www.w3.org/People/shadi/ >> Activity Lead, W3C/WAI International Program Office >> Evaluation and Repair Tools Working Group (ERT WG) >> Research and Development Working Group (RDWG) > > > -- Shadi Abou-Zahra - http://www.w3.org/People/shadi/ Activity Lead, W3C/WAI International Program Office Evaluation and Repair Tools Working Group (ERT WG) Research and Development Working Group (RDWG)
Received on Thursday, 22 May 2014 08:10:00 UTC