- From: Makoto Ueki <makoto.ueki@gmail.com>
- Date: Tue, 16 Apr 2019 22:18:59 +0900
- To: Jeanne Spellman <jspellman@spellmanconsulting.com>
- Cc: Silver Task Force <public-silver@w3.org>
- Message-ID: <CAF9hGuaz9ufXxHX7YO-jxjrrzaPJ6DSrR7Ut7bTeaeSWz6Oq1w@mail.gmail.com>
Hi all,

Just to share what we're doing in Japan.

Japan has a national standard called "JIS X 8341-3", and the latest version is "JIS X 8341-3:2016". The JIS standard is identical to "ISO/IEC 40500:2012". This means that the normative section of "JIS X 8341-3:2016" is the Japanese translation of "WCAG 2.0", while the JIS standard also has some informative sections as appendixes, including a "Testing method".

WAIC (Web Accessibility Infrastructure Committee) was the organization that developed the draft of "JIS X 8341-3:2016". I was the chair of the working group. In addition, WAIC developed some supplementary guidelines for testing methods and conformance claims.

We needed a rule for testing large websites, because WCAG 2.0 defines a conformance claim only for individual web pages (URLs), not for an entire website. We set a rule which requires web content owners (e.g. webmasters) to select 40 web pages from the website and test those 40 pages. More than 25 of the pages must be chosen by random selection. If no issues are found within the 40 pages, they can make a conformance claim for the entire website.

This document describes the details, but it is written in Japanese. You may be able to translate it into your language by using the automatic translation feature of Google Chrome or something similar.

https://waic.jp/docs/jis2016/test-guidelines/201604/

We originally got the idea of "40 web pages" from the "Unified Web Evaluation Methodology (UWEM) 1.2" developed in the EU around 2007-2008, and we settled on 40 in consideration of man-hours and costs.

http://www.wabcluster.org/uwem1_2/

We also encourage small websites with fewer than 100 web pages to test every single page within the website.

Cheers,
Makoto

On Fri, 12 Apr 2019 at 22:26, Jeanne Spellman <jspellman@spellmanconsulting.com> wrote:

> I'm glad you brought this up. I think we need to look at all the use
> cases where organizations make a good-faith effort to make their site
> accessible and it still has problems.
> If we have a list of use cases, we can address them.
>
> "Substantially conforms" came out of the Silver research, where companies
> had a generally accessible site, but it was so large or updated so
> quickly that it wasn't possible to guarantee that it was 100%
> conformant. Facebook was an example of a site that was literally
> impossible to test because it was updated tens of thousands of times per
> second.
>
> "Tolerance" is a different concept of a less-than-ideal implementation
> but no serious barriers. I think we could collect those "less than
> ideal" examples when we write the tests for the user need. How we would
> flag them as "less than ideal" and refer people to better methods
> seems like a solvable problem.
>
> "Accessibility Supported" is another slice of this problem, where
> organizations code to the standard, but it doesn't work because of some
> bug or lack of implementation in the assistive technology. We have
> discussed noting the problem in the Method, and then tagging the Method
> for the assistive technology vendors to know they have a problem, or
> making it easy for SMEs to file bugs against the AT (or user agents, or
> platforms, etc.)
>
> Are there other use cases we should consider?
>
> On 4/10/2019 1:47 AM, Detlev Fischer wrote:
> > I think "substantially conforms" would be good to have, to reflect
> > implementation reality and reward those who work hard to get their stuff
> > accessible but have to live with some issues they cannot fully bring in
> > line, so thumbs up for this one. It is the inverse of the concept of
> > "tolerances", which has been around for some time.
> >
> > For most SCs one can describe situations where implementation is less
> > than perfect but no serious issues exist. Would it be too arbitrary to
> > collect a compendium of such cases per SC as a kind of example-based
> > benchmark (which might be regularly updated to reflect new techniques)?
> > The problem, of course, in documenting such slack is that it might invite
> > implementors to do things that they shouldn't. It might still be helpful to
> > build consensus in the WG around assessments of 'tolerance' or 'substantially
> > conforms'.
> >
> > Detlev
> >
> > Sent from phone
> >
> >> On 09.04.2019 at 17:16, Jeanne Spellman <jspellman@spellmanconsulting.com> wrote:
> >>
> >> We want to do "substantially conforms" (partial conformance is a
> >> different concept and we want to keep them separate).
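[Editor's illustration] Makoto's 40-page sampling rule above can be sketched in code. This is a minimal, hypothetical sketch, not part of the WAIC guideline itself: the function name, the `key_pages` parameter (pages the owner deliberately includes, such as the home page), and the choice of 26 as "more than 25" random picks are all assumptions for illustration.

```python
import random

def select_test_pages(all_urls, key_pages, total=40, min_random=26):
    """Hypothetical sketch of the WAIC sampling rule: test `total` pages,
    of which more than 25 (here: at least `min_random`) are chosen at
    random. `key_pages` are pages the owner deliberately includes."""
    # Small websites (fewer than 100 pages) are encouraged to test
    # every single page instead of sampling.
    if len(all_urls) < 100:
        return list(all_urls)
    # Deliberate picks are capped so that at least `min_random` slots
    # remain for random selection.
    chosen = list(key_pages)[: total - min_random]
    remaining = [u for u in all_urls if u not in chosen]
    # Fill the remaining slots by random selection without replacement.
    chosen += random.sample(remaining, total - len(chosen))
    return chosen
```

For example, on a 200-page site with 10 deliberately chosen key pages, this returns 40 distinct pages, 30 of them drawn at random; on a 50-page site it simply returns all 50 pages.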
Received on Tuesday, 16 April 2019 13:21:02 UTC