This very long message addresses responses to my proposal for GL 3.1 [1] by a number of people, including Lisa Seeman [2], Professor Watanabe [3], Joe Clark [4], [5], Jason White [7], Becky Gibson, Wendy Chisholm, Gregg, and Roberto Scano. Some of their responses were sent to the list; others came via last week’s Straw Poll [6]. I may have missed a few; if I haven’t responded to your concerns, please let me know and I’ll do my best.
I have also submitted a new proposal [8] which updates wording for several of the proposed SC in light of comments on the list, in the straw poll, and in last week’s call.
Responses to the proposal for GL 3.1 have identified several problems with the proposed L1 SC3. The wording I submitted last week [1] requires that a “description of the education level of the intended audience is available.”
1. As I noted in the Guide to GL 3.1 L1 SC3, and as Professor Watanabe and Lisa Seeman have pointed out, there are people with disabilities, including reading disabilities, at every education level.
2. Jason has argued that satisfying this SC in its present form would require detailed and verifiable data about the intended audience. Authors may not be able to provide these data, whether because the data are not actually available or for possible legal reasons.
3. Gregg has noted that this is primarily a “reporting” requirement. Merely reporting the expected education level of the intended audience would not affect the accessibility of the content, so the SC should be removed or rewritten so that it enhances accessibility.
A fourth problem (which hasn’t surfaced in the comments I’ve seen so far) is that the SC as written does not speak to the difficulty or ease of the Web content but rather to the education level of the audience.
These are compelling arguments. There are indeed people with disabilities at all education levels. There is also extreme variance in education levels from country to country (and from region to region within the same country—or the same city, for that matter). The education level of the population in the aggregate is not necessarily a reliable predictor of reading ability, as the US National Adult Literacy Survey and the International Adult Literacy Survey have found. These seemed to me good arguments *for* using education level as a measure. But I can no longer reconstruct my rationale for this, and that suggests that it must have been more tenuous than I thought at the time.
I’m persuaded by the points listed above that the SC as I proposed it originally doesn’t actually work. I do believe, however, that education level is the right “unit of measure,” as I’ll try to explain below. In the updated proposal [8], L1 SC3 has been reworded as follows:
<GL 3.1 L1 SC3 new>
A measure of the education level required to read the content is available.
</GL 3.1 L1 SC3 new>
The proposed change shifts the focus of the SC from the intended audience to the Web content. This should effectively address Jason’s concern (item 2 above), as well as the concern raised by Professor Watanabe and Lisa Seaman (item 1) and my own point that the SC as originally proposed is not about the Web content. (The proposed revision also makes the L1 requirement more consistent with L2 SC3 and L3 SC5 and 6.)
The proposed revision doesn’t immediately address Gregg’s point about the ineffectiveness of mere reporting requirements. However, I believe that content that satisfies the revised success criterion could support users who rely on content selection techniques, whether manual or automated, to reduce the difficulty of their reading tasks. How useful this would be might depend on the availability of user agents that support content selection and on whether the required description was provided in metadata or in the content.
Education level is a useful unit of measure for text because the results of applying readability formulas are usually expressed in terms of the education level required to read the content. As I’ve explained in the Guides and on the call, readability formulas predict how easy or difficult it will be for readers to “decode” the text—that is, to recognize individual words and sentences. This corresponds well to the impact of many reading disabilities.
(Note: even for languages for which there are no computerized readability formulas, formal education relies on careful sequencing of texts and reading tasks based on the difficulty of vocabulary and syntax. In some countries there are lists of words (or characters) that students are expected to learn at each grade level.)
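To illustrate how a readability formula yields an education-level result, here is a minimal Python sketch of the Flesch-Kincaid Grade Level formula (one of the formulas of the kind discussed above; the syllable counter is a crude heuristic, and the function names are mine, not part of any proposal):

```python
import re

def count_syllables(word):
    # Crude heuristic: count runs of consecutive vowels.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def grade_level(text):
    """Flesch-Kincaid Grade Level: an estimate of the years of
    (US) schooling needed to decode the text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)
```

Short words and short sentences produce a low grade; long, polysyllabic sentences produce a high one, which is exactly the “decoding” difficulty the formulas measure.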
Tim Boland asked (in the Straw Poll) whether there is some meaningful international vocabulary for describing education level. There is: the International Standard Classification of Education (ISCED), developed by UNESCO to enable international comparison of education systems; the OECD published a manual for implementing ISCED-97 in 1999. It seems useful for our purposes. References are available in the Guide.
Roberto Scano expressed some doubts (straw poll) about the specific education levels mentioned in L2 SC3 and L3 SC5 and 6. The L2 requirement to provide alternative versions of difficult text content is triggered by text that requires more than 9 years of schooling (education level: upper secondary). One way to satisfy the L2 SC is to provide a text summary that can be read by people with fewer than 7 years of schooling (education level: primary). The reasons for choosing these levels are explained in the Guides (available at [1]).
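To make those thresholds concrete, here is a hypothetical Python sketch of how a checker might apply them. The 9-year and 7-year constants come from the levels discussed above; the function names are illustrative only, not part of the proposal:

```python
# Thresholds taken from the discussion above (assumed, not normative).
UPPER_SECONDARY_YEARS = 9   # content above this triggers the L2 SC
PRIMARY_YEARS = 7           # a summary should read below this level

def needs_alternative(content_grade):
    """True if the main text requires more than 9 years of schooling,
    triggering the L2 alternative-version requirement."""
    return content_grade > UPPER_SECONDARY_YEARS

def summary_is_acceptable(summary_grade):
    """True if a supplied summary reads at primary level
    (fewer than 7 years of schooling)."""
    return summary_grade < PRIMARY_YEARS
```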
Lisa Seeman raises some interesting points about how to determine the appropriate “reading age”/education level. Lisa writes that pages providing critical information (e.g., about people’s civil rights, or about health and safety) should be written for a low reading age/education level no matter *where* they occur in the site hierarchy. I agree, and this suggests that we might need a different approach than the one I’ve proposed in L3 SC5 and 6, which are based on position in the hierarchical organization.
The SC proposed for GL 3.1 L2 SC1 reads as follows: "A mechanism is available for finding definitions for all words in text content."
Professor Watanabe expresses concern that the phrase “a mechanism is available” is ambiguous (likewise the substitute phrases I suggested in the Guide doc, “available to the user” or “has been implemented”). Professor Watanabe is concerned that this SC makes demands on the *author* to provide features that might better be provided by the *user agent.* Becky Gibson raised a similar concern in the straw poll.
I think the term “mechanism” is appropriate precisely because it leaves room for different ways of satisfying the requirement: it allows for the possibility that the user agent can supply the necessary functionality (for example through extensions such as the toolbars available from dictionary.com, Merriam-Webster.com, etc., where these are sufficient). Where such tools are not available, the burden would indeed fall on the author. In (X)HTML there are several useful mechanisms, including definition lists, the dfn (definition) element, and the link element (rel=”glossary”). The acronym and abbr elements are mechanisms available for L1 SC2 (which requires a mechanism for finding the meaning of acronyms and abbreviations).
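For concreteness, here is a minimal sketch of those (X)HTML mechanisms (the terms, definitions, and URI are invented for illustration):

```html
<!-- A definition list pairing terms with their definitions -->
<dl>
  <dt><dfn>straw poll</dfn></dt>
  <dd>An informal survey used to gauge the group's opinion.</dd>
</dl>

<!-- A document-wide link to a glossary (goes in the head) -->
<link rel="glossary" href="glossary.html" />

<!-- Expansions supplied for an abbreviation and an acronym -->
<p><abbr title="Success Criterion">SC</abbr> and
   <acronym title="Web Content Accessibility Guidelines">WCAG</acronym></p>
```

A user agent or assistive technology can expose the title attributes and follow the glossary link; where it doesn’t, the markup at least makes the definitions programmatically locatable.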
Joe dismisses the idea of a “dictionary cascade” (which appears in the Guide docs as a possible technique for satisfying L1 SC2, L2 SC1, and L3 SC1 and 2) as a “fantasy” and asks for the URL of an existing implementation. I am not aware of an existing implementation. I included the idea in the Guide in hopes of stimulating inventiveness somewhere out in the Web community. If no implementation appears by the time we go to Candidate Recommendation, we will need to ensure that the wording of the SC does not require the existence of such an implementation, and may need to revise the Guide accordingly.
We may want to consider finding a way to exempt content where there is no online dictionary for the primary natural language of the content. We may also want to consider exempting foreign passages (as defined in L2 SC2) from this requirement; we could also make this an L3 requirement or an advisory technique for going beyond the requirements. We may also need to say explicitly somewhere that content which doesn’t include acronyms or abbreviations (for whatever reason) satisfies L1 SC2 by default, and similarly that Sign language content satisfies L3 SC2 (pronunciation) by default. (These are concerns that Joe Clark raised.)
Professor Watanabe argues that the first three requirements under L3 should be promoted to L2 because they are equal in importance to L2 SC1. Those requirements are worded as follows in the updated proposal [8]:
<blockquote>
1. A mechanism is available for identifying specific definitions of words used in an unusual or restricted way, including idioms and jargon.
2. A mechanism is available for finding the pronunciation of all words in text content.
3. Section headings and link text are understandable when read by themselves or as a group (for example in a list of links or a table of contents).
</blockquote>
I have no objection to promoting SC1 and 2 to Level 2. In fact, the pronunciation requirement is at L2 in the current internal working draft. In the 11 February internal working draft, GL 3.1 L2 SC1 reads as follows:
<current>
The meanings and pronunciations of all words in the content can be
programmatically located.
</current>
I separated “meanings” and “pronunciations” into separate SC because they may call for very different techniques. (I also changed “meanings” to “definitions” for precision’s sake). I moved the pronunciation requirement to L3 because it also seemed to me more difficult to satisfy, but difficulty is not necessarily a good argument for this move. I also agree that pronunciation support is very important for people with certain reading disabilities, and it is essential in certain languages (such as Asian languages, Hebrew, Arabic, etc.).
Lisa Seeman comments that the proposal does not address pronunciation issues for Hebrew and Arabic. I agree that it does not do so explicitly. The proposed wording would require authors to provide pronunciation information for Hebrew and Arabic content; techniques for doing so belong in the Techniques documents and/or in the Guide.
I agree with Professor Watanabe that it makes sense to promote L3 SC3 to Level 2, on the grounds that understandable link text and section titles are vitally important. This move would also have the advantage of putting this SC at the same level as a related SC under GL 3.2 (GL 3.2 L2 SC6, which requires that the destination of any link is available in link text or is programmatically determinable). Makoto suggested in the straw poll that we promote this SC to L1, but I don’t think we can do that because it imposes constraints on the default text content.
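To illustrate the kind of link text at issue, a minimal (X)HTML sketch (the filename and wording are invented for illustration):

```html
<!-- Understandable when read by itself, e.g. in a list of links -->
<p>Read the <a href="minutes.html">minutes of the 16 June telecon</a>.</p>

<!-- Not understandable out of context -->
<p>The minutes are available <a href="minutes.html">here</a>.</p>
```

The first link still makes sense when a screen reader or user agent extracts all links into a list; the second collapses to “here.”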
Joe Clark [4], [5] has expressed very strong objections to the requirements for alternative versions in L2 SC3 and L3 SC 7, 8, and 9. He argues that these requirements infringe upon freedom of expression, especially in the case of personal Web logs and personal home pages. “Let them write any way they want,” he says.
Authors of Web logs, personal home pages, and any other content *can* “write any way they want” at Level 1. At Level 2 they can continue to “write any way they want” but must *also* do at least one other thing. They can provide a summary written in a more readable style; or, if they choose, they can provide graphical illustrations; or they can run the text through a program that will convert it to synthetic speech and save the file, which can then be made available as a link. (A number of such applications can be downloaded at no charge, and there are no license fees for making the recorded text available.) In short, at Levels 1 and 2 *there is no requirement* that authors change the way they write.
It’s only at L3 that authors are required to write in a simpler, more readable style and to provide alternative versions. At L1 and L2 there are no constraints on the difficulty of text content.
Joe raises an important question about how the proposed SC would affect literary expression. My Ph.D. is in English and American literature, and I’ve taught in the fields of literature, writing, and rhetoric for 30 years; I’ve thought very hard about these issues and have challenged poets and other literary types to explore accessibility as an aspect of creative expression and not just a set of barriers to it. [9] Joe writes:
“Other literary forms [in addition to blogs and home pages] must be similarly protected. The last thing I want is for _A Clockwork Orange_ or Kathy Acker's collected works to flunk WCAG if they ever get posted on the Web.”
I agree that literary and artistic freedom of expression should be protected. But protecting literary expression doesn’t make the printed text of A Clockwork Orange, or Finnegans Wake, or Madame Bovary, or The Nose, or Paradise Lost, or The Adventures of Huckleberry Finn, or A House for Mr. Biswas, or Don Quixote, or Labyrinths, or If on a winter’s night a traveler, etc., etc., etc., directly accessible to someone with a reading disability.
There are a number of organizations and practices that start from a different position. Recording for the Blind and Dyslexic, for example, distributes audio books to individuals who provide evidence of print-related disability. The organization was originally called Recording for the Blind and limited its services to that population. In recent years they’ve changed their name and broadened their service to include people with other print-related disabilities. I believe the same is true of the Library of Congress’ National Library Service for the Blind and Print-Handicapped. RFB&D has moved from its original long-playing records (33-1/3 rpm vinyl, invented expressly for this purpose) to cassette tape, electronic text, and, most recently, Digital Talking Books; NLS is going to DTB as well, though more slowly. At colleges and universities throughout the United States, students with documented reading disabilities are entitled to have print materials converted to alternative formats, including audio tape and, recently, DTB and other e-text formats.

Assistive products like Wynn (Freedom Scientific) and K3000 (Kurzweil) use OCR and other technologies to convert printed materials to electronic text which is then spoken through a speech synthesizer, while karaoke-style highlighting shows how the text display and the audio are synchronized. VisuAide (a Canadian company that just merged with PulseData/Humanware) manufactures Digital Talking Book players, as do a number of other vendors; these serve people with all sorts of reading disabilities who benefit from audio access to otherwise inaccessible print materials. (And Neuromancer, a 1984 novel by Kathy Acker’s cyberpunk colleague William Gibson, recently appeared on the list at Bookshare.org, another organization dedicated to providing Digital Talking Books for the benefit of readers with a variety of print disabilities.)
People with reading disabilities often need audio versions of text. That is why I’ve proposed this as an option at L2 and a requirement at L3. For authors who own the copyright of their content there are no barriers to providing an audio version. Anyone other than the copyright holder who posts copyrighted literary texts to the Web without obtaining the permission of the copyright holder is letting him- or herself in for serious legal difficulties; if the publisher owns the copyright, the publisher may also have the right to produce an audio version. (Check out www.booksontape.com and audible.com for some of the thousands of trade titles available in audio format. See Recording for the Blind & Dyslexic and Bookshare.org for titles available only to members who have provided documentation of reading disability.)
Joe also argues against the proposed SC (L3 SC9) for Sign language video of important content. Joe writes:
“I don't think I can make this any clearer: If we require sign-language translation, then we have no leg to stand on if somebody comes along and demands Ukrainian-language translation, since in both cases the "disability" being overcome is an inability to understand the source language. While this constitutes accessibility in a broad sense, it isn't the kind of accessibility we have to care about.”
There is ample precedent for such requirements in non-Web environments. In both Canada and the United States, for example, there are laws which require that Sign language interpretation of spoken-word events is provided so that people who are Deaf may access and participate in those events.
Finally, Joe also argues that there is no reason to require Sign because people who are Deaf have greater freedom to produce Web content, including Signed content, than ever before:
“The Working Group continues to ignore, but will not be able to overcome, my objection that we cannot require translations, including translations into sign language. It is quite possible for sign-language speakers to author their own pages in sign language (though getting the HTML to pass the validator is not easy). In fact, I think there are fewer barriers to self-expression in sign language than ever before: You don't need your own TV show and you don't need to mail tapes around. All you need is a camera and a server. “
It’s true that it is easier to produce Signed content than it has been in the past. But if people participating in the exchanges that Joe mentions want (or are required) to demonstrate WCAG conformance, then (in my view) it would be necessary for the content to satisfy the success criteria for the chosen conformance level, including provisions relating to text alternatives for non-text content.
Accessibility is not just a one-way street. WCAG 2.0 is not just a set of requirements imposed on people *without* disabilities to make them do things for people *with* disabilities. The same requirements apply to content authored *by* people *with* disabilities, to enable *communication*.
Lisa also notes that the proposed SC do not really address the needs of people with mild autism, Asperger’s syndrome, and other language-related disabilities that limit the ability to understand what the literature on the subject calls “implied meaning”: irony, sarcasm, puns, various emotional nuances, etc. This is correct.
Lisa has proposed various techniques for addressing these needs through markup, including both RDF and, more recently, the kind of semantic markup that Joe has been advocating, which uses (X)HTML markup coupled with CSS to describe document semantics. These ideas are worth considering. It’s not clear to me whether we would need additional success criteria; we might be able to address these issues in the context of existing SC and in the Guide.
[1] http://lists.w3.org/Archives/Public/w3c-wai-gl/2005AprJun/0368.html
[2] http://lists.w3.org/Archives/Public/w3c-wai-gl/2005AprJun/0379.html
[3] http://lists.w3.org/Archives/Public/w3c-wai-gl/2005AprJun/0479.html
[4] http://lists.w3.org/Archives/Public/w3c-wai-gl/2005AprJun/0467.html
[5] http://lists.w3.org/Archives/Public/w3c-wai-gl/2005AprJun/0536.html
[6] http://www.w3.org/2002/09/wbs/35422/meaning0516/
[7] http://lists.w3.org/Archives/Public/w3c-wai-gl/2005AprJun/0487.html
[8] http://lists.w3.org/Archives/Public/w3c-wai-gl/2005AprJun/0557.html
[9] http://www.cwrl.utexas.edu/currents/fall01/slatin/slatin.html