Re: 3.1: Proposal with links to Guide docs

First of all, I really appreciate Prof. John's great work in rewriting
GL 3.1.  The new guideline is language-independent!

I have not yet finished reading the Guide docs for GL 3.1,
but here are my brief comments on the new GL 3.1:

1) As Lisa wrote in 
http://lists.w3.org/Archives/Public/w3c-wai-gl/2005AprJun/att-0379/00-part

<blockquote>
 On the other hand, the concept of intended audience and disabilities
 is vast and tricky - someone who has a high level of education
 can still suffer a stroke that will affect basic language ability
 - but not overall intelligence.
</blockquote>
 
Is education level a good measure of readability problems?


2) L2 SC1 "A mechanism is available for finding definitions for all
words in text content."

I think the phrase "a mechanism is available" is ambiguous.
Phrases such as "mechanism has been implemented" or "is available to
the user" are also ambiguous to me, because they seem to demand that
the author provide that mechanism.

In addition, the Guide doc explains the Technology-Independent
techniques as follows:
<blockquote>
・Provide a form that searches an online dictionary in the language of
  the content.

・Provide a "dictionary cascade" to search a list of dictionaries
  and glossaries in a specified order.  This technique associates a
  list of dictionaries with a delivery unit so that users can find
  definitions for all words in the text.  The "cascade" should list
  the dictionaries in the order most likely to bring up the right
  definition. 
</blockquote>

Do authors really need to provide these functions on their Web
pages?  Or should these techniques be implemented by a user agent?
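
Just to check my reading of the "dictionary cascade" technique, here
is a minimal sketch of how such a lookup might work.  The dictionary
sources and function names below are my own assumptions for
illustration; they do not come from the Guide doc.

def cascade_lookup(word, dictionaries):
    """Return the first definition of `word` found in `dictionaries`.

    `dictionaries` is an ordered list of lookup functions -- for
    example a page glossary first, then a site glossary, then a
    general online dictionary.  Each function returns a definition
    string, or None if it does not know the word.
    """
    for lookup in dictionaries:
        definition = lookup(word)
        if definition is not None:
            return definition
    return None  # no dictionary in the cascade defines this word

If this reading is right, the cascade seems simple enough for a user
agent or a server-side service to provide, which is why I wonder
whether every author must implement it.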


3) I think L3 SC1, L3 SC2, and L3 SC3 should be L2 SCs 
because the importance of these SCs is equal to that of L2 SC1.

Or perhaps we do not need three levels but only two: a minimum level
and a level that goes beyond it.


4) The two-letter language code for Japanese is "ja".  :-)


On Wed, 4 May 2005 05:25:46 -0500 "John M Slatin" <john_slatin@austin.utexas.edu> wrote:

> The following HTML documents are attached to this message:
> A proposal for rewriting GL 3.1
> Guide docs for all but the last SC in the proposal
>  
> The Guide docs define the terms used in the SC, and also include
> Benefits and Examples (these aren't in the Guideline proposal yet, so
> you'll need to find them in the Guide for each SC; there's a link to
> each Guide under the appropriate SC).
>  
> I'll send the issue summary attached to a separate message when I get to
> the office-- forgot to send it to myself before I left yesterday
> evening.
> One of my major goals in this work on 3.1 has been to address legitimate
> concerns about the fuzziness of WCAG 1.0's guideline (14) about writing
> "clearly and simply" and the weakness and apparent arbitrariness of the
> "strategies for reducing complexity" that are currently lumped together
> under 3.1 L3 SC3-- a baggy mess that no one really wants to touch. I've
> done away with L3 SC3 and replaced it with several other SC at levels 1
> and 2 as well as level 3.
>  
> I started with the basic premise that we can't talk about "clarity" and
> "simplicity" because those aren't measurable.  The problem was then to
> find something about text that is (a) measurable and (b) meaningful with
> respect to accessibility.
>  
> Much to my surprise, I've found myself concentrating on the idea of
> measuring "readability"  and relating it to the expected education level
> of the intended audience.  Readability formulas have been around since
> the end of World War II and have been extensively discussed, twisted,
> turned-- and widely used in education, certain industries (insurance,
> for example, and public health), and by some governments.  They tend to
> be held in contempt by people with literary training like mine (which is
> why I surprise myself by dealing with this).
>  
> But they turn out to be surprisingly useful for our purposes.
> Readability formulas basically look at two things-- word length and
> sentence length.  These are treated as measures of semantic
> and syntactic complexity, respectively, and used as predictors of how
> easy or difficult a given block of text will be to read.
>  
> This is actually what makes them useful for our purposes: people with
> reading disabilities tend to have trouble "decoding" words and sentences
> (i.e., a significant amount of effort goes into word-recognition, often
> at the expense of the energy required for understanding).  And
> readability formulas appear to be good primarily for predicting how much
> effort it will take to *decode* a piece of text-- that is, to recognize
> the words.  So they may help content authors find ways to write text
> that is "readable" in this sense.  And since the results of readability
> testing are often expressed in terms of education level (for example, a
> text tests at 8th grade level, or 10th grade, or university level, or
> whatever), we can use the idea of education level as well.
>  
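(An aside from me, not part of John's message: if I understand the
kind of formula described above, a very rough sketch in Python might
look like the following.  I use the well-known Flesch-Kincaid
grade-level formula purely as an example; the choice of formula and
the naive syllable counter are my own assumptions, not part of the
proposal.

import re

def flesch_kincaid_grade(text):
    # Flesch-Kincaid grade level:
    #   0.39 * (words per sentence) + 11.8 * (syllables per word) - 15.59
    sentences = [s for s in re.split(r'[.!?]+', text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0

    def syllables(word):
        # crude approximation: each run of vowels counts as one syllable
        return max(1, len(re.findall(r'[aeiouy]+', word.lower())))

    total_syllables = sum(syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (total_syllables / len(words))
            - 15.59)

Word length and sentence length are indeed the only inputs, which
matches the description above.)
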
> I've also tried to do some work with the notion of alternative
> representations of text content-- the flip side of alt text, in a sense.
> So at levels 2 and 3 there are SC that call for graphical and/or
> spoken-word representations of information otherwise presented in text,
> and at L3 there is a success criterion that calls for signed video of
> key pages and/or passages.
>  
> If adopted as-is, the proposal would close about 25 of the 63 bugs
> listed in the issue summary.  But that's for another message.
>  
> Proposal and guides attached.
>  

-- 
WATANABE Takayuki <nabe@lab.twcu.ac.jp>
Tokyo Woman's Christian University
