
RE: [Techniques] Draft General Technique for GL 3.1 L2 SC1

From: Gregg Vanderheiden <gv@trace.wisc.edu>
Date: Tue, 28 Dec 2004 12:40:35 -0600
To: "'John M Slatin'" <john_slatin@austin.utexas.edu>, <w3c-wai-gl@w3.org>
Message-ID: <auto-000198992846@spamarrest.com>
Hi John,
For ease, I will just put numbers below in brackets and put the answers up
here.
1) Yes - this whole guideline (and all of the programmatically located
provisions) is based on the cascading dictionary concept (or a comprehensive
dictionary). When we put these in, we said that they could stay only as
long as we were able to solve that. But if we aren't, then we need to
remove them -- not change them to programmatically determined. That has
not been discussed at all.
2) RE the user agent issue. There is a user agent component to this, in that
the user agent would have to do something with the information provided.
But user agents cannot distinguish between different definitions in
different locations or provide definitions for domain-specific words like
"StickyKeys" etc. The author would have to provide pointers to sources of
definitions and pronunciations.
3) RE the comment regarding pronunciation being in the SC. The question
wasn't whether pronunciation was in the SC. You are correct - it was. But
it was only programmatically located - not programmatically determined.
That was the problem.
4) We could separate pronunciation and definition. That wouldn't make sense
at Level 1, but it could at Level 2 if we allow partial declarations, which
I think we will. They would mostly be solved with the same mechanism in
most cases -- but not necessarily all.
5)   RE 'Phrase" vs "run of text".   The problem is that "run of text"
includes both words and phrases.  the SC only covers words.  so the Phrases
part of 'run of text' is the part that is beyond the SC.
6) You are correct about Japanese. They don't really have what we call
'words', though they do have characters and sentences. Hmmm. Interesting
problem. Also, in Dutch we have an ambiguity, since they create words out
of other words. Too much to think about between holidays!!
John - you are a slave driver.


 -- ------------------------------ 
Gregg C Vanderheiden Ph.D. 
Professor - Ind. Engr. & BioMed Engr.
Director - Trace R & D Center 
University of Wisconsin-Madison 



From: John M Slatin [mailto:john_slatin@austin.utexas.edu] 
Sent: Tuesday, December 28, 2004 11:45 AM
To: Gregg Vanderheiden; w3c-wai-gl@w3.org
Subject: RE: [Techniques] Draft General Technique for GL 3.1 L2 SC1

Gregg wrote:


I think the techniques here, though, differ from the SC that they are tied
to.


The proposed text is both more and less than the success criterion cited.


It is more in that the SC only requires that pronunciation be locatable. If
there are multiple pronunciations, the guideline does not require you to
say which one is appropriate (that would be 'programmatically determined').
So much of the text below does not apply. At least not for this SC.

I'd be grateful to anyone who can give us a technique for programmatically
locating pronunciation information for all words in the content!
I suppose one technique could be to write a script that automatically
submits a user-selected word to an online dictionary and goes directly to
the section of the dictionary entry that describes pronunciation-- like the
way the Valid HTML icon sends the page to the HTML Validation Service.
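
As a rough illustration only -- a minimal sketch, assuming a hypothetical
dictionary site and query parameter rather than any real service -- such a
lookup might be little more than a form that hands the selected word off to
the dictionary's search page:

  <!-- Hypothetical lookup form; "dictionary.example.com" and its "word"
       parameter are placeholders, not a real service. -->
  <form method="get" action="http://dictionary.example.com/lookup">
    <label for="word">Look up pronunciation for:</label>
    <input type="text" name="word" id="word" />
    <input type="submit" value="Find" />
  </form>

A small script could pre-fill the field with the user's current selection;
the point is that the author points to an external source of pronunciations
rather than marking up every word by hand.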
Some of the online dictionaries offer free toolbars that allow the user to
key or paste in a word which is then automatically submitted to that
particular dictionary site. Does the availability of such tools make this
criterion unnecessary? (This goes back to a question Wendy raised several
months ago when she asked if this was a user agent issue and not a content
issue.)


Gregg continues:


The Level 3 success criterion for this guideline does specify that the
meaning should be indicated, but not the pronunciation. But that would not
go here for this Level 2 SC, and it is meaning - not pronunciation (though
you might be able to work backward).
[jms]


I don't understand. Level 2 Success Criterion 1, as it appears in the 19
November public working draft, specifically talks about meanings and
pronunciations:

The meanings and pronunciations of all words in the content can be
programmatically located.




I'm not certain, but it might be a good idea to separate meanings and
pronunciations into separate success criteria (i.e., modify the guideline)
so we can clarify exactly which techniques are appropriate/required for
pronunciation and which are appropriate/required for meaning.




[jms] Gregg again:

It is also more since it requires this for phrases. But the SC under
consideration is only for words.


 Sorry, Gregg, but I don't see where the draft below imposes requirements
for "phrases."  It does talk about a "run of text," a phrase I lifted
directly from the Ruby Annotation specification
(http://www.w3.org/tr/ruby/).  A run of text may be a single word, or, in
East Asian languages, a single character; it *could* be a phrase, I guess,
but I was careful not to use that word in the draft technique.  (I did use
the word "phrase" in another draft technique about language-changes, so
maybe that's what you're thinking of?) 




[jms] Gregg again:

It is less than the guideline because it says "where meaning depends on
pronunciation."

Yet this SC does not restrict itself to any particular words or phrases.  It
requires that all words be locatable. 



You're right; thanks. [jms]  It occurs to me that this SC, as written, may
be problematic for languages like Japanese, where it may be necessary to
provide information about how specific *characters* should be pronounced. So
here again we may need to revise the success criterion to make room for
languages that operate very differently from English (or other languages
that use a Roman alphabet, etc.).




 Gregg again:


Remember - programmatically locatable just means that you get a list of
meanings and the correct one is there somewhere (like when you look a word
up in a dictionary).


You need 'programmatically determined' to have it tell you which meaning (or
pronunciation) is the correct one of the bunch. 


I understand. Frankly I don't know how to make this sort of information
programmatically locatable, and I put this out here in the hope that some
brilliant person on the list would offer a solution (and write up the
technique!) <grin>



Thanks again

[jms]

You're welcome! 
[jms] John 
 -- ------------------------------ 
Gregg C Vanderheiden Ph.D. 
Professor - Ind. Engr. & BioMed Engr.
Director - Trace R & D Center 
University of Wisconsin-Madison 


From: w3c-wai-gl-request@w3.org [mailto:w3c-wai-gl-request@w3.org] On Behalf
Of John M Slatin
Sent: Monday, December 27, 2004 5:40 PM
To: w3c-wai-gl@w3.org
Subject: [Techniques] Draft General Technique for GL 3.1 L2 SC1


The proposal below is part of the first draft of material for the General
Techniques for Guideline 3.1, the guideline that contains some key
requirements about language use. It's my hope that this material can be
included in the next internal working draft, and that it will eventually
make its way -- duly modified and corrected -- into the next public working
draft.

Guideline 3.1 L2 SC1 requires:


The meanings and pronunciations of all words in the content can be
programmatically located. 



It would be very helpful if people with knowledge of writing systems for
languages that do not use Roman or romanized alphabets would review and make
suggestions for corrections, additions, deletions, etc.




Short name for this technique: Pronunciation for users

Information about the pronunciation of a run of text is explicitly
associated with the run of text where meaning depends on pronunciation.


There are many languages in which a run of text may mean different things
depending on how the text is pronounced. This is common in East Asian
languages as well as Hebrew, Arabic, and other languages; it also occurs in
English and other Western European languages. Users with disabilities that
make it difficult to use contextual cues as a guide to pronunciation and
meaning benefit when information about how to pronounce potentially
ambiguous text is explicitly provided.

Techniques for associating content with information about pronunciation
vary depending upon the type and language of the content. For example, Ruby
Annotation is appropriate for indicating pronunciation in some languages,
such as Japanese, Chinese, and Korean. However, Ruby may be unnecessary in
languages where Unicode fonts can include diacritical marks that convey
pronunciation.
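
For instance (a minimal sketch; the Hebrew word is only an illustration),
vowel points can be encoded directly in the text with Unicode character
references, so no Ruby markup is needed:

  <!-- Hebrew "shalom" with niqqud (vowel points); the pronunciation
       marks travel with the text itself. -->
  <p lang="he" dir="rtl">&#x05E9;&#x05B8;&#x05C1;&#x05DC;&#x05D5;&#x05B9;&#x05DD;</p>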


Ruby Annotation allows the author to annotate a "base text," providing both
a guide to pronunciation and, in some cases, a definition as well. Ruby is
used for text in Japanese and other East Asian languages. Ruby Annotation
is defined as a module for XHTML 1.1.


There are two types of Ruby markup: simple and complex. Simple Ruby markup
applies to a run of text such as a complete word or phrase. This is known
as the "base" text. The Ruby annotation that indicates how to pronounce the
term is usually displayed immediately before the base text, and is shown in
a smaller font. (The term "Ruby" is derived from a small font used for this
purpose in printed texts.) Simple Ruby markup also provides a "fallback"
option for user agents that do not support Ruby markup.
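
A minimal example of simple Ruby markup, following the pattern in the Ruby
Annotation specification (the base text is the Japanese name for "Tokyo,"
with its kana reading as the annotation; the rp elements wrap the
annotation in parentheses as a fallback for user agents that do not render
Ruby):

  <ruby>
    <rb>東京</rb>
    <rp>(</rp><rt>とうきょう</rt><rp>)</rp>
  </ruby>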


Complex Ruby markup makes it possible to associate a single base text with
more than one annotation. In such cases, the first annotation would
typically indicate pronunciation and the second would provide the meaning.
Complex Ruby markup also makes it possible to divide the base text into
smaller units, each of which may be associated with a separate Ruby
annotation. Complex Ruby markup does not support the fallback option.
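
A sketch of complex Ruby markup (base text and glosses are illustrative):
two rtc containers attach two annotations to the same base text, the first
giving the pronunciation of each character and the second, using rbspan,
giving a single meaning that spans both:

  <ruby>
    <rbc>
      <rb>東</rb><rb>京</rb>
    </rbc>
    <rtc>
      <rt>とう</rt><rt>きょう</rt>
    </rtc>
    <rtc>
      <rt rbspan="2">Tokyo</rt>
    </rtc>
  </ruby>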


Note: The primary reason for indicating pronunciation through Ruby or any
other means is to make the content accessible to people with disabilities
who can read and understand the language of the content if information
about pronunciation is provided. Creating an explicit association between
the content and the pronunciation information ensures that pronunciation
information remains available if the presentation format is adapted to meet
the user's needs.


Editor's note: Complex Ruby markup may be sufficient to satisfy this
success criterion when pronunciation and meaning are provided in separate
annotations of the same base text.


Editor's Note: As an additional benefit, it has also been suggested that
Ruby Annotation might be used to make content accessible to people who use
symbol languages together with or as an alternative to conventional text.
For example, a symbol image could be used as a Ruby annotation above a base
text. Such practices might benefit people whose speech or reading are
impaired as the result of stroke or other injury to the brain, or from
other causes. (See A. Judson, M. Lundalv, B. Farre, and L. Nordberg,
Concept Coding Framework:
http://dewey.computing.dundee.ac.uk/ccf/cop/#d0e876)
However, the Ruby 1.0 Specification does not support use of images, so
implementation of this suggestion would depend upon a change in the Ruby
specification.
<a href="http://www.w3.org/TR/ruby/" <http://www.w3.org/TR/ruby/> >Ruby
<a href="http://ncam.wgbh.org/salt/guidelines/sec11.html"
<http://ncam.wgbh.org/salt/guidelines/sec11.html> >IMS Guidelines for 


Topic-Specific Accessibility</a>
HTML Techniques


<http://www.w3.org/TR/WCAG20-HTML-TECHS/#lang-att_change> >Identifyin


g language changes</a>
CSS Techniques
<a href="http://www.w3.org/TR/css3-ruby" <http://www.w3.org/TR/css3-ruby>
>CSS 3 Ruby</a>



"Good design is accessible design."

Dr. John M. Slatin, Director 
Accessibility Institute
University of Texas at Austin 
FAC 248C 
1 University Station G9600 
Austin, TX 78712 
ph 512-495-4288, fax 512-495-4524 
email jslatin@mail.utexas.edu 
Web: http://www.utexas.edu/research/accessibility

Received on Tuesday, 28 December 2004 18:41:02 UTC
