RE: [Techniques] Draft General Technique for GL 3.1 L2 SC1

Gregg wrote:
<blockquote>

 

I think the techniques here, though, differ from the SC that they are
tied to.

 

The proposed text is both more and less than what the cited Success
Criterion requires.

 

It is more in that the SC only requires that pronunciation be locatable.
If there are multiple pronunciations, the guideline does not require
you to say which one is appropriate (that would be 'programmatically
determined'). So much of the text below does not apply, at least not
for this SC.
</blockquote>


I'd be grateful to anyone who can give us a technique for
programmatically locating pronunciation information for all words in the
content!
I suppose one technique could be to write a script that automatically
submits a user-selected word to an online dictionary and goes directly
to the section of the dictionary entry that describes pronunciation,
much as the Valid HTML icon sends the page to the HTML Validation
Service.
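
A rough sketch of the idea (the dictionary site and its "word" query
parameter below are invented for illustration, not a real service):

<!-- Hypothetical lookup form. The action URL and the "word" parameter
     are made up for this sketch; a real technique would target an
     actual dictionary service's query interface. -->
<form method="get" action="http://dictionary.example.com/lookup">
  <label for="word">Word to look up:</label>
  <input type="text" name="word" id="word" />
  <input type="submit" value="Find pronunciation" />
</form>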
 
Some of the online dictionaries offer free toolbars that allow the user
to key or paste in a word, which is then automatically submitted to that
particular dictionary site. Does the availability of such tools make
this criterion unnecessary? (This goes back to a question Wendy raised
several months ago when she asked if this was a user agent issue and not
a content issue.)

Gregg continues:

<blockquote>

The Level 3 success criterion for this guideline does specify that the
meaning should be indicated, but not the pronunciation. But that would
not go here for this Level 2 SC, and it is meaning, not pronunciation
(though you might be able to work backward).
</blockquote>

 

I don't understand. Level 2 Success Criterion 1, as it appears in the 19
November public working draft,  specifically talks about meanings and
pronunciations:

<current>

The meanings and pronunciations of all words in the content can be
programmatically located.
</current>

 

I'm not certain, but it might be a good idea to split meanings and
pronunciations into separate success criteria (i.e., modify the
guideline) so we can clarify exactly which techniques are
appropriate/required for pronunciation and which are
appropriate/required for meaning.

 

[jms] Gregg again:

<blockquote>
It is also more since it requires this for phrases. But the SC in
consideration is only for words.
</blockquote>

Sorry, Gregg, but I don't see where the draft below imposes
requirements for "phrases." It does talk about a "run of text," a
phrase I lifted directly from the Ruby Annotation specification
(http://www.w3.org/TR/ruby/). A run of text may be a single word or,
in East Asian languages, a single character; it *could* be a phrase, I
guess, but I was careful not to use that word in the draft technique.
(I did use the word "phrase" in another draft technique about
language changes, so maybe that's what you're thinking of?)

[jms] Gregg again:

<blockquote>
It is less than the guideline because it says "where meaning depends on
pronunciation."

Yet this SC does not restrict itself to any particular words or phrases.
It requires that all words be locatable.
</blockquote>

 

[jms] You're right; thanks. It occurs to me that this SC, as written,
may be problematic for languages like Japanese, where it may be
necessary to provide information about how specific *characters* should
be pronounced. So here again we may need to revise the success criterion
to make room for languages that operate very differently from English
(or other languages that use a Roman alphabet, etc.).

 

 Gregg again:

<blockquote>

Remember, programmatically locatable just means that you get a list of
meanings and the correct one is there somewhere (like when you look a
word up in a dictionary).

You need 'programmatically determined' to have it tell you which meaning
(or pronunciation) is the correct one of the bunch.

</blockquote>

I understand. Frankly, I don't know how to make this sort of information
programmatically locatable, and I put this out here in the hope that
some brilliant person on the list would offer a solution (and write up
the technique!) <grin>

 

<blockquote>
Thanks again

Gregg
-- ------------------------------
Gregg C Vanderheiden Ph.D.
Professor - Ind. Engr. & BioMed Engr.
Director - Trace R & D Center
University of Wisconsin-Madison
</blockquote>

[jms] You're welcome!
John


  _____  


From: w3c-wai-gl-request@w3.org [mailto:w3c-wai-gl-request@w3.org] On
Behalf Of John M Slatin
Sent: Monday, December 27, 2004 5:40 PM
To: w3c-wai-gl@w3.org
Subject: [Techniques] Draft General Technique for GL 3.1 L2 SC1

 

The proposal below is part of the first draft of material for the
General Techniques for Guideline 3.1, the guideline that contains some
key requirements about language use. It's my hope that this material can
be included in the next internal working draft, and that it will
eventually make its way, duly modified and corrected, into the next
public working draft.

 

Guideline 3.1 L2 SC1 requires:

<current>

The meanings and pronunciations of all words in the content can be
programmatically located. 

</current>

 

It would be very helpful if people with knowledge of writing systems for
languages that do not use Roman or romanized alphabets would review and
make suggestions for corrections, additions, deletions, etc.

 

<proposed>

 

Short-name for this technique:
Pronunciation for users

Task
Information about the pronunciation of a run of text is explicitly
associated with the run of text where meaning depends on pronunciation.

 

Description
There are many languages in which a run of text may mean different
things depending on how the text is pronounced. This is common in East
Asian languages as well as Hebrew, Arabic, and other languages; it also
occurs in English and other Western European languages. Users with
disabilities that make it difficult to use contextual cues as a guide
to pronunciation and meaning benefit when information about how to
pronounce potentially ambiguous text is available.

 

Techniques for associating content with information about pronunciation
vary depending upon the type and language of the content. For example,
Ruby Annotation is appropriate for indicating pronunciation in some
languages, such as Japanese, Chinese, and Korean. However, Ruby may be
unnecessary in languages where Unicode fonts can include diacritical
marks that convey pronunciation.
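
For illustration (my own fragment, not taken from any specification): a
Hebrew word can carry its pronunciation directly in the text when the
niqqud (vowel points) are included as Unicode combining characters, with
no extra markup at all:

<!-- The word "Ivrit" (Hebrew), first as bare consonants and then with
     niqqud added as combining characters (shown here as numeric
     character references) so the vowels are explicit in the text. -->
<p lang="he">&#x05E2;&#x05D1;&#x05E8;&#x05D9;&#x05EA;</p>
<p lang="he">&#x05E2;&#x05B4;&#x05D1;&#x05B0;&#x05E8;&#x05B4;&#x05D9;&#x05EA;</p>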

 

Ruby Annotation allows the author to annotate a "base text," providing
both a guide to pronunciation and, in some cases, a definition as well.
Ruby is commonly used for text in Japanese and other East Asian
languages. Ruby Annotation is defined as a module for XHTML 1.1.

 

There are two types of Ruby markup: simple and complex. Simple Ruby
markup applies to a run of text such as a complete word or phrase. This
is known as the "base" text. The Ruby annotation that indicates how to
pronounce the term is usually displayed immediately above the base text
and is shown in a smaller font. (The term "Ruby" is derived from the
name of a small font used for this purpose in printed texts.) Simple
Ruby markup also provides a "fallback" option for user agents that do
not support Ruby markup.
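
A minimal simple Ruby fragment might look like this (the Japanese word
and its reading are my own example, not taken from the spec):

<!-- Simple Ruby markup: the rt element gives the pronunciation of the
     base text (the word "kanji") in hiragana; the rp elements supply
     parenthesized fallback text for user agents that do not render
     Ruby. -->
<ruby>
  <rb>漢字</rb>
  <rp>(</rp><rt>かんじ</rt><rp>)</rp>
</ruby>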

 

Complex Ruby markup makes it possible to associate a single base text
with more than one annotation. In such cases, the first annotation
would typically indicate pronunciation and the second would provide the
meaning. Complex Ruby markup also makes it possible to divide the base
text into smaller units, each of which may be associated with a
separate Ruby annotation. Complex Ruby markup does not support the
fallback option.
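
As a sketch (again my own example, not from the spec), a single base
text with two annotation containers, one for pronunciation and one for
meaning, might look like this:

<!-- Complex Ruby markup: one base text (in an rbc container) with two
     ruby text containers. The first rtc gives the pronunciation, the
     second a meaning. The base word means "skillful" only when read
     じょうず, so pinning down the pronunciation also pins down the
     meaning. -->
<ruby>
  <rbc><rb>上手</rb></rbc>
  <rtc><rt>じょうず</rt></rtc>
  <rtc><rt>skillful</rt></rtc>
</ruby>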

 

Note: The primary reason for indicating pronunciation through Ruby or
any other means is to make the content accessible to people with
disabilities who can read and understand the language of the content if
information about pronunciation is provided. Creating an explicit
association between the content and the pronunciation information
ensures that pronunciation information remains available if the
presentation format is adapted to meet the user's needs.

 

Editor's note: Complex Ruby markup may be sufficient to satisfy this
success criterion when pronunciation and meaning are provided in
separate annotations of the same base text.

 

Editor's Note: As an additional benefit, it has also been suggested
that Ruby Annotation might be used to make content accessible to people
who use symbolic languages together with or as an alternative to
conventional text. For example, a symbol image could be used as a Ruby
annotation above a base text. Such practices might benefit people whose
speech or reading is impaired as the result of stroke or other injury
to the brain, or from other causes. (See A. Judson, M. Lundalv, B.
Farre, and L. Nordberg, <a
href="http://dewey.computing.dundee.ac.uk/ccf/cop/#d0e876">Concept
Coding Framework</a>.) However, the Ruby 1.0 specification does not
support the use of images, so implementation of this suggestion would
depend upon a change in the Ruby specification.
Resources
<a href="http://www.w3.org/TR/ruby/">Ruby Annotation</a>
<a href="http://ncam.wgbh.org/salt/guidelines/sec11.html">IMS Guidelines
for Topic-Specific Accessibility</a>

HTML Techniques
<a href="http://www.w3.org/TR/WCAG20-HTML-TECHS/#lang-att_change">Identifying
language changes</a>

CSS Techniques
<a href="http://www.w3.org/TR/css3-ruby">CSS 3 Ruby</a>

</proposed>

 

"Good design is accessible design."

Dr. John M. Slatin, Director 
Accessibility Institute
University of Texas at Austin 
FAC 248C 
1 University Station G9600 
Austin, TX 78712 
ph 512-495-4288, fax 512-495-4524 
email jslatin@mail.utexas.edu 
Web http://www.utexas.edu/research/accessibility

 

Received on Tuesday, 28 December 2004 17:45:22 UTC