- From: John M Slatin <john_slatin@austin.utexas.edu>
- Date: Tue, 1 Nov 2005 11:42:37 -0600
- To: "Lisa Seeman" <lisa@ubaccess.com>, "W3c-Wai-Gl" <w3c-wai-gl@w3.org>
- Message-ID: <6EED8F7006A883459D4818686BCE3B3B02505996@MAIL01.austin.utexas.edu>
Lisa and Bengt have proposed a change to GL 3.1 L3 SC1. Lisa writes:

<blockquote>
change: A mechanism is available for finding definitions for all words in text content.

to: A mechanism is available to determine the meaning of each word or phrase in the content

The difference is that the user can pinpoint the intended definition, and not just point to a set of possible definitions. Ambiguous words are a big problem for people with cognitive disabilities; pointing them to a set of definitions doesn't help them and only puts more of a burden on the author. If we do want to include an SC about finding definitions of words, it should be about the exact definition and not a set of definitions.
</blockquote>

Thanks for putting this forward.

I'm concerned about requiring a mechanism to "determine the meaning" of words or phrases in the content. I don't think it's testable, especially with respect to phrases. My training is in literary studies. There are certain phrases whose meaning literary scholars and critics have been arguing about for centuries. Those arguments will go on forever because the phrases in question are metaphorical-- the metaphor *is* the meaning.

As recently as the 19 November 2004 working draft (http://www.w3.org/TR/2004/WD-WCAG20-20041119/), GL 3.1 talked about determining meaning, both in the Guideline itself and in at least two L2 success criteria. We adopted new wording for the 30 June 2005 draft, based on discussion at the Brussels face-to-face, where I argued that requirements about determining meaning are untestable whereas requirements about definitions can be tested. So I think it would be a bad idea to revert to an untestable requirement.

Lisa says that there should be a requirement about finding "intended definitions" rather than merely presenting users with a list of available definitions. There is such a requirement at GL 3.1 L3 SC2 in the 30 June 2005 draft. However, it applies only to words used in an unusual or restricted way.
It uses the phrase "specific definitions" instead of "intended definitions" because authorial intent is not testable.

I'm prepared to agree that the wording of GL 3.1 L3 SC1 that appears in the 30 June draft is both too demanding (for authors) and not helpful enough (for the users we're aiming at), so it should go. But I don't think the proposed wording quite works.

John

"Good design is accessible design."
Dr. John M. Slatin, Director
Accessibility Institute
University of Texas at Austin
FAC 248C
1 University Station G9600
Austin, TX 78712
ph 512-495-4288, fax 512-495-4524
email jslatin@mail.utexas.edu
Web http://www.utexas.edu/research/accessibility

-----Original Message-----
From: w3c-wai-gl-request@w3.org [mailto:w3c-wai-gl-request@w3.org] On Behalf Of Lisa Seeman
Sent: Tuesday, November 01, 2005 3:32 AM
To: W3c-Wai-Gl
Subject: action item: 3.1 L3 SC1 proposal

This is a proposal from Yvette, Bengt and myself, with some help from Gregg.

change: A mechanism is available for finding definitions for all words in text content.

to: A mechanism is available to determine the meaning of each word or phrase in the content

The difference is that the user can pinpoint the intended definition, and not just point to a set of possible definitions. Ambiguous words are a big problem for people with cognitive disabilities; pointing them to a set of definitions doesn't help them and only puts more of a burden on the author. If we do want to include an SC about finding definitions of words, it should be about the exact definition and not a set of definitions.

Guide information

Ambiguous use of language creates problems with translation, misunderstandings, and accessibility for people with cognitive disabilities. Translation to symbolic languages, or to simpler language for cognitive disabilities, cannot be automated. Use of a controlled language solves this problem but restricts the author's ability to stylize and express themselves.
By referencing textual content, its meaning becomes unambiguous, translatable and machine-readable without restricting the author's use of language.

Techniques:

(Note: I need to double-check the techniques; they are not edited until the group approves the SC. In general they need more full examples - I volunteer to do that if the SC is approved. Also, if the group likes this then I can add more techniques on how to use cascaded dictionaries, which speeds it up.)

1. The total text is based on a controlled vocabulary such as VOA's or BLISS (for the cognitively disabled), in which case the complete text can be marked with which dictionary it is based on. (This is regularly done with translation services and their TMs (translation memories).)

2. CCF is a technique to access the meaning and also to access alternative vocabularies that may be languages or symbol sets. See http://www.conceptcoding.org

3. HTML:

   <link rel="definitions" href="mysite.com/my-prefered-usages.html">
   <link rel="definitions" href="mysite.com/my-page-usages.html">
   <link rel="definitions" href="dictionary.com/dictionay1.html">

   You can then add an inline link to any usages that change the rules. With CSS classes these links need not be rendered unless requested by the user.

4. XHTML 2.0 technique usage example:

   <span role="_:Jon">He</span> has brown eyes.

5. XML technique usage examples: any word or phrase, or even part of a word, in the content can be pointed to by an XPointer, and a reference can be given to the meaning or to the dictionary where it is defined.

6. RDF technique usage examples. In the following example RDF is used to link a phrase or word to a definition. This makes the text unambiguous.

   <rdf:Description rdf:about="xpointer to text" type="ub:accessibilityAnnotation">
     <ub:lexicon>wordnet/~wn/consept#10293829</ub:lexicon>
   </rdf:Description>

   In the following example RDF is used to link a phrase or word to a summary and picture.
This makes the text understandable.

   <rdf:Description rdf:about="some xpointer to obtuse legal paragraph" type="ub:accessibilityAnnotation">
     <ub:AlternativeContent>
       <ub:profile>simplified</ub:profile>
       <rdf:Bag>
         <rdf:li><ub:summary value="we own you from now on"/></rdf:li>
         <rdf:li><ub:nonText value="picture_of_slave_in_chains.gif"/></rdf:li>
       </rdf:Bag>
     </ub:AlternativeContent>
   </rdf:Description>

Language-specific notes:

The Dutch language has two features that make it potentially more complex than English:

1. There are a lot of foreign (English) words and phrases. Not all of these will be in the Dutch dictionary, but you could point to both a Dutch and an English dictionary or some other method to determine the meaning. This would not be a problem for 3.1 L3 SC1.

2. The Dutch glue words together to form new ones (we can create words like 'swordmakersworkshopdoorhandle'). These words will not be in a dictionary, and their meaning cannot be programmatically determined unless you hand-code every instance. I don't think we want to require that, so that is a problem with your proposal. Their meaning can be determined manually using a dictionary, though: you just try to look up the whole word, and if you don't find it, look for the longest bit that is in the dictionary (swordmaker), and then look up the rest the same way. You have to know the rules about gluing them together (adding the extra 's'), but people who know Dutch know that. Pointing to a dictionary that has the 'base' words would conform to this SC. Even though the meaning of combined words is not programmatically determinable, the user will have a mechanism to find out their meaning.

Swedish is similar, but any new combination that does not exist is not valid until listed in SAOL (the Swedish Academy wordlist). Any new compound word in Swedish is easily recognized, due to the strict rules of their making.
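The manual lookup procedure described above for Dutch-style compounds (try the whole word, otherwise peel off the longest dictionary prefix, allow for the linking 's', and repeat on the remainder) can be sketched as a short script. This is only an illustration of that procedure; the `decompose` function and the tiny `lexicon` set are invented for the example and are not part of any proposed mechanism.

```python
def decompose(word, dictionary):
    """Return a list of dictionary 'base' words covering `word`, or None."""
    if word in dictionary:
        return [word]
    # Try the longest dictionary prefix first, as the text suggests.
    for cut in range(len(word) - 1, 0, -1):
        prefix = word[:cut]
        if prefix not in dictionary:
            continue
        rest = word[cut:]
        # Dutch compounds may insert a linking 's' between the parts.
        for tail in (rest, rest[1:] if rest.startswith("s") else None):
            if tail:
                parts = decompose(tail, dictionary)
                if parts is not None:
                    return [prefix] + parts
    return None

# A made-up base-word lexicon for the example from the text.
lexicon = {"swordmaker", "workshop", "door", "handle"}
print(decompose("swordmakersworkshopdoorhandle", lexicon))
# -> ['swordmaker', 'workshop', 'door', 'handle']
```

A real implementation would also need the other compounding rules the text alludes to, but even this greedy sketch shows why pointing at a dictionary of base words gives users a workable mechanism.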
A new Swedish Associative Lexicon has been made (by a researcher); it resembles wordnets, in that each meaning-carrying part has typed relations such as hypernyms, with different weights depending on the major/minor meaning-carrying part. This is only about compound words.

In Hebrew there are seven diacritic marks that alter the vowel sound of a character and also its meaning. Hebrew sites can point to an online decoder to determine the diacritic marks. In cases where the automated guess is incorrect, enough diacritic marks need to be added to enable correct automated decoding of the word.

Examples:

This example is a CMS (content management system) that has been expanded with concept coding: http://www.symbolnet.org/symbered-demo/stories/my_first/ Click on user preferences at the bottom and check any language and a symbol set. This only works on browsers that implement Ruby Annotations correctly; so far in IE some info is placed in the wrong places. Firefox works with the following plugin enabled: http://piro.sakura.ne.jp/xul/_rubysupport.html.en#download

All the best,

Lisa Seeman
http://www.ubaccess.com/
Received on Tuesday, 1 November 2005 17:43:04 UTC