- From: Jason White <jasonw@ariel.ucs.unimelb.EDU.AU>
- Date: Sun, 21 Sep 1997 10:55:49 +1000 (AEST)
- To: WAI HC Working Group <w3c-wai-hc@w3.org>
Once again taking Al's lead, there is no doubt that greater flexibility is achieved by classifying dictionaries in terms of the types of data they contain. If the user agent is informed (via REL and/or CLASS), before fetching the dictionary file, of the kinds of entries it offers, then the software can decide, on whatever basis is most appropriate, whether to access the resource and, if so, how to process it. For example, a speech-based user agent might be configured to respond to dictionaries containing abbreviations and/or pronunciation data, which would then be relied upon when converting the associated document into an audio rendering (audio formatting).

If other members of this WG are supportive of this solution, then it merely remains for us to work out the implementation details so that a precise recommendation can be made. What we need, I think, is a keyword such as DICTIONARY, followed by a provision for a list of dictionary data types (given either in the REL attribute along with the DICTIONARY keyword, or in the CLASS attribute). There is no need to define exhaustively the types of dictionary data for which designators should be reserved, but it would be best to name a few, such as "abbreviation", "pronunciation" (or "phonetic"), "definition", "etymology", etc. We should highlight those which are of most benefit to people with disabilities.
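To make the shape of the proposal concrete, the markup might look something like the sketch below. This is only an illustration of the idea: the DICTIONARY keyword and the type names in CLASS are the placeholders suggested above, not values currently defined in HTML, and the HREF and TITLE are invented for the example.

    <LINK REL="DICTIONARY"
          CLASS="abbreviation pronunciation"
          HREF="abbrev-dict.html"
          TITLE="Abbreviation and pronunciation dictionary">

A user agent could then inspect the REL and CLASS values before retrieving the file and fetch only those dictionaries whose declared data types it is prepared to use.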
Received on Saturday, 20 September 1997 20:55:58 UTC