- From: Al Gilman <asgilman@iamdigex.net>
- Date: Fri, 07 Sep 2001 15:56:40 -0400
- To: Charles McCathieNevile <charles@w3.org>, Ivan Herman <ivan@w3.org>
- Cc: Marja-Riitta Koivunen <marja@w3.org>, WAI Cross-group list <wai-xtech@w3.org>
At 11:17 PM 2001-09-06, Charles McCathieNevile wrote:

>Loretta Guarino Reid asked me the following:
>
>Do you have a clear model for the way an SVG user agent should interact with
>assistive technology with respect to Title and Description? If I were
>linearizing an SVG document, should the linearized version include all titles
>and/or descriptions? Should it stop descending the hierarchy as soon as it
>encounters a description? Are descriptions attached to nodes of a complex
>graphic each intended to be a stand-alone replacement for the attached node?
>Should they assume the context of encompassing descriptions?
>

AG:: The best answers to the above questions depend on more information about
the delivery context, and are not part of what the baseline SVG player is
responsible for, in my rough working guess.

** summary

The SVG player component needs to know how to lead, follow, or get out of the
way, and how to discover when to do each.

Lead: Draw the graphic per author defaults.

Follow:
a. Draw the graphic under the guidance of preferences (mode and global switch
settings) acquired from the context (host program and OS).
b. Adjust rendering per user input through interactive controls that the
module itself supports.

Get out of the way:
a. Lead the 'comprehension of the XML text' phase, build a DOM, and leave the
view extraction to the assistive technology.
b. Pass through the XML untouched.

There is no single "what to do for AT." There is a repertory of moves like the
above that the component should provide as capabilities. For a dumb, old
screen magnifier, the mode is Lead(a), and the magnifier just does a bitmapped
expansion. For a smart, new screen magnifier, the mode is Follow(a), because
the screen magnifier will understand how to get out of the way and let the SVG
engine do the magnification for this region. The screen magnifier should pass
in the desired level of zoom, and the SVG module pumps up the drawing
accordingly.
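The repertory above can be sketched in a few lines. This is only an illustration of the decision logic described in the summary, not anything a real SVG player implements; the `Mode` names and the `at_profile` dictionary are hypothetical stand-ins for whatever AT-discovery interface the host program and OS actually provide.

```python
# Hypothetical sketch of the lead / follow / get-out-of-the-way repertoire.
# The at_profile dict stands in for information sensed from the delivery
# context (host program and OS); none of these names come from SVG itself.
from enum import Enum

class Mode(Enum):
    LEAD = "draw the graphic per author defaults"
    FOLLOW = "draw under preferences and interactive controls"
    GET_OUT_OF_THE_WAY = "build a DOM, let the AT derive the view"

def choose_mode(at_profile: dict) -> Mode:
    """Pick a starting mode from the sensed AT population."""
    if at_profile.get("screen_reader"):
        # Screen reader organizes the view from the DOM.
        return Mode.GET_OUT_OF_THE_WAY
    if at_profile.get("magnifier") == "smart":
        # Smart magnifier delegates zoom to the SVG engine.
        return Mode.FOLLOW
    # Dumb magnifier (bitmapped expansion) or no AT: author defaults.
    return Mode.LEAD

print(choose_mode({"screen_reader": True}).name)  # GET_OUT_OF_THE_WAY
print(choose_mode({"magnifier": "smart"}).name)   # FOLLOW
print(choose_mode({"magnifier": "dumb"}).name)    # LEAD
```

Note that whatever this initial guess returns, the user remains in charge: user settings override the mode sensed from the installed AT.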
For a screen reader, you are most likely in mode GetOutOfTheWay(a). Build a
DOM and let the screen reader organize the view.

** discussion

This question is placed squarely in the middle of the quote device
independence unquote master plan. I thump the quotes because our cardinal
principle is that the user is in charge, and the role of user preferences and
interactive decisions is an integral part of one seamless adaptation strategy.
But to discuss the answer to this question, one has to lay out the master plan
for reference so that the answer to this particular example makes sense.

Which derived views are adaptive is radically different for different
disabilities, notably partial vision vs. no vision. The baseline technology is
not expected to know the right view for each given user and AT combination.
[Later I will discuss how it can know somewhat.] Even the person with no
vision may use a spatial metaphor happily if they have a tablet or haptic
mouse and the right sort of effects are laid on as styling to orient that
space.

The baseline technology may furnish an adaptable interface module of its own,
but the thing it must do is provide a documented data structure, documented
adequately to support the derivation of new views by new software of which the
base software knows nothing but the software interface it has to support so
that the new software can access the information. This is what is behind the
emphasis in the XMLGL on models and documenting them meticulously.

One can probably get into the right major mode of presentation by sensing the
assistive technology population on the client system, or by user settings that
override the guess based on what is installed. The architecture should support
knowing whether there is a screen magnifier vs. a screen reader in use, and
may make initial adaptations to suit. But the user is still in charge and can
override all that.
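To make "provide a documented data structure from which new software derives new views" concrete, here is a small sketch: walking an SVG document tree and pulling out the title/desc pairs with their nesting depth. This is the raw material an AT (or a canned extractor) could linearize at whatever level of detail suits the user; the sample document and the `outline` helper are my own illustration, assuming nothing beyond standard SVG title/desc nesting.

```python
# Sketch: extract (depth, title, desc) triples from an SVG DOM, the kind of
# structure an AT could use to derive its own view. Illustrative only.
import xml.etree.ElementTree as ET

SVG = "{http://www.w3.org/2000/svg}"  # SVG namespace, Clark notation

doc = """<svg xmlns="http://www.w3.org/2000/svg">
  <title>Org chart</title>
  <desc>Three-level company organization chart.</desc>
  <g>
    <title>Engineering</title>
    <desc>The engineering department.</desc>
    <g><title>Tools team</title></g>
  </g>
</svg>"""

def outline(elem, depth=0):
    """Yield (depth, title, desc) for each element carrying a title."""
    title = elem.find(SVG + "title")
    desc = elem.find(SVG + "desc")
    if title is not None:
        yield depth, title.text, (desc.text if desc is not None else None)
    for child in elem:
        yield from outline(child, depth + 1)

for depth, title, desc in outline(ET.fromstring(doc)):
    print("  " * depth + title + (" -- " + desc if desc else ""))
```

A titles-only pass over the same triples gives a summary plane; keeping the descriptions gives the detail plane. Which plane to present, and how deep to descend, is exactly the view decision that belongs with the AT rather than the baseline player.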
<http://lists.w3.org/Archives/Public/w3c-wai-ua/2001JulSep/0099.html>

If the SVG module supports a canned "dump to HTML" view extraction, it should
work more or less like a DAISY digital talking book in the result. This is on
two planes: a summary plane, called the table of navigation, which is a
titlesOnly extract; and a full-text version with navigation infrastructure
around the full-text plane and to and from the summary plane.

<http://www.loc.gov/nls/niso/>

To support the more interactive dataBrowser mode of operation with an
interactively controlled view definition (which is definitely on the shopping
list), it makes more sense for the AT to be managing the view definition.

The other two reference target delivery environments, to see if your
multimodal resource base is indeed robust, are a pure voice dialog deliverable
over a phone with no keypad, and a silent, wordless interactive video game. If
you can generate by standard algorithms those three versions of your service
from the resource base, including its machine-interpretable metastuff, you are
probably "go, for three orbits."

>I thought this would be a good place to discuss it.

Perhaps, but there is too much afoot to do so without a may_I to the CG to
explain the ramifications. We need to run this through the CG in due course to
determine how to work on it. This is a working-group-sized issue and is
inseparable from the larger issues of what the deal is for sharing the screen.

Al

PS (what _is_ missing): The key function that is missing in SVG players, IMHO,
is the intersection processing, so that any svg:g element that defines a
closed path creates a sensitive region, and the software can tell you whether
the cursor is in that 'g' at any time you care to ask. This would allow SVG to
be the medium of add-on assistive 'touch-screen' functionality: the medium for
re-programming response to pointer events.
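The intersection processing the PS asks for is, at bottom, a point-in-region test. A minimal sketch, assuming the closed path has already been flattened to a polygon (a real player would work from the parsed path data and the current transform): plain ray casting answers "is the cursor in this 'g' right now?"

```python
# Sketch of the missing intersection query: given the vertices of a closed
# path from an svg:g (already flattened to a polygon), report whether a
# pointer position falls inside it. Ray-casting; illustrative only.
def point_in_path(pt, path):
    """Return True if point pt = (x, y) lies inside the polygon `path`."""
    x, y = pt
    inside = False
    j = len(path) - 1
    for i in range(len(path)):
        xi, yi = path[i]
        xj, yj = path[j]
        # Count crossings of a horizontal ray from pt to the right.
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(point_in_path((5, 5), square))   # True
print(point_in_path((15, 5), square))  # False
```

With such a query exposed per 'g', add-on software could attach its own responses to pointer events entering or leaving each sensitive region, which is the 'touch-screen' functionality described above.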
>I think there ought to be some default rendering level for a linear
>presentation - title and desc that are children of the SVG, or grandchildren.
>Maybe title of elements that are descendants of those. Beyond that I think it
>should be a navigable thing - increase detail, for example. Alternatively,
>build it as nested content, and rely on people having a decent browser to
>help them navigate at the right level of detail.
>
>The Batik guys have got tooltips available for titles, and I am not sure what
>Amaya does for its text rendering but will test something in it.
>
>What do folks think?
>
>Chaals
>
>--
>Charles McCathieNevile  http://www.w3.org/People/Charles  phone: +61 409 134 136
>W3C Web Accessibility Initiative  http://www.w3.org/WAI  fax: +1 617 258 5999
>Location: 21 Mitchell street FOOTSCRAY Vic 3011, Australia
>(or W3C INRIA, Route des Lucioles, BP 93, 06902 Sophia Antipolis Cedex, France)
Received on Friday, 7 September 2001 15:33:43 UTC