- From: Al Gilman <asgilman@iamdigex.net>
- Date: Sun, 09 Apr 2000 23:54:17 -0400
- To: <w3c-wai-gl@w3.org>
At 02:25 PM 2000-04-09 -0500, Gregg Vanderheiden wrote:

>Jonathan,
>
[and then later]
>The questions are directed to Jonathan since they come off of his comments
>but are general questions I am wrestling with and can be addressed by
>anyone. I think we need to address questions like these in order to move
>forward in a concrete way.
>
[...]

AG:: I'm going to take a swing at some of these.

>Some questions about the WAI home page assessment and writing pages for CD
>consumption in general.
>
>Some of the suggestions would seem to make it more accessible to people
>with CD but at the expense of others. Your examples of having only a few
>words on a page. Dropping navigation bars that at least I find very
>helpful when I come to a page. (In fact on most sites I see if I can find
>what I want in the navigation bars first and only begin plowing through
>the text if I can't find a shortcut to where I want to go.)
>
>For most access measures we recommend, we don't change the format that
>everyone would see, but only recommend alternatives that could be viewed
>by people who have trouble with the original form. Most of your
>recommendations seem to recommend changing the way everyone would see a
>page so that it would work for people with CD. This often makes the
>primary presentation of the page less usable for others - perhaps for all.
>
>[NOTE: this does not include the ideas that Jonathan pointed out that we
>all would agree would make the page better for everyone.]
>
>1) Do you (does anyone) have any ideas for how to make a page more
>accessible in a way that doesn't change the presentation for others? (like
>ALT text, long desc, closed captions, alternate OBJECT content, etc.)

AG:: Dynamically generating the pages. See the long thread from Scott. For a rough idea, try this thought experiment: consider the web clippings that are being sent to Palm VII devices. Blow them up to desktop screen size with a stylesheet rich in suggestive imagery.
The New York Times is publishing the same news, but you can read it in small bites or large depending on your preferences. Most users will want a navigation aid such as the left side bar or talking book "navigation center" close at hand. In audio, however, you want it somewhere you can navigate to, not continually playing in the background. On a many-pixel screen with normal vision, one wants it right there, sharing screen real estate with the current topic. But I suspect that low vision users would be like the web clippings consumer in wanting this in a separate timeslice and not concurrently displayed.

So, the basic answer is, there is no "everyone else" who sees things one way. The way one sees it differs based on equipment, vision, and tolerance for complexity. And the difference includes the dimension of "how much of the story you see at any one time." Just as it is helpful to duck the background music out of the way when inserting audio descriptions, or to kill wallpaper in high contrast mode, the navbar should be prepared to duck out of the way if the customer needs a very simple panorama in order to focus on the focus. Thus the neighborhood of the current point in the browse process will have more or less context outlined in the periphery of the current display depending on whether the display is audio, braille, or low vision, and evidently for complexity-challenged persons as well. If a person has trouble keeping their place in a complex display, we can try to make it more usable by zooming in on the question immediately at hand. This is the point. Zoom here is a view control, not a change in the content of the message.

In my mind this leads straight back to where do we account the "resource" that the person has a right to access. In the first incarnation of the WWW, the resource was a bunch of HTML files archived in a file system and passed in HTTP verbatim as retained. Nowadays, what is retained is often in a database, and what is passed in the HTTP is an on-the-fly report generated not only from the HTTP GET query from the user but from some logic based on their clickstream history on the server and the queue of billable ads to be exposed to visitors. At least for these dynamic-page services, it is clear that we can view the total resource as a network of information nodes and serve large or small chunks of it to different people without violating the principle that we are affording access to the same information to all.

>2) If breaking the pages up and putting in alternate graphic presentation
>would make them more accessible to CD - would you recommend that all the
>pages on the web (or the WAI site) be done this way?
>

AG:: I would not recommend that all our pages be reduced to thirty words each. On the other hand, most of our pages would be improved if they were edited in the direction pointed by Jonathan's comments. For everyone.

>3) Would putting graphics on the WAI pages make any of that information
>really accessible to people with CD? If so, how serious a CD (or what
>types) could a person have and still understand the concepts presented on
>most of the WAI pages? (Not a couple of fundamental ideas like "this group
>is helping to make web or Internet pages easier to use for people with all
>disabilities," but the vast majority of the information presented.)
>

AG:: Multiple thoughts from different angles: I think that there is a lot of information that the W3C site should be serving which is highly technical, and we don't know how to recast it in visual metaphor. Then again, there is research such as what is shown at http://www.anu.edu.au/pad/reporter/tolmie.html on representing systems of mathematical knowledge in visual imagery. Had this been applied to the ongoing work of the XML Working Group, we might have spared ourselves much agony.
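The dynamic-page idea earlier in this message -- one retained resource, served in large or small chunks so that "zoom is a view control, not a change in the content" -- can be sketched minimally. This is only an illustration; all names (`RESOURCE`, `render`, the `view` parameter) are hypothetical, not any real WAI or W3C interface.

```python
# A minimal sketch of "one resource, many views": the same stored
# information node is rendered briefly (navbar ducks out of the way)
# or fully (navbar shares screen real estate with the topic).
# All names here are hypothetical, for illustration only.

# The single retained resource: one node in the network of information.
RESOURCE = {
    "title": "WAI Home",
    "summary": "Making the Web accessible to people with all disabilities.",
    "body": ("The Web Accessibility Initiative develops guidelines, "
             "techniques, and resources for accessible Web content."),
    "nav": ["Guidelines", "Techniques", "Resources"],
}

def render(resource, view="full"):
    """Serve a large or small chunk of the same information.

    view="brief" -> a few words only; the navigation bar is omitted,
                    for a very simple panorama focused on the focus.
    view="full"  -> body text with the navigation bar alongside.
    """
    if view == "brief":
        return resource["title"] + "\n" + resource["summary"]
    nav = " | ".join(resource["nav"])
    return resource["title"] + "\n" + nav + "\n" + resource["body"]

# Two users, two presentations, one underlying resource.
print(render(RESOURCE, view="brief"))
print(render(RESOURCE, view="full"))
```

The point of the sketch is that both renderings are computed from the identical stored node, so serving the simplified view violates no principle of equal access to the same information.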
Partly, while it would be splendid for the WAI home page to be a portal for all people with disabilities onto the World Wide Web, I don't believe that is its present prime mission, and I am not sure we can mobilize the resources to make it serve that purpose. On the other hand, if anyone were to put together a research portal targeted at experimenting with views adaptive for persons with cognitive disabilities, similar to what Silas Brown did for vision-impaired consumers, I believe that the WAI and the WCAG would be the richer for it. So this definitely is a goal in which the proposed WAI Research Group would have a sympathetic interest. Perhaps as an application of the Netomat engine, or some competing view control technology.

>4) How many of the people (types not count) who could figure this
>information out from symbols would not understand it if it were read to
>them aloud? (or signed to them... we will soon have text to sign language
>software)
> - which types of CD?
> - which types of information (specifically) on the site could they
>understand in pictures but not in speech? (or sign?)
>
>(You can answer the speech and sign language questions separately since
>text to sign is not yet here)
>
>The questions are directed to Jonathan since they come off of his comments
>but are general questions I am wrestling with and can be addressed by
>anyone. I think we need to address questions like these in order to move
>forward in a concrete way.
>

AG:: Part of my problem in responding to this thread has to do with a suspicion that there is a mismatch between problems and solutions. If we modify the WAI website to make it more like Jonathan suggests, I think we will help ourselves, but I don't think we will help him much. I don't think we can turn the WAI website into something that will contribute materially to his teaching or his students' learning. I fear it would be a largely symbolic victory.
I suspect that for the students we want to get them communicating with the aid of the computers and adding value to their life by sharing information that way. This probably starts with WebPhone. Then motivate the role of record data (not written language at first, but later) as it helps with this chat and collaboration side of their life. Then gradually introduce recorded forms (voicemail) of human communication, and from this gradually wean them from verbatim audio and personal icons for phone lists to more orthodox encodings of language, perhaps by a book-reader applied to flash cards. Whatever language is most accessible and motivated in their daily life. Not arbitrary blather by blokes on the web.

Place a web phone call using a directory at a special-ed gateway server that has personal icons in the directory and not just text. Then learn to make a personal phone list by copying objects. Maybe even give each student a flash card so they get the idea that their phone list is on the computer if and only if they let the computer have their card. This way they get to a positive idea of record data in an object-oriented fashion before we get into hard things like complex codes.

Also, as a note, I suspect that from the standpoint of the learning objectives of the students, it may make more sense to introduce speech-to-text into their experience before text-to-speech. As I recall, the classic way to teach writing was to get the student to draw a picture and tell the story of what is in the picture. Then the teacher writes down the story and mounts it with the picture. If the leap to the idea that these squiggles on the page could be related to human utterance is a high hurdle, best to attack it first with an utterance the student really relates to -- like what they said. This is just a wild guess. The team at CAST would have much more and better clues than this.

Al

>Thanks
>
>Gregg
>
>-- ------------------------------
>Gregg C Vanderheiden Ph.D.
>Professor - Human Factors
>Dept of Ind. Engr. - U of Wis.
>Director - Trace R & D Center
>Gv@trace.wisc.edu, http://trace.wisc.edu/
>FAX 608/262-8848
>For a list of our listserves send "lists" to listproc@trace.wisc.edu
>
Received on Sunday, 9 April 2000 23:52:46 UTC