- From: Al Gilman <asgilman@iamdigex.net>
- Date: Sat, 28 Oct 2000 23:32:33 -0400
- To: Web Content Accessibility Guidelines <w3c-wai-gl@w3.org>
At 12:40 PM 2000-10-29 +1100, Jason White wrote:

> Part 2 is what Kynn has referred to as a "backup scenario". What I fail to
> grasp is why, in principle, the resultant interface, generated from
> high-level abstractions through software, must be qualitatively inferior
> to a custom-designed interface (for a specific modality or output device)
> made available by the content developer. Thus I wouldn't regard scenario 2
> as in any way a second-rate solution, and it is quite possible to dispense
> with scenario 1 entirely through leaving user interface construction
> entirely outside the author's control; but the guidelines should not
> restrict content developers in deciding whether they will offer 0, 1, 2 or
> more interfaces in addition to their high-level markup and semantics,
> their equivalents, etc.

Why is there an advantage to working small variations off a closely-spaced set of manually-composed bases, rather than reaching all forms by [relatively larger] transformations of a single common base?

This is related to the fact that Web media are intrinsically semi-formal, partially-understood formats, and to the fact that our formal models of natural language are approximate, not complete and fluent.

In natural language, the diction used in bullet lists is different from the diction used in narrative or oral presentation of the same set of ideas. The transformations are complex. The state of the art in natural language processing is not presently up to performing the transformation between these variants while appearing fluent in the output. This is similar to the state of automatic translation among natural languages. Different display and interaction spaces have their own idioms and optimizations, which the artful author and designer follow but which nobody has reduced to complete and fluent rule sets. The art of writing captures and conveys more than does the science of grammar.

Written representations of Boolean algebra can be transformed entirely automatically between visual graphics, linear typescript, and oral readout without a twinge of lost information or grace. But natural language is more chaotic than Boolean algebra, and for most Web content the sense is primarily conveyed by only slightly enriched natural language (counting verisimilitude in diagrams and images as natural language).
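To make the Boolean-algebra point concrete, here is a minimal sketch of what such a lossless transformation looks like: a single abstract representation of a Boolean expression, rendered by rule into both linear typescript and an oral readout, with nothing lost in either direction. The class and function names are illustrative assumptions, not taken from any particular system.

```python
# Sketch only: one abstract base (an expression tree), several renderings
# produced entirely by rule. All names here are illustrative.

from dataclasses import dataclass
from typing import Union

@dataclass
class Var:
    name: str

@dataclass
class Not:
    operand: "Expr"

@dataclass
class And:
    left: "Expr"
    right: "Expr"

@dataclass
class Or:
    left: "Expr"
    right: "Expr"

Expr = Union[Var, Not, And, Or]

def typescript(e: Expr) -> str:
    """Linear typescript: conventional infix notation."""
    if isinstance(e, Var):
        return e.name
    if isinstance(e, Not):
        return f"~{typescript(e.operand)}"
    op = " & " if isinstance(e, And) else " | "
    return f"({typescript(e.left)}{op}{typescript(e.right)})"

def readout(e: Expr) -> str:
    """Oral readout: spoken words with explicit grouping, still unambiguous."""
    if isinstance(e, Var):
        return e.name
    if isinstance(e, Not):
        return f"not {readout(e.operand)}"
    op = "and" if isinstance(e, And) else "or"
    return f"open {readout(e.left)} {op} {readout(e.right)} close"

expr = Or(And(Var("a"), Not(Var("b"))), Var("c"))
print(typescript(expr))  # ((a & ~b) | c)
print(readout(expr))     # open open a and not b close or c close
```

Both renderings are mechanically complete because the formalism is; that completeness is precisely what natural language lacks, which is the force of the paragraph above.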
To gracefully span a range of media as distant as HTML on the computer screen and VoxML on the phone, it is not enough to change what we have isolated in style languages as presentation properties. "The content" has to change, too. This is a message that I think I heard Daniel take away from the Bristol workshop. I hope I am not misquoting him.

People with disabilities do put up with some pretty ugly transformations when the alternative is that it doesn't work at all. Commercial competition sets a higher standard for graceful results. So we should be glad that commercial interest is now being shown in the problem of how to serve the same information in diverse interaction spaces. The results of doing this intentionally should turn out better than what we can arrange as workarounds.

I think that I would side with Jason a bit in saying that Kynn's claim that the single-source strategy is inferior "in theory and in practice" is a little too strong. I am not sure we have adequate theory to demonstrate the theoretical inferiority of that approach. However, I am inclined to expect, with Kynn, that the alternative where the people make more of the transformation decisions manually up front will work out better in practice. Unless, of course, they think that they have thought of everything and nothing therefore has to be left flexible...

Al
Received on Saturday, 28 October 2000 23:05:19 UTC