- From: Jason White <jasonw@ariel.ucs.unimelb.edu.au>
- Date: Sun, 29 Oct 2000 14:01:25 +1100 (EST)
- To: Web Content Accessibility Guidelines <w3c-wai-gl@w3.org>
Another reason for ensuring that the data model, as opposed to the final interface, is available for processing by software under the user's control is that, given Len's groups of users (with different needs, devices, preferences, etc.), namely {U1, U2, U3 ... Un}, we know empirically that n, even if we could agree upon a means of quantifying it, is very large.

Now consider a number of parallel, but different, user interfaces designed by the creator of the content, each of which presents the same content in a distinct way (I shall preserve Len's symbolism, though for the sake of generality I refer to them as user interfaces rather than web sites): {S1, S2, S3 ... Sm}. To each interface I (an element of S) there corresponds a subset C of U such that I satisfies the profile of needs and preferences of those u that comprise C.

Here the quasi-formalistic treatment starts to break down and we must introduce a number of further empirical observations. Most importantly, if each element I of S is very concrete and specific (for example, if it consists of rendered content), then the relation between S and U defined above becomes one-to-one, or nearly so (mathematicians reading this will obviously decry the lack of precision); the idea is that each subset C of U whose needs and preferences are satisfied by a given interface I has one element, or at most very few. By contrast, if the author provides high-level abstractions which can be transformed, by software operating under the user's control, into a multiplicity of distinct interfaces, then a potentially large set of interfaces can be generated automatically, thereby satisfying the needs and preferences of many, though perhaps not all, members of U.

Of course, this "interface generation" capability could reside in software operating under the author's control; but in that case, to achieve the same result as when the interface is generated by the user's software, the author's server-side software would need to generate as many elements of the set of interfaces satisfying U as the various user agents available to members of U can generate between them. This is impracticable, both because n is large and because expertise in some of the required interfaces (for example, tactual or speech-based interfaces) is likely to be concentrated among user agent developers rather than server developers.

What the 2.0 guidelines do is to require the higher-level semantics, while permitting content developers to satisfy some subset of U with (author-specified) interfaces comprising the set S. Each u (a member of U) then has the choice of either selecting an element of S, or of constructing her/his own interface I from the higher-level semantics and structure available from the content developer's server, via style sheets and other mechanisms. Conceptually, we may consider the higher-level semantic option to be itself a member of S which, unlike any other author-supplied interface I (element of S), satisfies a large subset of U, including most, if not all, of those u (elements of U) who are not satisfied by the remaining elements of S. That is why we mandate the semantically rich option, while allowing authors to supply other interfaces I which are elements of S.

I apologize in advance for any mistakes in the quasi-mathematical symbolism, which is used here only as a metaphor. The usual disclaimer applies.
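P.S. For those who find it helpful, here is one way of making the metaphor slightly more formal. The symbols sat, D, and T_a below are my own shorthand, not Len's, and the approximations are suggestive only:

    % Shorthand introduced here, not part of Len's original notation.
    U = \{u_1, u_2, \ldots, u_n\}    % user profiles; n is very large
    S = \{I_1, I_2, \ldots, I_m\}    % author-supplied interfaces

    % Each interface serves the subset of users whose profiles it matches:
    \mathrm{sat}(I) = \{\, u \in U \mid I \text{ satisfies } u \,\}

    % When each I is fully rendered (concrete), |sat(I)| is roughly 1,
    % so author-supplied interfaces cover only about m of the n profiles:
    \bigl|\textstyle\bigcup_{I \in S} \mathrm{sat}(I)\bigr| \approx m \ll n

    % When the author instead publishes the abstract data model D, each
    % available user agent a applies its own transformation T_a, and the
    % union of the generated interfaces approaches full coverage of U:
    \bigl|\textstyle\bigcup_{a} \mathrm{sat}\bigl(T_a(D)\bigr)\bigr| \approx n

The claim is simply that the second union is far larger than the first, because the transformations T_a are supplied by the many user agent developers collectively rather than by any one server.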