- From: Al Gilman <asgilman@iamdigex.net>
- Date: Thu, 28 Dec 2000 10:47:00 -0500
- To: Nick Kew <nick@webthing.com>, Charles McCathieNevile <charles@w3.org>
- Cc: w3c-wai-er-ig@w3.org
At 11:14 AM 2000-12-28 +0000, Nick Kew wrote:

>On Wed, 27 Dec 2000, Charles McCathieNevile wrote:
>
>> I propose that this discussion move to the ER mailing list entirely, unless
>> anyone objects. So please make that the recipient of further posts if nobody
>> screams...
>
>Aaargh!!!
>
>[ sorry ]
>

AG:: Double oops... The ER list Charles meant is this one that you are on. I believe he just meant you can drop the 'validator' list if what you want to talk about is validation as a mechanism to check accessibility. The ER Interest Group and Working Group never really accomplished cell division when we set them up in the first place, and the official charter paper has now merged them. The <w3c-wai-er-ig@w3.org> mailing list is the communication channel being used. He did not, and we do not, want you to move off this list.

>> The question is,
>> of course, what can be tested using SGML/XML validity type checking?
>
>That's a long answer: I think I'll put mine on the Web and open it for
>discussion there, rather than post it in full.
>

AG:: That's an excellent tack to take. Let me give you a rough sketch of a possible "conventional wisdom" in this area. The presentation may have a bit of a negative bias with regard to the utility of what you are proposing to do; this is by design, because I want you to understand your challenges: what we will have to be shown before we are sold on the utility or importance of a new technique we don't yet really understand.

1. Validation to DTDs, whether SGML or XML, is incapable of validating the full graph structure of certain content. In a TABLE, a cell has two part:whole parents, a row and a column. DTDs capture only what goes with the syntax tree, with the exception of IDREF cross-links between nodes across the tree texture. That does not add up to a way to create, via the DTD, a validatable model of the cell-to-column relationship. As far as we know.

2. We know people have talked about architectures in SGML; we basically don't know what they are or what their capabilities are. The XML community seems to be convinced that they are too confusing to be bothered with. This could be a smoke screen for other interests, of course, but the bottom line is: if what you think you can validate exceeds what can be done with a DTD because you are using SGML architectures, then we can't say much about what we think can and cannot be done this way. We just don't know; we really don't have a clue.

3. At the moment we are looking more at RDF and XML Schema as the way to capture the model that one validates to, as opposed to SGML architectures or any other model-capture or expression technique. This is because the development directions of the W3C, reflecting the bulk of the industry, suggest that these techniques will be supported by merchant tools. We are just trying to work with what the industry is doing; we don't have the resources to invent the whole system from top to bottom.

4. It is likely that the most important accessibility checks are not expressible in a purely formal system: that is to say, a closed formal system where the validation can be done automatically, by a machine alone, against a strictly formal model. Hypertext and contemporary Web content are intrinsically semi-formal in nature. Most of the information is in natural encodings that require human interpretation to be comprehended. A light veneer of machine-interpretable markup is added to armor the natural content so that it survives the machine processing that carries it between author and user.
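Point 1 above can be made concrete with a small sketch (plain Python; the sample markup and function names are my own, for illustration only). A DTD can constrain which elements a row may contain, but it cannot require that every row supplies a cell for every column, so that part of the table model has to be checked procedurally:

```python
import xml.etree.ElementTree as ET

# A DTD can say a <table> contains <tr> rows and rows contain cells,
# but it cannot say that every row has the same number of cells --
# the cell-to-column relationship lies outside the syntax tree.
doc = """<table>
  <tr><th>Name</th><th>Age</th></tr>
  <tr><td>Ada</td><td>36</td></tr>
  <tr><td>Alan</td></tr>
</table>"""

def column_counts(table_xml):
    """Return the number of cells in each row of the table."""
    root = ET.fromstring(table_xml)
    return [len(row) for row in root.findall("tr")]

counts = column_counts(doc)
# Rows whose cell count differs from the header row fail the check.
bad_rows = [i for i, n in enumerate(counts) if n != counts[0]]
print(bad_rows)  # -> [2]: the third row is short one cell
```

The point is not this particular check, but that the cell-to-column binding is only validatable by stepping outside what the DTD grammar can express.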
Particularly in the structural area, it is critical that the markup not only create structure classes that the tools understand how to process, but also that these structures be bound to natural-content structures that the community of authors and users will understand and use with a consistent sense or connotation, the way the senses of natural-language words are preserved through a life cycle of oft-repeated use. So quality tools for the formal-to-informal semantic association are of the essence. Purely formal methods can't touch this domain of inquiry, so far as we know, and hence have limited potential as far as access enhancement is concerned.

The classic example here is whether the link content in a referring page is a good hint or preview of what you get if you activate the link and navigate to the page referred to. The connection is formally defined, but its validity is only observable by natural interpretation of the informal content of the html:a element and the full (including informal) content of the resource referenced in the html:a.href attribute. See also the following on this point:

HCI Fundamentals and PWD Failure Modes
http://trace.wisc.edu/docs/ud4grid/#_Toc495220368

The evaluation strategy in this case is to devise formally definable content transformations which take the content relationship on which the consumer with a disability depends, and pose it to the human author as a question, in a mechanically transformed query that makes the specific question clear, easy to recognize, and hence not a burden to answer. The author's presentation of the question is not necessarily, or even best, the presentation that the person with the disability uses; it is only required that the author's presentation of the question have a strong correlation with the actual dependency of the user on the information.
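The transformed-query strategy just described might be sketched as follows (plain Python using the standard-library HTML parser; the list of vague phrases is an illustrative assumption, not any group's standard). The machine extracts each link's text and target and poses the judgment call to the author, rather than trying to settle it formally:

```python
from html.parser import HTMLParser

# Illustrative assumption: phrases that rarely preview a destination.
VAGUE = {"click here", "here", "more", "link", "read more"}

class LinkTextCollector(HTMLParser):
    """Collect (href, link text) pairs from an HTML document."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

def author_questions(html):
    """Return (href, text, needs_review) triples for the author to judge."""
    p = LinkTextCollector()
    p.feed(html)
    return [(href, text, text.lower() in VAGUE) for href, text in p.links]

page = '<p><a href="/wai/er">ER group charter</a> and <a href="/x">click here</a></p>'
for href, text, flagged in author_questions(page):
    status = "REVIEW" if flagged else "ok"
    print(f'{status}: does "{text}" preview {href}?')
```

Note that the machine never decides whether the link text is adequate; it only transforms the content into a form in which the human judgment is cheap to make.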
This agreement is in the full-content, semi-formal-language space, not in the formal subspace of validatable properties of the markup alone. Or so the theory goes. Again, the representative example has to do with how informative or orienting link text is. This is a question where, by processing the site content, you can make a display of this question which makes it clear whether the link text itself gets the job done or not.

This is the background of working assumptions in which I, at least, will be approaching this new technical option. Hope this rundown helps you in seeing where your opportunities to excel lie.

>> In fact I think that better heuristics to generate warnings, and simpler
>> approaches to the test which are relatively simple, are both going to be part
>> of the solution for a good accessibility testing (and repairing) aid.
>
>Indeed. But there are well-developed and well-known heuristic-based tools
>such as Bobby and Tidy. I'm not about to work on yet-another-one (unless
>perhaps someone pays me to do so), as reinventing an old wheel holds
>little interest to me.
>
>> As a final note, your opinion on whether it would be easy and / or useful to
>> generate a machine-readable output format (the EDL / EARL project that the ER
>> group is working on) would be interesting.
>
>Yes, I do envisage generating machine-readable output, though at what
>level remains to be determined. If I'm invited to join the ER group,
>- and perhaps if you suggest a URL for reading - I'll take a proper
>look at your project.
>

AG:: One good start-reading URL:

http://www.w3.org/WAI/ER/IG/earl.html

[Len, Wendy: would it be good to have a link from the group home page to this? Also update so the December 4/5 meeting is no longer 'future'? ..but don't look at PF public page...]

Have you familiarized yourself with the list archive at <http://lists.w3.org/Archives/Public/w3c-wai-er-ig/>?
That is a good way to get oriented to what is more recent than the web site but prior to the mail you now get when you subscribe.

Al

>--
>Nick Kew
>
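EARL was still being drafted when this message was written; purely as a concreteness aid, a machine-readable assertion about a failed link-text check might look something like the following, using the vocabulary the EARL work eventually standardized (the subject, asserter, and test URIs here are hypothetical):

```xml
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:earl="http://www.w3.org/ns/earl#">
  <earl:Assertion>
    <!-- Hypothetical tool and page URIs, for illustration only. -->
    <earl:assertedBy rdf:resource="http://example.org/tools/linkcheck"/>
    <earl:subject rdf:resource="http://example.org/page.html"/>
    <earl:test rdf:resource="http://example.org/tests/informative-link-text"/>
    <earl:result>
      <earl:TestResult>
        <earl:outcome rdf:resource="http://www.w3.org/ns/earl#failed"/>
      </earl:TestResult>
    </earl:result>
  </earl:Assertion>
</rdf:RDF>
```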
Received on Thursday, 28 December 2000 10:43:19 UTC