Re: Requirements for Accessibility Description Language (ADL?)

Probably what you talked about on the telecon is a subset of the
requirements I see.

We seem to need descriptive language

a. which lets us talk about things that happen at different interfaces, at
least two different interfaces.  We need to be able both to describe what is
happening at each interface in its own native phenomenology, and to relate
what is happening at the two interfaces in terms of patterns of correlation
which describe machine-implementable processing and the standard
interpretation of markup expressed as constraints on that processing.  The
two interfaces of most immediate concern would be the immediate
human-with-computer interface, and the interface described by the DOM, or
any similar published reference interface within the computational model, to
a data structure reconstituted from the string/stream form of some
information [e.g. the DOM interface to an XML-encodable document, etc.].
For a reference discussion of the interaction world, centered on the
human-with-computer interface, see also

 HCI Fundamentals and PWD Failure Modes
 http://trace.wisc.edu/docs/ud4grid/#_Toc495220368

Note that both the web content group and the user agent group have at times
tended to obsess on one or the other of these two interfaces, and to obscure
the distinction between them as though the two were automatically synonymous.
But they are not synonymous, and the relationship between them is the subject
of user agent processing and of format specification requirements on user
agent processing.  In order to relate the protocols and formats work on
format specifications to our overall descriptive capability, it is important
to look at user agent processing of format specifications as bridging these
two interfaces, imposing some required correlation between them.  It never
gets to the point that what happens at the other interface is bit-for-bit
reproducible and automatic across all implementations, however.

b. which allows us to talk both about documents, that is to say partial
reports of the web of information communicated as a unit, and about dialogs
or click streams, the sequences of small interactions which collectively
accomplish some recognizable purpose.
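To make the distinction concrete, here is a minimal sketch of the two description targets in Python; the names (Document, Step, Dialog) and fields are illustrative assumptions, not vocabulary from any published draft:

```python
# Hypothetical sketch: the two things an ADL would need to describe.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Document:
    """A unit report from the web of information, e.g. one page."""
    uri: str
    content: str

@dataclass
class Step:
    """One small interaction: an action at an interface and its response."""
    action: str      # e.g. "activate link", "fill field", "submit form"
    target: str      # the node or control the action applies to
    response: str    # what the system did in reply

@dataclass
class Dialog:
    """A click stream: steps that collectively accomplish a purpose."""
    purpose: str
    steps: List[Step] = field(default_factory=list)

search = Dialog(
    purpose="submit search form",
    steps=[
        Step("fill field", "input#q", "text echoed in field"),
        Step("submit form", "form#search", "results page loaded"),
    ],
)
```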

At 05:01 PM 2000-11-10 -0500, Leonard R. Kasday wrote:
>This is a first list of questions re  requirements for an "accessibility 
>description language", based on the discussion in the joint ER/AU meeting [1]
>
>This can be an outline for item 2 of our Monday telecon [2]
>
>- what shall we call this?  How about "Accessibility description Language" 
>(ADL).
>
>To which of the following should it apply?
>
>  XHTML
>  Any XML application, e.g. SVG?
>  HTML
>  HTML with syntax errors (what severity of errors can be allowed?)
>  CSS
>  ECMA Script
>  Other programming languages
>  CDATA
>

The language should cover HCI and the software encoding of HCI such that any
required rules concerning structures and attributes within the HCI are
machine-interpretable in the encoded form.

>What level should description point to?  Characters? Tokens? Tags?  ("tags" 
>is html and xml specific... what about other parsable languages?)

The units in which the user perceives the interaction, and the units in which
the software interface manages the bits and references in the DOM or similar
API.

>
>Shall we include application testing specifications, e.g. a description of 
>steps
>   activating a link
>   filling in a form
>   submitting a form
>in addition to accessibility in the result?

Long-winded spelling of 'yes':  Since many accessibility failures can be
traced to violations of the usability rule "the response of the system
should be predictable by the user," it becomes necessary to have enough
dynamic description capability so that patterns of interaction which are and
are not predictable can be described, and their discriminant patterns (how
to tell which of them you have in front of you) articulated.

My answer is 'yes' in spirit, but note that I would object to the "in addition
to" wording.  This stuff is part of accessibility, not in addition to
accessibility.  Accessibility is not limited to static constraints that can be
evaluated in the context of a single page 'document.'
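One way to picture such a discriminant pattern is as a predicate over an interaction-step description; the field names below are illustrative assumptions only:

```python
# Hypothetical sketch: classify an interaction step as predictable or not.
# A step is predictable when the response the user was led to expect is
# declared and matches what actually happened; undeclared surprises
# (new windows, silent redirects) fail the test.

def is_predictable(step):
    declared = step.get("declared_response")
    actual = step.get("actual_response")
    return declared is not None and declared == actual

good = {"action": "activate link",
        "declared_response": "navigate to /help",
        "actual_response": "navigate to /help"}

bad = {"action": "activate link",
       "declared_response": None,               # nothing told the user
       "actual_response": "opened new window"}  # surprise response
```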

>
>Include summary statistics in output?
>
>Combine results of different tools?  Retain what tool said what?
>
>Include history of what was checked (e.g. "this alt text is OK according to 
>a human")

Yes.  And the reference model for the information will have an optional slot
here for who, or what kind of a who, the observer was that made this
determination.
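A sketch of such an observation record, with the optional observer slot; the record shape and field names are assumptions for illustration:

```python
# Hypothetical observation record with an optional observer slot.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Observation:
    subject: str                    # what was checked, e.g. an ALT attribute
    checkpoint: str                 # which rule was applied
    verdict: str                    # "pass" / "fail" / "needs-human"
    observer: Optional[str] = None  # who, or what kind of a who, decided

machine = Observation("img#logo/@alt", "WCAG 1.1", "needs-human")
human = Observation("img#logo/@alt", "WCAG 1.1", "pass",
                    observer="human reviewer")
```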

>
>How robust in face of changed source?  I.e. if source changes, how much of 
>previous description can carry over?

Observation data generally carries a precise reference that no longer refers
to the document once it has undergone substantive change [modulo comments
about canonicalization, which insulates against transport-induced
variability that is not substantive in its effect].
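The canonicalization idea can be sketched as fingerprinting the canonical form rather than the raw bytes, so an observation's reference survives non-substantive change; the whitespace-only normalization here is an illustrative assumption:

```python
# Hypothetical sketch: canonicalize away transport-induced variation
# (here, only runs of whitespace) before fingerprinting, so trivial
# reflowing does not invalidate a recorded observation.
import hashlib
import re

def canonical_fingerprint(text: str) -> str:
    """Collapse whitespace runs, then hash the canonical form."""
    canonical = re.sub(r"\s+", " ", text).strip()
    return hashlib.sha1(canonical.encode("utf-8")).hexdigest()

a = canonical_fingerprint("<p>Hello   world</p>")
b = canonical_fingerprint("<p>Hello\n  world</p>")  # reflowed in transit
c = canonical_fingerprint("<p>Goodbye world</p>")   # substantive change
```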
>
>How scalable? Just pages?  Whole site?  Results of several independent 
>processes running on site?
>

Boolean and arithmetic rollups should be supported in what one can say.
This part is transparent: the rollup methods operate in the standard
mathematical domains of Boolean logic and numerical arithmetic.
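A sketch of what such rollups look like, combining per-page results into site-level values; the result shape is an assumption, not a defined ADL structure:

```python
# Hypothetical sketch: Boolean and arithmetic rollups over page results.

def rollup(pages):
    """Combine page-level checks into site-level summary values."""
    return {
        # Boolean rollup: the site passes only if every page passes.
        "all_pass": all(p["pass"] for p in pages),
        # Arithmetic rollups: ordinary counting and summation.
        "pages_checked": len(pages),
        "total_errors": sum(p["errors"] for p in pages),
    }

site = rollup([
    {"uri": "/", "pass": True, "errors": 0},
    {"uri": "/about", "pass": False, "errors": 3},
])
```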

See the work of Graham Klyne cited in the IETF CONNEG archives on logical
derived attributes, or pattern expressions involving multiple properties of
multiple entities.

Note, however, that Graham is working in CC/PP in RDF at the moment without
necessarily using all this algebraic structure.

>What tools will read output?
>    - evaluation tools
>    - authoring tools
>
>Is output useful for user agents?

No, but the description language should be usable for describing repair
methods employed in User Agents.  The data reduction one uses in feeding
back to the author is not the same as what one would use to feed forward to
the user.  The trap used to tell the author there may be a problem is
stricter than the trap that triggers repair in the user agent.  On the other
hand, the filter that develops hints to the author as to what might be an
appropriate value [e.g. ALT] to include is looser, more permissive, than a
repair filter that can be used in a User Agent, where the result is going to
be all that the user has; the author never got to look at it before it goes
to the user.  But all these filters should be describable in a common
language.
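The strict-versus-loose relationship can be sketched as one common filter description applied at different thresholds; the scoring function and threshold values are illustrative assumptions:

```python
# Hypothetical sketch: a single ALT-text plausibility filter, applied
# strictly for author feedback and permissively for user-agent repair.

def alt_text_score(alt):
    """Crude plausibility score for an ALT value (0.0 = useless)."""
    if alt is None or not alt.strip():
        return 0.0
    if alt.strip().lower() in ("image", "picture", "graphic"):
        return 0.2   # placeholder text: dubious but not empty
    return 1.0

AUTHOR_WARN_BELOW = 1.0  # strict: nag the author about anything dubious
UA_REPAIR_BELOW = 0.1    # loose: repair only what is clearly unusable

def author_should_warn(alt):
    return alt_text_score(alt) < AUTHOR_WARN_BELOW

def ua_should_repair(alt):
    return alt_text_score(alt) < UA_REPAIR_BELOW
```

The same description, two thresholds: the author hears about "image" as an ALT value, while the user agent leaves it alone and repairs only the empty case.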

The connection here is that the logical basis for any attribute such as
"fails AERT n.m.x.y" should be on our worklist to define in a
machine-interpretable way.  Within the framework of the descriptive language
(if not in the initially published material), it should be expandable into a
method specification such that, if a program were to execute this method, it
would come to the same conclusion that we had annotated.

>
>What list of checkpoints to point to?  Where maintained?

You don't just reference human-interpretable rule descriptions.  As
described above, the language needs the capability to refer to
machine-interpretable filter rule specs.
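A sketch of what a machine-interpretable reference could resolve to; the registry, the rule identifier, and the predicate are illustrative assumptions, not AERT content:

```python
# Hypothetical sketch: checkpoint identifiers resolve to executable
# predicates, so a program can re-derive a recorded conclusion rather
# than just display a human-readable rule description.

RULES = {
    # illustrative identifier -> executable filter rule
    "img-has-alt": lambda node: "alt" in node,
}

def evaluate(rule_id, node):
    """Re-run the referenced rule and reach the annotated conclusion."""
    return RULES[rule_id](node)
```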

>
>files could be distributed in several files... in one file...  so human 
>readable
>
>What variations in input cause no change in  pointers?
>    added white space? Case?  attribute order?

This is, unfortunately, probably not something that is fit to be defined
once and for all in the formal grammar of the descriptive language.  It is
more that don't-care equivalence filters are libraries used in describing
media, and a class of knowledge that grows over time within the application
of the language.  So we need some sort of space-quotient or "modulo"
expression capability in the language.

For example, the usable color information is the 24-bit encoded RGB color
modulo the coarser of the resolution supported by the video board and the
user's eyes.
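The color case can be sketched as quantizing each channel onto a coarser equivalence class; the step size standing in for "the coarser resolution" is an illustrative assumption:

```python
# Hypothetical sketch of the "modulo" idea: two 24-bit RGB values are
# equivalent modulo the coarser resolution the display or the user's
# eyes can actually distinguish.

def quantize(rgb, step=32):
    """Map a 24-bit RGB triple onto a coarser equivalence class."""
    return tuple(channel // step for channel in rgb)

def same_modulo_resolution(rgb_a, rgb_b, step=32):
    return quantize(rgb_a, step) == quantize(rgb_b, step)

# (200, 10, 10) and (205, 12, 8) fall in the same coarse class;
# (200, 10, 10) and (10, 200, 10) do not.
```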



>
>[1] http://lists.w3.org/Archives/Public/w3c-wai-er-ig/2000Nov/0015.html
>[2] http://lists.w3.org/Archives/Public/w3c-wai-er-ig/2000Nov/0017.html
>--
>Leonard R. Kasday, Ph.D.
>Institute on Disabilities/UAP and Dept. of Electrical Engineering at Temple 
>University
>(215) 204-2247 (voice)                 (800) 750-7428 (TTY)
>http://astro.temple.edu/~kasday        mailto:kasday@acm.org
>
>Chair, W3C Web Accessibility Initiative Evaluation and Repair Tools Group
>http://www.w3.org/WAI/ER/IG/
>
>The WAVE web page accessibility evaluation assistant: 
>http://www.temple.edu/inst_disabilities/piat/wave/
>  

Received on Saturday, 11 November 2000 11:18:02 UTC