Re: Accessing Graphics

At 9:17 PM +0000 12/6/04, Will Pearson wrote:
>Hi;
>
>Diagrams are a prevalent form of communication in contemporary 
>society.  They are used to explain task sequences, convey concepts, 
>even design interactions between classes in a UML sequence diagram, 
>yet they remain one of the last frontiers in the world of accessible 
>information.  This need not be the case; after all, diagrams are as 
>much a transport mechanism for meaning as the words on this page 
>are.

But diagrams are frequently used when the plot of the story to be 
told does not follow a single, linear
thread or loop-free topic tree.

So accessing a linear narrative, or a linear-plus-grouping 
book-structured treatise does not give us all
the precedents we need to deal with access to diagrams.

We are familiar with a variety of navigation modes from 
book-structured documents:

- serially through the whole thing in full detail

- serially through action opportunities via tabbing

- with seven-league boots through the titles of sub-topics of the
current topic, as in the DAISY table of navigation

- table navigation, up/down left/right inside a regular grid of
repetitive cells

I think that in accessing diagrams we need to recognize that there
are arcs linking the objects in the scene. Some pairs of objects are
diagrammatically presented as connected or related, while other pairs
are not.

So there is a new sub-function involved in what I call "graph
navigation": there needs to be a facile means to discover and
navigate to the strongly related objects, starting from the
currently focused object.
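
To make that concrete, here is a minimal sketch in Python of what
such a "graph navigation" sub-function might look like. The Diagram
class and the sample node names are hypothetical, not taken from any
existing API; the point is just that once arcs between objects are
represented, "what is the focused object connected to?" becomes a
cheap query.

  class Diagram:
      def __init__(self):
          self.labels = {}   # object id -> human-readable label
          self.arcs = {}     # object id -> set of related object ids
          self.focus = None  # currently focused object id

      def add_object(self, obj_id, label):
          self.labels[obj_id] = label
          self.arcs.setdefault(obj_id, set())

      def relate(self, a, b):
          # Record that a and b are diagrammatically connected.
          self.arcs[a].add(b)
          self.arcs[b].add(a)

      def related_to_focus(self):
          # Discover the strongly related objects of the current focus.
          return sorted(self.labels[n] for n in self.arcs.get(self.focus, ()))

      def move_focus(self, obj_id):
          # Navigate: make one of those objects the new focus.
          self.focus = obj_id
          return self.labels[obj_id]

  # A fragment of a UML-style sequence diagram treated as a graph.
  d = Diagram()
  d.add_object("client", "Client")
  d.add_object("server", "Server")
  d.add_object("db", "Database")
  d.relate("client", "server")
  d.relate("server", "db")
  d.move_focus("server")
  print(d.related_to_focus())   # ['Client', 'Database']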


>There are three main ways in which I consider diagrams can be made 
>accessible.  Each involves extracting the meaning from its 
>diagrammatic encoding, but they differ in where that decoding takes 
>place.
>
>A bit of communications theory.
>There are various communication models used to explain how people 
>communicate with one another and with technical communication 
>systems.  One of these, classed as a transmission model of 
>communication, is Claude Shannon's 1948 model, published in the 
>Bell System Technical Journal of that year.  Shannon proposed that 
>there were five stages to communication:
>1. A sender considers the meaning to be sent
>2. That meaning is encoded into some physical form
>3. The physical representation of the meaning is transmitted to a 
>receiver, using physical communication channels
>4. The physical representation is decoded to expose the transmitted meaning
>5. The receiver then absorbs the transferred meaning

This is fundamental.

We have this cycle now more integrated in the WAI public messaging. 
At least a toehold.  See

http://www.w3.org/WAI/intro/components

But we still have to follow through on this principle.

In WCAG, for example, it has to be clear that the
author-through-server chain is responsible for delivering something
that enables the user's control of presentation through the
application of user-configurable transforms in the User Agent.
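
A toy rendering of the five-stage cycle Will quoted, with the decode
step placed in the User Agent under user control, may make the point
clearer. This is only a sketch; the wire format and the two
presentation styles are invented for illustration.

  def encode(meaning):
      # Stages 1-2: author/server side, meaning -> physical (wire) form.
      return {"title": meaning["title"], "arcs": meaning["arcs"]}

  def transmit(wire):
      # Stage 3: stand-in for the physical channel (HTTP in practice).
      return dict(wire)

  def decode(wire, user_prefs):
      # Stage 4: the User Agent applies a user-configurable transform.
      if user_prefs.get("presentation") == "text":
          told = ["%s -> %s" % (a, b) for a, b in wire["arcs"]]
          return wire["title"] + ": " + "; ".join(told)
      return "<svg title='" + wire["title"] + "'>...</svg>"

  meaning = {"title": "Login flow", "arcs": [("Client", "Server")]}
  received = transmit(encode(meaning))
  # Stage 5: the receiver absorbs whichever rendering suits them.
  print(decode(received, {"presentation": "text"}))    # Login flow: Client -> Server
  print(decode(received, {"presentation": "visual"}))  # an SVG rendering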

>If we apply this to diagrams, the lines, colors, spatial 
>relationships are purely encoding, and are distinct and separate 
>from the meaning they convey. 

That's where I fall off the track. The use of 'purely.' Even in the
business-to-business world of electronic document interchange,
there's no purity to the encoding. And the encoding there is more
consistent, more strongly controlled, than in the bulk of the
X-to-consumer Web.

The encoding is the relation or mapping between percepts and
concepts. The percepts are not pure anything. One perceives what one
conceptually expects. It's all part of a coupled, recursive process
whether on the speaker's side emitting the 'communication act' or on
the hearer's side building an estimate of what the speaker was
thinking.

>Therefore, to get at the meaning, all that needs to be done is to 
>decode the physical representation of that meaning.

Once again, this exaggerates the precision of encoding in human
communication. Natural communication is full of what we call
allusion. You could call it fuzzy encoding.

While I have been saying that we need to make the cycle Will
described above the backbone of our model of Web communication, and
present accessibility in that context, we still need to go a step
further and recognize that web content is semi-formal. By this I
mean that there are strict models for some aspects of what is being
conveyed, but not all aspects.

A key plot point is that in Web communication there is a transform
being done at the client side from the wire format to the physical
presentation (and event acceptance) form. It is in many cases easier
for the user to control this client-side transformation than to
reach out and perturb what is being done on the server.
[But not always.]

>It's this decoding that causes problems in accessibility. 
>Psychology examines the process of receiving meaning in a bit more 
>detail.  According to psychology, we first receive sensory stimuli, 
>which can be in the form of waves, particles or contact with other 
>physical objects.  We then automatically group these into perceptual 
>groups, which in the world of diagrams would be the lines, shapes, 
>colors, words, etc. that form a diagram.  The final stage in this 
>process is for us to cognitively associate meaning with those 
>perceptual groups.

When the application is mailing the baby's picture to a grandparent,
most of the message is in the image; the concept that this is their
grandchild is part, but the smaller part.

But in diagrams, the message is symbolic and we have lots of ways to
represent or interactively browse said message.

>Examining the psychological process of receiving information, 
>there are two main problematic areas for accessibility.  Either people 
>can't receive the sensory stimuli due to physical, environmental or 
>other constraints, or they cannot cognitively associate meaning with 
>the perceptual groupings, which may be due to one of a number of 
>factors.

FWIW my recounting of this tale is at

http://trace.wisc.edu/docs/ud4grid/#_Toc495220368

>Semantics can resolve both of these issues.  If the semantics are 
>embedded as part of the physical transportation medium, then the 
>transferred meaning can be reassembled in any form suitable for the 
>user.  This could be a form that bypasses the problems, be they 
>physical, environmental or whatever in nature, that prevent the 
>user from receiving sensory stimuli, or it could be a form adapted 
>to allow the user to cognitively associate meaning with the 
>perceptual groups where they were unable to with the original 
>perceptual groupings.  Most people are familiar with the fact that 
>some people cannot receive certain types of sensory stimuli: the 
>blind cannot receive light waves, the deaf sound waves, and so on. 
>It may also be inappropriate for people to receive certain types of 
>stimuli; you need to look where you're going when walking, or you 
>may fall down some steps.  However, accessibility goes further than 
>just dealing with issues of disability, be it permanent or 
>physical; the ultimate aim of accessibility is to ensure everyone 
>can access meaning.  This includes adapting the encoding of the 
>physical representation, but not the type of stimuli used to encode 
>it.  For example, a blue line would yield no meaning to someone 
>unfamiliar with the UK's Ordnance Survey 1:50000 maps, yet it 
>represents a motorway.  This is because they haven't learnt the 
>particular set of symbolic encodings used in an OS 1:50000 sheet. 
>Through the use of semantic content adaptation, barriers such as 
>this lack of knowledge of the various symbolic encoding sets can be 
>overcome.
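
The Ordnance Survey example lends itself to a small sketch of that
kind of semantic content adaptation. The legend entries below are
invented for illustration; the point is that when a feature carries
its meaning as well as its symbolic encoding, the user agent can
present whichever the reader can make sense of.

  # Map the symbolic encoding (how a feature is drawn) to its meaning.
  LEGEND = {
      ("line", "blue"): "motorway",
      ("line", "green"): "primary road",
      ("dashed line", "red"): "public footpath",
  }

  def describe(feature, knows_the_legend):
      shape, colour = feature["shape"], feature["colour"]
      if knows_the_legend:
          # The symbolic presentation is enough for this reader.
          return "a " + colour + " " + shape
      # Otherwise decode it and present the meaning instead.
      return LEGEND.get((shape, colour), "an unexplained symbol")

  m4 = {"shape": "line", "colour": "blue"}
  print(describe(m4, knows_the_legend=True))    # a blue line
  print(describe(m4, knows_the_legend=False))   # motorway
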
>
>Semantic content need not necessarily be encoded in the physically 
>transported content; it can be gained after transportation.  The 
>final stage of the psychology sequence involves associating 
>cognitive meaning with sensory stimuli, or in other words, 
>extracting the semantics from the content.  This process can be 
>automated by intelligent agent software that has been taught the 
>encoding techniques used in a particular diagrammatic context, and 
>the set of extracted semantics can then be re-encoded as if the 
>semantics were originally embedded within the transported physical 
>representation.
>
>Finally, and to me the most fun, as I've been working on this in 
>industry, is adaptation of the sensory stimuli itself.  This is only 
>suitable for those unable to receive the sensory stimuli for 
>whatever reason, and involves converting diagrams and images into a 
>form of sensory stimuli that the intended receiver can receive. 

Try ... converting a web dialog containing diagrams into an alternate 
dialog that communicates...

A lot of what I have had to say about access to problematic 
presentations, both transit timetables and tax-preparation 
flowcharts, has focused on what is known in the trade as 'equivalent 
facilitation': using interaction as a resource to eliminate the need 
to present an acyclic graph as essential to the task at hand. 
Specifically, getting the user to input where they want to go, and 
from where, means that the server can present a short list of route 
plans, each of which is a linear story, rather than a route map or a 
timetable that takes a lot of skill to navigate.  Likewise, the 
flowchart was an inferior way to explain the logical flow through 
the preparation of a tax form; activating the individual decision 
questions with hyperlinks provided a superior explanation.  The 
availability of active navigation eliminated the need for flowlines 
in the graphic.  The story unfolds through a dialog rather than by 
reconstructing it from a path traced through the diagram.

http://www.w3.org/WAI/RD/2004/06/28-agenda.html
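
As a toy version of the route-planning example: rather than hand the
user the network as a diagram to navigate, ask for the endpoints and
answer with linear narratives. The network below is invented, and the
search is just a breadth-first enumeration of loop-free paths; a real
planner would do much more, but the shape of the output is the point.

  from collections import deque

  NETWORK = {
      "Home":      ["Station A"],
      "Station A": ["Home", "Station B", "Station C"],
      "Station B": ["Station A", "Work"],
      "Station C": ["Station A", "Work"],
      "Work":      ["Station B", "Station C"],
  }

  def route_plans(origin, destination, limit=3):
      # Return up to `limit` routes, each told as one linear sentence.
      plans, queue = [], deque([[origin]])
      while queue and len(plans) < limit:
          path = queue.popleft()
          if path[-1] == destination:
              plans.append("Go from " + ", then to ".join(path) + ".")
              continue
          for nxt in NETWORK[path[-1]]:
              if nxt not in path:        # keep each story loop-free
                  queue.append(path + [nxt])
      return plans

  for plan in route_plans("Home", "Work"):
      print(plan)
  # Go from Home, then to Station A, then to Station B, then to Work.
  # Go from Home, then to Station A, then to Station C, then to Work.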

But this thread should be about access to the diagrams, with a brief 
nod to the alternative dialogs for the
cases where the authoring side should go there first.

We should review the Bulatov work: where it gets us, and what needs 
to be done next.

http://www.svgopen.org/2004/papers/SVGOpen2004MakingGraphicsAccessible/


>However, as this is intended more for a protocols and formats 
>audience and not an ATIA one, I'll leave the detail out for this.
>
>Will

Note:

I would draw a distinction between 'diagrams' and 'graphics.'

'Diagram' connotes something symbolic, where there is a fairly high
level of abstraction in the presentation of the objects in the scene.
'Graphic' has the opposite spin: it focuses on the presentation
details in the pixel plane. There is an intersection, but for our
purposes it is important to distinguish 'diagrams' from general
graphics. In fact this is one of our problems in dealing with
accessibility and SVG. The rules for diagrams are pretty strong, but
if we try to say that all SVG drawings should be that model-based, we
will have eliminated enough of the market for SVG that this could be
a fatal blow.

Received on Wednesday, 8 December 2004 15:59:56 UTC