Re: Keyboard Navigation For Document Exploration In SVG 1.2

Hi Dave,

For a green square you could have something like:
Line: 10, 10, 20, 10 - color: green
Line: 10, 20, 20, 20 - color: green
Line: 10, 10, 10, 20 - color: green
Line: 20, 10, 20, 20 - color: green

Admittedly, that's a rather large set of attributes to process in order to
work out it's a green square.  You would have to use something like the V
Buffer to display them all, and, according to George Miller's 1956 paper on
short-term memory, we can only hold between five and nine chunks of
information in short-term memory at once.  So, whilst it's accessible, as
people can determine it's a green square, it's not very usable: working out
the spatial relationships between the lines in order to determine that the
shape is a square imposes an increased cognitive workload.
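
To make that workload concrete, here's a rough sketch, in Python, of the
check a listener has to carry out mentally.  The representation and the
function name are purely illustrative, not anything from SVG itself:

# Hypothetical representation of the four green lines above:
# each line is (x1, y1, x2, y2).
lines = [
    (10, 10, 20, 10),
    (10, 20, 20, 20),
    (10, 10, 10, 20),
    (20, 10, 20, 20),
]

def is_axis_aligned_square(lines):
    """Check whether four axis-aligned lines outline a square."""
    horizontal = [l for l in lines if l[1] == l[3]]
    vertical = [l for l in lines if l[0] == l[2]]
    if len(horizontal) != 2 or len(vertical) != 2:
        return False
    # All four edges must be the same length; Manhattan length is
    # fine here because each line is axis-aligned.
    lengths = {abs(l[2] - l[0]) + abs(l[3] - l[1]) for l in lines}
    if len(lengths) != 1:
        return False
    # The edges must actually meet: four distinct corners, each
    # shared by exactly two lines.
    endpoints = [(l[0], l[1]) for l in lines] + [(l[2], l[3]) for l in lines]
    corners = set(endpoints)
    return len(corners) == 4 and all(endpoints.count(c) == 2 for c in corners)

print(is_axis_aligned_square(lines))  # True

Holding eight coordinate pairs in mind while doing all of that is exactly
where the five-to-nine-chunk limit bites.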

However, if we look beyond speech as an output modality, it becomes a lot
easier.  As you're probably aware from my ASVS project, and to some extent
from the work of Evreinov and Meijer, you can display shapes using sound
pixels.  The basic concept is that you replace visual pixels of light with
auditory pixels of sound.  This just changes the communications channel used
to convey the information, but still allows the same perceptual techniques,
such as the Gestalt laws of perception, to be applied to the auditory
rendering as would be applied to the visual.  So, you could determine that
something was a line by having pixels grouped together with the same
horizontal alignment for vertical lines, and the same vertical alignment for
horizontal lines.  Then, by sounding the lines in parallel, it becomes a lot
easier and quicker to examine the spatial relationships between the lines
and determine it's a square.  Most attributes, such as font size, bold,
italic, etc., are just differences in spatial relationship, and having this
parallelism makes for easier determination of those relationships.
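
As a toy illustration of the sound pixel idea, here's a sketch in Python of
one possible mapping; it's a simplification in the spirit of Meijer's
column-scanning approach rather than the actual ASVS design.  Vertical
position maps to pitch, horizontal position maps to time, and lit pixels in
the same column sound together:

import math

SAMPLE_RATE = 44100
COLUMN_SECONDS = 0.05              # time spent sonifying each column
LOW_HZ, HIGH_HZ = 200.0, 4000.0    # pitch range assigned to the rows

def row_to_frequency(row, height):
    """Map row 0 (top) to HIGH_HZ and the bottom row to LOW_HZ."""
    return HIGH_HZ - (HIGH_HZ - LOW_HZ) * row / max(height - 1, 1)

def sonify(image):
    """Return audio samples (floats in [-1, 1]) for a 2D binary image."""
    height, width = len(image), len(image[0])
    per_column = int(SAMPLE_RATE * COLUMN_SECONDS)
    samples = []
    for col in range(width):
        lit = [r for r in range(height) if image[r][col]]
        for n in range(per_column):
            t = n / SAMPLE_RATE
            s = sum(math.sin(2 * math.pi * row_to_frequency(r, height) * t)
                    for r in lit)
            samples.append(s / max(len(lit), 1))  # normalise the mixture
    return samples

# A tiny 5x5 square outline, like the four green lines above.
square = [
    [1, 1, 1, 1, 1],
    [1, 0, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [1, 1, 1, 1, 1],
]
audio = sonify(square)

With this mapping the top and bottom edges come out as two steady tones
running through the whole scan, while the left and right edges are brief
chords at the start and end; that's exactly the sort of grouping the
Gestalt laws can latch onto.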

As for color, well, we usually have a hearing range from 20Hz to 20kHz.
We're capable of detecting changes in tonal frequency at around the 10Hz to
15Hz mark, and so that would give us around 1300 to 2000 different states
that could be signified through changes in frequency.  Build in mechanisms
for zooming, to overcome the lower definition of an auditory display
compared to a visual one, and something to simulate saccade movements, and
you've got an auditory equivalent to visual output.  Well, what you actually
have is the current thinking on my ASVS project *smile*, but it looks to
work in theory *grin*.
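
For what it's worth, the arithmetic behind that estimate is just the range
divided by the step size.  The sketch below assumes a constant
just-noticeable difference across the whole range, which is a
simplification, since real frequency discrimination varies with pitch and
loudness:

# Rough count of distinguishable frequency "states" across the
# 20Hz-20kHz hearing range at a given just-noticeable difference.
hearing_range_hz = 20_000 - 20
for jnd_hz in (10, 15):
    print(jnd_hz, "Hz steps ->", hearing_range_hz // jnd_hz, "states")
# 10 Hz steps -> 1998 states
# 15 Hz steps -> 1332 states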

So, I think that if we look beyond speech, having the user extract the
semantic meaning themselves becomes a lot more usable.  I noticed from Al's
draft agenda for this week's PF that tactile graphics are on there.  Maybe
this agenda item could be rescoped to include sonic graphics, although,
being output, it's probably more in the domain of UA.

Will
----- Original Message ----- 
From: "david poehlman" <david.poehlman@handsontechnologeyes.com>
To: "Will Pearson" <will-pearson@tiscali.co.uk>
Cc: "Protocolls and formats" <w3c-wai-pf@w3.org>
Sent: Sunday, November 28, 2004 4:53 PM
Subject: Re: Keyboard Navigation For Document Exploration In SVG 1.2


> Will,  For most people, the leap is too hard to make.  Can you send us an
> example of what we'd have to extract meaning out of in text form?  The
> entire discussion to this point follows:
>
> Johnnie Apple Seed
>
