- From: William Loughborough <love26@gorge.net>
- Date: Wed, 01 Nov 2000 12:01:06 -0800
- To: Al Gilman <asgilman@iamdigex.net>, <w3c-wai-gl@w3.org>
At 02:24 PM 11/1/00 -0500, Al Gilman wrote:
>To get the content-sourcing community to understand the model, and to be
>able to map their message into this framework, you pretty much have to
>confront them with the
>divergent demands of different interface situations, and challenge them to
>do the compare-and-contrast across what they would present as customer
>interfaces in each.
When I was working on a touch tablet screen access system, I took the advice
I now blithely give: "unplug the monitor". The experience of doing this for
a reasonably extended period changed everything.
I often watch blind guys do computer stuff using synthesizers, most
recently listening to Gregory doing his thing for several hours. I try to
get the "content-sourcing community" to actually use these systems rather
than just trying to imagine what it's like. They don't do it. They think
they can "understand the model" but it must be experienced, not envisioned.
Essentially it's serial access in whatever guise. It's not as bad as
"reading through a straw" but it's more like that than we'd care to admit.
When I (blithely? dismissively?) considered the proposed schemes (to have
several docs instead of trying to have several views of one semantic
underpinning) to be pie-in-the-sky, I wasn't referring to their feasibility but
only to their "vaporness". The "concrete example" of a non-conformant bank
display vs. a functioning automated telephone "help" system didn't change
the issue.
Al's "With the best accessible site design and the best assistive
technology, you can make information retrieval eyes-free almost as usable
as a voice portal designed for use in audio from the ground up" is what I'm
claiming to be the case, only I'd leave out the "almost" because, to me,
usability is inescapably linked to speed, and any
"best...design...technology" will win the race hands down. Telephone menu
systems are, and will for the foreseeable future be, absurdly slow and stupid.
As to the quote from Raman, here's one from his position paper for the Hong
Kong Device Independent Workshop:
Single authoring for a multiplicity of interfaces and deployment
environments necessarily involves addressing of issues of presentation
specific to each channel e.g., designing the look and feel for the visual
presentation, the sound and feel for the auditory representation. We
believe that a single authoring framework should allow these concerns to be
cleanly separated so that:
- Content can be created and maintained without presentation concerns.
- Presentation rules --including content transformations and style
  sheets-- can be maintained for specific channels and deployment
  environments without adversely affecting other aspects of the system.
- Content and style can be independently maintained and revised.
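A minimal sketch (mine, not Raman's) of what that separation looks like in
practice: one content object, and independent presentation rules per
channel. The train-departure example and all the names in it are made up
for illustration.

from dataclasses import dataclass

@dataclass
class TrainDeparture:
    # The "semantic underpinning": content with no presentation concerns.
    destination: str
    platform: int
    time: str  # "HH:MM"

def render_visual(dep: TrainDeparture) -> str:
    # Presentation rule for a visual channel: terse, scannable board row.
    return f"{dep.time}  {dep.destination:<12}  Plat {dep.platform}"

def render_audio(dep: TrainDeparture) -> str:
    # Presentation rule for an auditory channel: a full spoken sentence.
    return (f"The {dep.time} train to {dep.destination} "
            f"departs from platform {dep.platform}.")

if __name__ == "__main__":
    dep = TrainDeparture(destination="Bristol", platform=4, time="14:25")
    print(render_visual(dep))   # visual channel
    print(render_audio(dep))    # auditory channel

The content object knows nothing about either channel; each rendering rule
can be revised, or a new channel added, without touching the others.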
His paper at http://www.w3.org/2000/09/Papers/IBM.html is worth reading in
the context of this discussion, as are many of the presentations at this and
the Bristol events. Although there was considerable concern that the "single
source" vision might be clouded, the idea that there can be an underlying
content/semantics with subsequent realizations via style/structure
manipulations is still, IMO, the prevalent view. There can be many views of
reality, but we pretty much assume that despite "Rashomon" interpretations
there is only one reality. Heisenberg/Gödel to the contrary notwithstanding,
we pretty much agree that the train schedule is the train schedule and the
bank balance the bank balance.
To summarize all this multivergent stuff: the ideal of having one
semantically "complete" source object, from which almost any number of
versions may be derived, is still alive and well.
--
Love.
ACCESSIBILITY IS RIGHT - NOT PRIVILEGE
Received on Wednesday, 1 November 2000 14:59:55 UTC