RE: Discussion of DOM with Glen Gorden of Henter-Joyce (A)

Please read my comments in between, marked with ***

>>> Charles Oppermann <chuckop@microsoft.com> 04-02-99 21.00 >>>
This kind of ISV activity is excellent and sorely needed in this working group.  Hopefully the Working Group can get the input of accessibility aid vendors that do not have off-screen models to read information off the screen.  Products such as voice-input, switch, and scanning systems and learning disabled tools all need information about the document and the user interface it provides.  These products cannot read information off the screen and must get the data programmatically.

Some comments...

<<
JRG 1. From a technology standpoint, would you rather rely primarily on Active Accessibility objects for knowing about information on the screen, or use something like the DOM, which is more a model of the underlying content of what is being rendered?

GG: DOM!
>>

Unfortunately, this question shows a lack of knowledge of what Active Accessibility provides and unnecessarily limits the scope of the technology.

If I were asked that question and didn't know better, I'd have the exact same response!

Active Accessibility is not just a screen-based model; current versions expose the structure of HTML documents, not just what is present on the screen.  Active Accessibility is not limited to HTML, as the DOM is.  It's an interface for interacting with the user interface as a whole.

*** MSAA is just one DOM, and it is screen-based.
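
For readers who haven't worked with the API, a rough sketch of the kind of generic traversal Active Accessibility offers an aid follows.  This is an illustration only, under assumptions (COM already initialized, a valid window handle in hand, oleacc.lib linked); the function and variable names are made up for the example, they are not from any shipping product.

#include <windows.h>
#include <oleacc.h>
#include <stdio.h>

// Sketch: enumerate the immediate accessible children of a window through
// Active Accessibility.  hwndBrowser is a placeholder for a window handle
// the aid has already located; COM is assumed to be initialized.
void DumpChildren(HWND hwndBrowser)
{
    IAccessible *pAcc = NULL;
    if (FAILED(AccessibleObjectFromWindow(hwndBrowser, OBJID_CLIENT,
                                          IID_IAccessible, (void **)&pAcc)))
        return;

    long cChildren = 0;
    pAcc->get_accChildCount(&cChildren);

    VARIANT *rgChildren = new VARIANT[cChildren];
    long cObtained = 0;
    if (SUCCEEDED(AccessibleChildren(pAcc, 0, cChildren, rgChildren, &cObtained)))
    {
        for (long i = 0; i < cObtained; i++)
        {
            // Simple children come back as child IDs (VT_I4); full objects
            // (VT_DISPATCH) would need their own QueryInterface, skipped here.
            if (rgChildren[i].vt == VT_I4)
            {
                BSTR bstrName = NULL;
                if (SUCCEEDED(pAcc->get_accName(rgChildren[i], &bstrName)) && bstrName)
                {
                    wprintf(L"child: %s\n", bstrName);
                    SysFreeString(bstrName);
                }
            }
            VariantClear(&rgChildren[i]);
        }
    }
    delete [] rgChildren;
    pAcc->Release();
}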

Glen's response is different from what he has told me personally in separate discussions, but that's probably because of the limited scope of the question.

Also, it's my position that if the vendor is willing to spend more time and special-case HTML, then more power to them!  A native object model, be it the DOM, Word's, Excel's, or Visio's, is much richer than the generic interface provided by Active Accessibility.  Often, though, native object models are not designed with the needs of accessibility aid vendors in mind.  We improved Microsoft Office 2000's various object models to provide more information to accessibility aids.

<<
JRG 2. Would it be useful for you to provide your own speech or Braille rendering using the DOM, with synchronized visual highlighting of what you were rendering on the visual display?
This would allow sighted colleagues to see where the information being spoken is coming from on the screen.

GG: This would be an interesting thing to experiment with and offers promise.
>>

Several companies already do this with alternative input software and learning disabled products using Microsoft technologies.

*** Everybody should do what he does best (e.g. IE/Netscape do visual rendering, an audio browser does audio rendering) and should not need to redo what others have already done.

<<
JRG 3. Do you see the lack of visual display positional rendering information as one of the main weaknesses of the current DOM?

GG: Yes!
>>

This is not an issue with Active Accessibility, and the information is also available in the Dynamic HTML object model.  Hopefully the DOM WG will adopt it.

*** Rephrasing the question like this shows the absurdity:
JRG 3b. Do you see the lack of audio rendering information as one of the main weaknesses of the current DOM?
*** Visual rendering information could enrich any DOM, but so could any other specialized information.  We are so visually oriented ...
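
For concreteness, here is a small sketch of how positional information can be retrieved through Active Accessibility today.  The IAccessible pointer is assumed to have been obtained elsewhere (for example by the traversal sketched above), and the names are placeholders for the example only.

#include <windows.h>
#include <oleacc.h>
#include <stdio.h>

// Sketch: ask Active Accessibility for the screen rectangle of an object the
// aid already holds.  pAcc is assumed to have been obtained elsewhere.
void PrintLocation(IAccessible *pAcc)
{
    VARIANT varSelf;
    VariantInit(&varSelf);
    varSelf.vt = VT_I4;
    varSelf.lVal = CHILDID_SELF;      // ask about the object itself

    long x = 0, y = 0, cx = 0, cy = 0;
    if (SUCCEEDED(pAcc->accLocation(&x, &y, &cx, &cy, varSelf)))
        printf("element at (%ld,%ld), size %ld x %ld, in screen coordinates\n",
               x, y, cx, cy);
}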

<<
JRG 4. Are there other features that would make the DOM useful for your purposes?

GG: None come immediately to mind.
>>

A feature that Glen's product doesn't currently need, but others might find useful, is the ability to get an element based on an X/Y coordinate, used for hit testing.  This is vital for screen readers that use the content of the DOM - not just manipulate it for a different visual display.

*** OK, we SCREEN readers need to know coordinates.  But we also need the separation between structure, form, and content, and we also need a standard.
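
To make the hit-testing point concrete, on the Active Accessibility side it is a single call; a sketch follows (illustration only, with made-up function and variable names).  The Dynamic HTML object model exposes a similar elementFromPoint call for HTML content.

#include <windows.h>
#include <oleacc.h>
#include <stdio.h>

// Sketch: hit testing - find the accessible object under a screen coordinate.
void WhatIsAtPoint(long xScreen, long yScreen)
{
    POINT pt = { xScreen, yScreen };
    IAccessible *pAcc = NULL;
    VARIANT varChild;
    VariantInit(&varChild);

    if (SUCCEEDED(AccessibleObjectFromPoint(pt, &pAcc, &varChild)))
    {
        BSTR bstrName = NULL;
        if (SUCCEEDED(pAcc->get_accName(varChild, &bstrName)) && bstrName)
        {
            wprintf(L"under the point: %s\n", bstrName);
            SysFreeString(bstrName);
        }
        VariantClear(&varChild);
        pAcc->Release();
    }
}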


Charles Oppermann
Program Manager, Accessibility and Disabilities Group, Microsoft Corporation
mailto:chuckop@microsoft.com 
http://www.microsoft.com/enable/
"A computer on every desk and in every home, usable by
everyone!"

-----Original Message-----
From: Jon Gunderson [mailto:jongund@staff.uiuc.edu]
Sent: Thursday, February 04, 1999 10:43 AM
To: w3c-wai-ua@w3.org
Subject: Discussion of DOM with Glen Gorden of Henter-Joyce


My questions are marked with JRG and responses from Glen Gorden are marked with GG.  Glen is a developer at Henter-Joyce.

JRG: From your perspective, what type of interface would you ideally want to use to get information about a WWW document?  Could you answer the following questions?

JRG 1. From a technology standpoint, would you rather rely primarily on Active Accessibility objects for knowing about information on the screen, or use something like the DOM, which is more a model of the underlying content of what is being rendered?

GG: DOM!

JRG 2. Would it be useful for you to provide your own speech or Braille rendering using the DOM, with synchronized visual highlighting of what you were rendering on the visual display?
This would allow sighted colleagues to see where the information being spoken is coming from on the screen.

GG: This would be an interesting thing to experiment with and offers promise.

JRG 3. Do you see the lack of visual display positional rendering information as one of the main weaknesses of the current DOM?

GG: Yes!

JRG 4. Are there other features that would make the DOM useful for your purposes?

GG: None come immediately to mind.

End of questions and answers.

Jon Gunderson, Ph.D., ATP
Coordinator of Assistive Communication and Information Technology
Division of Rehabilitation - Education Services
University of Illinois at Urbana/Champaign
1207 S. Oak Street
Champaign, IL 61820

Voice: 217-244-5870
Fax: 217-333-0248
E-mail: jongund@uiuc.edu
WWW:	http://www.staff.uiuc.edu/~jongund
	http://www.als.uiuc.edu/InfoTechAccess
