
Re: Enumeration of EventListeners in DOM Level 3 Events

From: Curt Arnold <carnold@houston.rr.com>
Date: Tue, 18 Dec 2001 14:05:03 -0600
Message-ID: <006201c187ff$485959b0$7600a8c0@CurtMicron>
To: "Richard Schwerdtfeger" <schwer@us.ibm.com>
Cc: <www-dom@w3.org>
From reading the conference call log, it seems that Ray and Philippe are
getting all the right people together and that they share my concerns with
the current working draft.

Comments inline:

> >So, say you had an SVG drawing embedded in an HTML page that connected
> >mouse-over events in the drawing with a Java applet and a .NET applet;
> >you'd expect one accessibility tool to figure out what type of mouse
> >movement you would need to fake to bring up a specific dialog embedded
> >in one of the applets.
> My primary concern is JavaScript. JavaScript is probably the single most
> frustrating problem for disabled users today and it is on almost every web
> page you go to. The scenario you are talking about is so remote I can't
> fathom why you would want to bring it up. I can't remember the last time I
> used an SVG viewer on the web much less a .NET applet.

Yes, but the DOM is more general than just JavaScript and HTML.  Though the
scenario is contrived, it is definitely not far off from things I'm doing
today, embedding SVG in desktop applications.

> However, long term, there needs to be a mechanism for describing what the
> function is that you are trying to perform. When I co-architected the Java
> Accessibility API with Sun, we provided a mechanism for listing the
> accessible actions of an object in Java. Each action had a description and
> a pre-defined action. We extended this to documents to be able to
> enumerate the list of links available in a document and their descriptions.
> The Self Voicing Kit we developed at IBM allowed us to pop up a dialog box
> of the functions and have the user select the one they wanted to execute.
> It ran outside the application being spoken, but the API provision allowed
> us to extract the options and allow the user to activate them.

That approach where a document discloses actions seems a lot more practical
than trying to decipher the intent of potentially compiled or remote code.

> Specifying exactly how you implement each solution (SVG, .NET applet, Java
> applet) requires a great deal of long term work that you are not going to
> solve today.
> >The mutation events contain both the old and new value, spurious mutation
> >events could have serious side-effects.
> Fair enough. But to be honest, I don't know why anyone would want to
> intentionally fire a mutation event and if they did then let them suffer
> the consequences.

But as an application developer who uses the DOM primarily from Java, and
generally not for UI work, I would now have to program much more defensively
once there are obvious ways for a hostile agent to attack my app.  For
example, I might feel the need to repeatedly re-add my event listener, since
being able to enumerate the listeners would allow external code to remove
my listener without my approval or knowledge.
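To make that worry concrete, here is a minimal sketch of my own (nothing
from the spec): a stub event target implementing the DOM Level 2 rule that
registering the same listener twice with the same capture flag is a no-op,
plus the kind of defensive re-registration I'd be pushed into.  StubTarget
and ensureListener are hypothetical names for illustration only.

```javascript
// Stub event target with DOM Level 2 add/remove semantics: adding the
// same (type, fn, capture) triple twice is a no-op, so defensive
// re-registration is cheap and never duplicates a listener.
function StubTarget() {
  this.listeners = [];
}
StubTarget.prototype.addEventListener = function (type, fn, capture) {
  for (var i = 0; i < this.listeners.length; i++) {
    var l = this.listeners[i];
    if (l.type === type && l.fn === fn && l.capture === capture) return;
  }
  this.listeners.push({ type: type, fn: fn, capture: capture });
};
StubTarget.prototype.removeEventListener = function (type, fn, capture) {
  this.listeners = this.listeners.filter(function (l) {
    return !(l.type === type && l.fn === fn && l.capture === capture);
  });
};
StubTarget.prototype.dispatchEvent = function (evt) {
  this.listeners.forEach(function (l) {
    if (l.type === evt.type) l.fn(evt);
  });
};

// The defensive pattern: before anything important happens, re-assert
// that our listener is still attached (idempotent, so safe to repeat).
function ensureListener(target, type, fn) {
  target.addEventListener(type, fn, false);
}

var target = new StubTarget();
var fired = 0;
function onClick(evt) { fired++; }

ensureListener(target, "click", onClick);
// Hostile code that enumerated the listener list could strip it:
target.removeEventListener("click", onClick, false);
// Defensive re-add restores it before dispatch:
ensureListener(target, "click", onClick);
target.dispatchEvent({ type: "click" });
console.log(fired); // 1
```

The idempotence of addEventListener is what makes this defense tolerable
at all; without it, blind re-adding would stack up duplicate callbacks.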

> >a) Is the desire to make it possible for an accessibility tool to make
> >behavior embedded in hostile web content accessible?  Or is it sufficient
> >to enable web content that discloses information that can be used by an
> >accessibility tool?
> I am not sure what you mean by hostile. The problem with "disclosing
> information" is that content authors seldom disclose information. Just
> getting developers to produce alt-text in documents is difficult.

I probably should have said "unfriendly": basically, a web page that was
not trying to be accessible.

> >b) Is the DOM the right place to do this?  Accessibility tools will still
> >have to mine the HTML content to find content that implies non-script
> >actions (like submit buttons).  Would it be reasonable to have content
> >provide non-visible elements that provide accessibility clues?
> Yes, the DOM is the right place to do this. The WAI User Agents Group
> treats it as the central mechanism for gaining access to accessibility
> information. "Clues" implies that an assistive technology needs to make a
> guess; I think that's what you mean. You need to explain more about what
> you mean by non-visible elements.

If the content is going to describe actions that could be performed, there
would appear to be two ways: either code could respond to an event that
requests descriptions of available "commands", or the document could contain
elements that declare commands that an accessibility tool could invoke.  An
example of the second approach would be something like:

<svg width="100%" height="100%"...>
    <script type="text/ecmascript">...</script>
    <rect x="10" y="10" height="10" width="10" fill="red">
        <otherns:command onexecute="describeRedBox(evt)">
            <description>Describe Red Box</description>
        </otherns:command>
    </rect>
</svg>
The otherns:command element is what I, as someone well outside my domain,
would call a non-visible accessibility clue.  It doesn't show up in the
rendering of the drawing, but an accessibility tool could locate it and
either voice the description, fabricate a menu for a selection device, or
produce some other appropriate rendering.
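A rough sketch of the tool side, under my own assumptions: the namespace
URI and the plain-object tree standing in for a parsed DOM are both made
up for illustration.  The tool just walks the document depth-first and
harvests every command element with its description.

```javascript
// Hypothetical namespace for the command extension elements.
var COMMAND_NS = "http://example.org/otherns";

// Plain-object stand-in for the parsed SVG document from the example.
var doc = {
  name: "svg",
  children: [
    { name: "script", children: [] },
    { name: "rect", children: [
      { name: "command", ns: COMMAND_NS,
        onexecute: "describeRedBox(evt)",
        children: [
          { name: "description", text: "Describe Red Box", children: [] }
        ] }
    ] }
  ]
};

// Depth-first walk collecting every command element, pairing each
// declared action with its human-readable description.
function collectCommands(node, out) {
  if (node.ns === COMMAND_NS && node.name === "command") {
    var desc = "";
    node.children.forEach(function (c) {
      if (c.name === "description") desc = c.text;
    });
    out.push({ description: desc, onexecute: node.onexecute });
  }
  node.children.forEach(function (c) { collectCommands(c, out); });
  return out;
}

var commands = collectCommands(doc, []);
console.log(commands[0].description); // "Describe Red Box"
```

From a list like this, the tool could voice each description, build a
menu, or trigger the associated action on the user's behalf.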
Received on Tuesday, 18 December 2001 15:07:21 UTC
