Adding a channel property for addressing the live region politeness problem and the multisynthesizer problem

From: Charles L. Chen <clc@clcworld.net>
Date: Wed, 16 May 2007 12:11:26 -0500
Message-ID: <464B3B3E.3060901@clcworld.net>
To: wai-xtech@w3.org

The following was posted on the mozilla.dev.accessibility newsgroup:

One solution that will address both the live region priority problem and 
the multisynthesizer problem is to simply add a new property called 
channel. channel="general" (default) or channel="notify". Messages can 
be put into either channel. Within the channel, everything behaves as 
before. That is, polites wait until the user idles, assertives will 
cancel out polites and be spoken as soon as possible, and rudes will 
interrupt and cancel out anything so that they are spoken immediately. 
However, messages from one channel cannot cancel or interrupt messages 
from another channel. How the messages are actually presented to the 
user depends on the AT that the user has running.
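To make the within-channel versus cross-channel behavior concrete, here is a minimal sketch of the queueing rules described above. This is a hypothetical model with made-up names (the channel property is only a proposal here, not shipped ARIA, and no real AT exposes this API):

```javascript
// Hypothetical model of the proposed queueing rules -- not a real AT API.
// Priority order follows the ARIA live region politeness levels.
const PRIORITY = { polite: 0, assertive: 1, rude: 2 };

// Add a message to its channel's queue, applying the within-channel rules:
// an assertive cancels queued polites; a rude cancels everything queued.
// Messages in OTHER channels are never touched.
function enqueue(queues, channel, level, text) {
  const queue = queues[channel] || (queues[channel] = []);
  if (level === "assertive") {
    // Assertives cancel out polites within the same channel only.
    queues[channel] = queue.filter(m => PRIORITY[m.level] > PRIORITY.polite);
  } else if (level === "rude") {
    // Rudes cancel out everything queued within the same channel only.
    queues[channel] = [];
  }
  queues[channel].push({ level, text });
  return queues;
}
```

For example, an assertive message arriving on "general" would clear the queued polites in "general" while leaving a queued polite in "notify" untouched.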

If the user is on a screen reader with only one synthesizer and there 
are messages in both channels, then:
1. The channel with the highest priority message gets picked for 
processing. So if the "general" channel has an assertive and the 
"notify" channel only has polites, the "general" channel will be processed.

2. If both channels have the same priority for their highest priority 
message, then the "notify" channel will go first. So if both channels 
only have polites, then the "notify" channel will be processed first.
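The two selection rules above can be sketched as a small function (again a hypothetical model; the names and message shape are invented for illustration):

```javascript
// Sketch of the single-synthesizer selection rules (hypothetical model):
// 1. the channel whose highest-priority message is higher gets processed;
// 2. on a tie, "notify" goes before "general".
const PRIORITY = { polite: 0, assertive: 1, rude: 2 };

function pickChannel(general, notify) {
  // Highest priority among a channel's queued messages.
  const top = msgs => Math.max(...msgs.map(m => PRIORITY[m.level]));
  if (general.length === 0) return notify.length ? "notify" : null;
  if (notify.length === 0) return "general";
  return top(general) > top(notify) ? "general" : "notify";
}
```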

If the user is on a screen reader with two or more synthesizers, then:
each channel gets its own synthesizer and both channels are processed at 
the same time.

If the user is on a screen reader that also has braille output and has 
chosen to use both outputs, then:
one channel will be sent on the audio and the other channel will be sent 
on the braille. Which goes where is a user setting (but my guess is that 
users will want "notify" on braille and "general" on audio).
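The speech/braille routing could be sketched as a simple user setting (hypothetical names; the default here just encodes the guess above, "notify" on braille and "general" on audio):

```javascript
// Hypothetical sketch: map the two channels onto the two outputs,
// driven by a single user setting for where "notify" goes.
function routeChannels(settings = { notifyOutput: "braille" }) {
  const notifyOutput = settings.notifyOutput;
  return {
    notify: notifyOutput,
    // The other channel takes whichever output is left over.
    general: notifyOutput === "braille" ? "audio" : "braille",
  };
}
```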

This solves the problem for chat, since messages directed at the user can 
be set to channel="notify" and be spoken before the general chat messages 
without clearing any of them out. This solution also lays the foundation 
for multisynthesizer output and encourages developers to give explicit 
defaults for which messages belong on the same channel. Finally, since 
users have the final say via their AT over how the ARIA live region 
properties behave, users on both speech and braille output have the 
flexibility of putting messages from different regions into different 
channels. I can imagine a user wanting to hear the news headlines on the 
audio channel while the braille display shows updates to the stock prices.

This solution has already been implemented in a prototype version of 
Fire Vox and in Reef Chat. Those who want to try it out can get the 
prototype version of Fire Vox by installing the following 3 files:

and then going to Reef Chat at

Sina Bahram, Gijs Kruitbosch, and Peter Thiessen have already considered 
this idea and like it. Also, Reef Chat running with the Fire Vox 
prototype was shown at W4A and at Access U last week, and this behavior 
was well received.

Any thoughts?


P.S. Also, I should add that since the default is channel="general", web 
developers whose applications would not benefit from multiple 
synthesizers or complex priority systems can just pretend that this 
property never existed and do everything as before. So it doesn't 
complicate things for the simple cases.
Received on Wednesday, 16 May 2007 20:38:14 UTC
