MMI ARIA comments

Dear PFWG,
The Multimodal Interaction Working Group has a few comments on the ARIA [1]
spec.

1. The roles being proposed by ARIA might be picked up
by assistive technologies such as a speech-based command and control
system. For example, an assistive device for a blind person that sees an
ARIA role like "menu" could speak something like "you have three choices:
chocolate, strawberry, or vanilla, and you can only select one". It might
be helpful to actually describe that kind of use case in the document.
(Thanks to Jerry Carter for pointing out this use case.)
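To make the use case concrete, here is one possible markup sketch of the
structure such a device might encounter. The role names are taken from the
ARIA draft; the list items and labels are purely illustrative:

```html
<!-- Illustrative only: a single-select flavor menu using ARIA roles
     from the draft; the labels are made up for this example. -->
<ul role="menu" aria-label="Flavor">
  <li role="menuitemradio" aria-checked="true">chocolate</li>
  <li role="menuitemradio" aria-checked="false">strawberry</li>
  <li role="menuitemradio" aria-checked="false">vanilla</li>
</ul>
```

A speech-based assistive technology encountering this structure could, in
principle, announce the three choices and the single-selection constraint.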

2. All of the roles seem to be oriented toward GUI applications, but audio
output can also potentially be part of a web application. Are there
similar abstractions over the semantic roles of audio elements? The only
reference to a possible audio interface item is an "alert", but is there
something, for example, analogous to a VoiceXML [2] prompt (that is, an
invitation to provide input)?  
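For reference, the kind of prompt we have in mind, sketched as a minimal
VoiceXML fragment (the field name and wording are illustrative only):

```xml
<!-- Illustrative VoiceXML fragment: a prompt as an invitation
     to provide input, attached to an input field. -->
<vxml version="2.1" xmlns="http://www.w3.org/2001/vxml">
  <form>
    <field name="flavor">
      <prompt>Which flavor would you like: chocolate, strawberry,
        or vanilla?</prompt>
    </field>
  </form>
</vxml>
```

The question is whether ARIA anticipates an analogous abstraction for
audio output in web applications.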

3. Finally, it seems that many of the ARIA abstractions which were 
designed to accommodate assistive devices could also potentially 
be independently helpful in multimodal applications by providing 
modality-independent descriptions of the semantics of interface components. 
Among other benefits, this would enable application designers to
concentrate on the semantics of an interaction rather than the details
of how the application is presented to the user. It would be useful to
explore some of the ARIA ideas in the context of multimodal interaction
going forward.

Regards,

Debbie Dahl
MMI Working Group Chair
for the Multimodal Interaction Working Group

[1] WAI-ARIA draft http://www.w3.org/TR/wai-aria/
[2] VoiceXML http://www.w3.org/TR/voicexml21/

Received on Friday, 17 April 2009 20:59:51 UTC