ACTION-1654: Take preliminary look at discovery & registration of multimodal modality components: state handling

Action track: https://www.w3.org/WAI/PF/Group/track/actions/1654 

Reference: http://www.w3.org/TR/mmi-mc-discovery/

 

The reviewed document is a First Public Working Draft dated 11 June 2015.
It builds upon the Multimodal Architecture Specification [1], and defines a
framework for adding and removing Modality Components in a dynamic
Multimodal System.

 

Here are my findings after review of the doc:

 

(a)    The document is highly interwoven with [1], and contains detailed
extensions of [1].  It appears that this document was written with the
purpose of fixing errors and omissions in [1].  It would be better to revise
[1] and include the additional information there, so as to have one coherent
document rather than two interwoven documents.

(b)    The document is not an easy read.  It would be good to rewrite the
introduction to explain the purpose of this specification in simple terms,
and to include an example.

(c)    Section 2, Domain Vocabulary, contains definitions that would more
properly belong in [1] (e.g. multimodal system, modality component,
interaction manager).

(d)    All figures in the document are available as high-resolution images
upon mouse click, but this functionality is not reachable by keyboard.  E.g.
<img ... onclick="window.open('http://upload.wikimedia.org/wikipedia/commons/3/38/Push_high.png','Push');">.
This should be made an <a> element to allow for keyboard interaction.  There
are Web users who do not use a mouse, but would still like to see the images
in high resolution.  Also, all figures lack a long description.

(e)    The most interesting part for PF is the context object.  However, the
document contains no specification for context.  In [1], context data is
simply left undefined: "The format and meaning of this data is
application-specific."  If [1] were to be revised, it would be good to
provide examples of context data that can be used to define a user's
preferences (e.g. a pointer to a GPII personal preference set), a device's
characteristics, and situational parameters.  It is also not clear whether
the context data could include information on dynamic aspects of the
interaction, e.g. a suddenly increased noise level around the user.
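For illustration only, such context data might be carried in the data field of an MMI lifecycle event as defined in [1].  The payload element names (userPreferences, device, situation) and all attribute values below are hypothetical, since neither specification defines a context format:

```xml
<mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
  <mmi:extensionNotification source="someIM" target="someMC"
                             context="ctx-1" requestID="req-42">
    <mmi:data>
      <!-- hypothetical application-specific context payload -->
      <userPreferences href="https://example.org/gpii/preference-set/user123"/>
      <device screen="5in" camera="true"/>
      <situation ambientNoiseDb="78"/>
    </mmi:data>
  </mmi:extensionNotification>
</mmi:mmi>
```

A payload along these lines would cover user preferences, device characteristics, and situational parameters; the dynamic case (e.g. the noise level changing mid-interaction) would additionally require the Modality Component or Interaction Manager to send updated notifications.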

 

References:

[1] Multimodal Architecture and Interfaces.  W3C Recommendation 25 October
2012. http://www.w3.org/TR/2012/REC-mmi-arch-20121025/ 

 

___________________________________________________

 

 Prof. Dr. Gottfried Zimmermann

Mobile User Interaction

___________________________________________________

 

Received on Wednesday, 16 September 2015 16:18:16 UTC