
[SMIL30 LC comment] Feedback on the SMIL 3.0 LCWD specification from the Multimodal Interaction WG

From: Kazuyuki Ashimura <ashimura@w3.org>
Date: Wed, 19 Sep 2007 03:17:52 +0900
Message-ID: <46F01650.8030506@w3.org>
To: www-smil@w3.org
CC: W3C Multimodal group <w3c-mmi-wg@w3.org>

Dear SYMM group,

Sorry for the big delay.

The Multimodal Interaction Working Group has prepared the following
feedback on the Synchronized Multimedia Integration Language (SMIL
3.0) specification Last Call Working Draft.

Please feel free to contact us for clarification on any of these
comments.


Kazuyuki Ashimura
Multimodal Interaction Activity Lead
for the W3C Multimodal Interaction Working Group

1. Meta comments

1.1 We think the biggest question about the SMIL 3.0 LCWD [1] for the
   MMIWG is "how SMIL works with the MMI Architecture [2] and SCXML
   [3]".  So we concentrated on the interface and functionality
   related to those specifications.
1.2 Maybe we (=SYMM-WG and MMI-WG) should hold some discussion about
   the requirements on the interfaces.  One possibility might be at
   the coming Technical Plenary in November.

2. General comments

2.1 We're afraid the SMIL 3.0 specification is very large, and would
   like to suggest that the SYMM group provide a primer document, like
   the "XHTML+SMIL Profile" [4], which mainly describes how to
   integrate SMIL modules with other specs, e.g. HTML and SVG, so that
   authors can generate SMIL-ready Web applications easily.

   The document should include:
   - 1.4.2 Profiles affected by SMIL 3.0
   - 22.5 SMIL 3.0 Document Scalability Guidelines
   - etc.

2.2 How could speech/pen input work with SMIL in interactive
   applications?

2.3 How does SMIL cooperate with other flow-control languages, e.g.
   SCXML [3]?

   Can SCXML invoke a "SMIL modality" to present information to the
   user?  And/or can SMIL invoke SCXML to provide user interaction
   (as opposed to only rendering to the user)?

   Please see also the comment 5.4 below.
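
   As a sketch of one direction of this cooperation, an SCXML state
   machine might invoke a SMIL presentation via SCXML's <invoke>
   element.  Note this is only our guess: SCXML defines no standard
   invoke type for SMIL, so the type value and file name below are
   hypothetical.

   <state id="present">
     <!-- hypothetical type value; not defined by either spec -->
     <invoke type="application/smil+xml" src="welcome.smil"/>
     <!-- return to the dialog flow when the presentation completes -->
     <transition event="done.invoke" target="listen"/>
   </state>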

2.4 Can we use SMIL more than once in a specific application?

   For example, can we use SMIL for (1) a talking-head module which
   synchronizes speech synthesis and lip-movement synthesis, and
   (2) the main server which controls the synchronization with an
   HTML-based GUI, etc.?
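
   For case (1), such talking-head synchronization is exactly what
   SMIL's <par> container expresses; a minimal sketch (the file names
   are hypothetical):

   <par>
     <!-- play synthesized speech and the lip animation in parallel -->
     <audio src="speech.wav"/>
     <video src="lip-movement.mpg"/>
   </par>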

2.5 Is an IRI acceptable for a URI?

2.6 How does SMIL handle information which needs security, e.g. credit
   card numbers?

3. Comment on "11.4 Language definition"

Regarding "11.4.3 Attributes"

3.1 The following text overlaps the text around it, so the style
   should be modified:
Since the event will never be raised on the specified element, the
event-base value will never be resolved.

4. Comments on "11.6 Document object model support"

4.1 Something seems to be wrong in the following sections:
11.6.6 Java language binding
11.6.7 org/w3c/dom/smil/ElementTimeControl.java:
11.6.8 org/w3c/dom/smil/TimeEvent.java:

- The content of 11.6.6 is empty.  Maybe the contents of 11.6.7 and
 11.6.8 were meant to be placed there?

- Shouldn't the titles of 11.6.7 and 11.6.8 rather be "org.w3c.dom.smil/..."?

5. Comments on "15. SMIL 3.0 State"

Regarding "15.2 Introduction"

5.1 "etc" in "higher priority node has preempted them, etc" should be
   "etc." (missing "." at the end of the sentence).

Regarding "15.6.4 Examples"

5.2 Is this example really the one for "15.6 The SMIL UserState Module"?
   It seems a bit strange, because it says "Here is a SMIL 3.0 Language
   Profile example" but doesn't mention the <state> element at all.
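
   For comparison, a minimal UserState sketch might look like the
   following; this is our own illustration, assuming the default XPath
   data model (the inline data and file name are hypothetical):

   <head>
     <state>
       <data>
         <score>0</score>
       </data>
     </state>
   </head>
   <body>
     <!-- show the bonus clip only when the score is high enough -->
     <video src="bonus.mpg" expr="data/score &gt;= 3"/>
   </body>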

Regarding "15.6.6 Data Model Events"

5.3 The "contentControlChange(attrname)" event and the
   "contentControlChange" event should also be raised by each state,
   considering the possibility of substates or Russian-doll usage of
   SMIL.

Regarding "15.7 The SMIL StateSubmission Module"

5.4 What kinds of data and event exchanges should be considered in
   practical SMIL-ready multimodal applications?

   How do SMIL and SCXML [3] relate to each other?  Can SCXML invoke a
   "SMIL modality" to present information to the user?  Can SMIL
   invoke SCXML to provide user interaction (as opposed to only
   rendering to the user)?

Regarding "15.7.4 Examples"

5.5 Examples for this StateSubmission Module must be added ASAP,
   because we can't imagine how to use the <submission> element or the
   <send> element without them.
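
   For what it's worth, the kind of sketch we are hoping for might
   look like the following; the attribute names and values here are
   our own guesses, modeled on XForms submission, not taken from the
   draft:

   <head>
     <state>
       <data>
         <answer/>
       </data>
     </state>
     <!-- hypothetical attributes, modeled on XForms -->
     <submission id="report" action="http://example.org/collect"
                 method="put" replace="none"/>
   </head>
   <body>
     <seq>
       <!-- ... fill in data/answer ... -->
       <send submission="report"/>
     </seq>
   </body>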

6. Comment on "17. SMIL 3.0 Language Profile"

Regarding "17.4.13 Timing and Synchronization Modules"

6.1 The "Supported Event Symbols" should be links to their detailed
   definitions.

7. Comments on "20. SMIL 3.0 DAISY Profile"

Regarding "20.4.7 Media Object Modules"

7.1 There is a simple description of the <param> element for TTS
   (=speech output) here; however, there is no description of ASR
   (=speech input).  What should we do to integrate speech
   recognition?

 e.g. (of TTS)
 <param name="daisy:use-renderer" value="tts"/>
 <param name="daisy:renderer-parameters" value="voice=joe"/>

Regarding "20.4.13 Playback Guidelines"

7.2 The link to "UAAG checkpoint 2.4" should be
   not "http://www.w3.org/tr/UAAG10/guidelines.html#tech-time-independent"
   but "http://www.w3.org/TR/UAAG10/guidelines.html#tech-time-independent"
   ("TR" should be in capital letters).

8. Comment on "21. SMIL 3.0 Tiny Profile"

Regarding "21.3 SMIL 3.0 Tiny Profile"

8.1 M3U, PLP, PLS and WPL should be marked up with the <acronym>
   element.
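
   For example, in XHTML-style markup (shown here with a different
   abbreviation, just to illustrate the requested treatment):

   <acronym title="Synchronized Multimedia Integration Language">SMIL</acronym>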

9. Comment on "Appendix A. SMIL 3.0 DTDs"

9.1 All the headings have duplicated section number e.g. "A.1 A.1".

10. Comments on "Appendix E. SMIL 3.0 Reference"

10.1 The title of this page is "SMIL 2.1 References" which should be
    "SMIL 3.0 References".

10.2 There is an extra ">" (&gt;) right before the <h1> heading.

[1] http://www.w3.org/TR/SMIL3/smil30.html
[2] http://www.w3.org/TR/2006/WD-mmi-arch-20061211/
[3] http://www.w3.org/TR/2007/WD-scxml-20070221/
[4] http://www.w3.org/TR/2002/NOTE-XHTMLplusSMIL-20020131/

Kazuyuki Ashimura / W3C Multimodal & Voice Activity Lead
mailto: ashimura@w3.org
voice: +81.466.49.1170 / fax: +81.466.49.1171
Received on Tuesday, 18 September 2007 18:17:31 UTC
