Re: [SMIL30 LC comment] Feedback on the SMIL 3.0 LCWD specification from the Multimodal Interaction WG (LC-1836, LC-1834)

Dear Kazuyuki Ashimura,

The SYMM Working Group has reviewed the comments you sent [1] on the Last
Call Working Draft [2] of the Synchronized Multimedia Integration Language
(SMIL 3.0) published on 13 Jul 2007. Thank you for having taken the time to
review the document and to send us comments!

The Working Group's response to your comment is included below.

Please review it carefully and let us know by email at www-smil@w3.org if
you agree with it or not before 20 Nov 2007. In case of disagreement, you
are requested to provide a specific solution for or a path to a consensus
with the Working Group. If such a consensus cannot be achieved, you will
be given the opportunity to raise a formal objection which will then be
reviewed by the Director during the transition of this document to the
next stage in the W3C Recommendation Track.

Thanks,

For the SYMM Working Group,
Thierry Michel
W3C Staff Contact

 1. http://www.w3.org/mid/46F01650.8030506@w3.org
 2. http://www.w3.org/TR/2007/WD-SMIL3-20070713/


=====

Your comment on Synchronized Multimedia Integration Language (SMIL
3.0)...:
> c) Answered in another place:
> 
> 2.5 Is an IRI acceptable for a URI?
> 
> => Please see resolution at LC-1825, and CC response to Kazuyuki
> Ashimura <ashimura@w3.org>.


Working Group Resolution (LC-1836):
This issue of using IRIs instead of (or along with) URIs is discussed in
our response to LC-1825. (The short answer is: yes). Please review the
LC-1825 response for details.

----

Your comment on Synchronized Multimedia Integration Language (SMIL
3.0)...:
> a) SMIL integration:
> 
> 1.1 We think the biggest question about SMIL 3.0 LCWD [1] for MMIWG is
> "how SMIL works with MMI Architecture [2] and SCXML [3]". So we
> concentrated on the interface and functionality related to the
> architecture.
> 
> 1.2 Maybe we (=SYMM-WG and MMI-WG) should hold some discussion about
> the requirements on the interfaces. One possibility might be the
> coming Tech Plenary in November.
> 
> 2.2 How could speech/pen input work with SMIL in interactive
> applications?
> 
> 2.3 How does SMIL cooperate with other flow control languages, e.g.
> SCXML [3]?
> 
> e.g.
> Can SCXML invoke a "SMIL modality" to present information to the
> user? And/or can SMIL invoke SCXML to provide user interaction
> (as opposed to only rendering to the user)?
> 
> Please see also the comment 5.4 below.
> 
> 2.4 Can we use SMIL more than once in a specific application?
> 
> e.g.
> Can we use SMIL for (1) a talking head module which synchronizes
> speech synthesis and lip movement synthesis, and (2) the main
> server which controls the synchronization with an HTML-based GUI, etc.?
> 
> 2.6 How does SMIL handle information which needs security, e.g. credit
> card numbers?
> 
> Regarding "15.7 The SMIL StateSubmission Module"
> -------------------------------------------------
> 
> 5.4 What kind of data and event exchanges should be considered in
> practical SMIL-ready multimodal applications?
> 
> e.g.
> How do SMIL and SCXML [3] relate to each other? Can SCXML invoke a
> "SMIL modality" to present information to the user? Can SMIL
> invoke SCXML to provide user interaction (as opposed to only
> rendering to the user)?


Working Group Resolution (LC-1834):
The SYMM Working Group is interested in exploring the possibilities of
integration with the Multimodal Interaction Working Group. It seems clear
that SMIL can play a major role for the output modality of applications.
We therefore propose to include in our agendas a joint meeting of the two
working groups during the W3C Technical Plenary to be held in Boston in
November 2007.

More detailed comments on the actual issues:
---------------------------------------------------------
Q: 1.1 We think the biggest question about SMIL 3.0 LCWD [1] for MMIWG is
"how SMIL works with MMI Architecture [2] and SCXML [3]". So we
concentrated on the interface and functionality related to the
architecture.

A: In our view, the main problem is that there might be a conflict of
engines: SMIL has its own engine, and the MMI architecture has its own,
based on a state machine. We foresee two possible solutions:
 1. SMIL 3.0 offers a solution for external timing (Timesheets, chapter
13). One approach would be to use a host language such as XHTML, together
with CSS and Timesheets. The MMI architecture should then be able to
integrate with a standard web solution such as XHTML.
 2. The second option is for the SMIL engine to take care of the actual
output modality of an application, so that the state machine can use a
different SMIL file for each chosen modality. Another alternative would be
to make use of SMIL State (chapter 15) together with a SMIL presentation
that includes switches, as sketched below.
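
As an illustration of that second alternative, here is a minimal sketch in
which a state variable, set by the external state machine, selects the
output modality through a switch. Element and attribute details follow the
SMIL 3.0 State drafts; the variable and file names are made up for the
example.

  <smil xmlns="http://www.w3.org/ns/SMIL" version="3.0"
        baseProfile="Language">
    <head>
      <!-- SMIL State (chapter 15): an inline data model holding the
           modality chosen by the external state machine -->
      <state>
        <data xmlns="">
          <modality>audio</modality>
        </data>
      </state>
    </head>
    <body>
      <!-- content control: the expr test attribute selects the child
           matching the current value of "modality" -->
      <switch>
        <audio src="prompt.wav" expr="modality='audio'"/>
        <text src="prompt.txt" expr="modality='text'"/>
      </switch>
    </body>
  </smil>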
--------------------------------------------------------------------

-------------------------------------------------------------------
Q: 2.2 How could speech/pen input work with SMIL in interactive
applications?

A: The current set of events can be extended. Please note that the event
that activates an element does not have to be a click.
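
For instance, a speech or pen component could raise an event that starts a
SMIL element, using the same event-value syntax that a click uses today.
A minimal sketch, in which "penDown" is a hypothetical event name raised
on the element "map":

  <par>
    <!-- the drawing surface on which a pen/speech component reports
         input -->
    <img id="map" src="map.png" dur="indefinite"/>
    <!-- begins when the (hypothetical) penDown event is raised on
         "map" -->
    <audio src="help.wav" begin="map.penDown"/>
  </par>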
-----------------------------------------------------------------

-------------------------------------------------------------------
Q: 2.3 How does SMIL cooperate with other flow control languages, e.g.
SCXML [3]?

e.g.
Can SCXML invoke a "SMIL modality" to present information to the
user? And/or can SMIL invoke SCXML to provide user interaction
(as opposed to only rendering to the user)?

A: Yes, that is one of the solutions presented above (solution 2), in
which the SCXML engine calls the SMIL engine. But in that case, we should
take care that the two engines do not conflict with each other.
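
A hedged sketch of that direction, using the SCXML invoke element (syntax
as in the later SCXML drafts): SCXML does not define an invoke type for
SMIL, so the type URI and file names below are hypothetical, pending
agreement between the two groups.

  <scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0"
         initial="present">
    <state id="present">
      <!-- hand the output modality to a SMIL engine; the type URI is
           hypothetical -->
      <invoke type="http://example.org/smil" src="welcome.smil"/>
      <!-- return to the dialog when the SMIL presentation finishes -->
      <transition event="done.invoke" target="listen"/>
    </state>
    <state id="listen"/>
  </scxml>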
------------------------------------------------------------------

-------------------------------------------------------------------
Q: 2.4 Can we use SMIL more than once in a specific application?

e.g.
Can we use SMIL for (1) a talking head module which synchronizes
speech synthesis and lip movement synthesis, and (2) the main
server which controls the synchronization with an HTML-based GUI, etc.?

A: Yes, that is possible. The Daisy profile (chapter 20) defines a few
(name, value) pairs for the "param" element. For example, the Daisy
profile defines the "daisy:use-renderer" / "avatar" param specification,
which has just been reviewed by the Daisy specification committee.

In the example of your comment, when the "src" attribute of the "text"
element points to an SSML (Speech Synthesis Markup Language) document, a
compatible user agent would use the appropriate renderer, such as a TTS
audio module and a talking head avatar with lip synchronization. The
"param" mechanism can be used to pass configuration parameters to the
renderer module (e.g. voice, face, speed, etc.).
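A sketch of that example follows; the file name is illustrative, and the
param names other than "daisy:use-renderer" are hypothetical renderer
options.

  <!-- an SSML source rendered by a talking-head avatar via the Daisy
       profile's param mechanism -->
  <text src="greeting.ssml">
    <param name="daisy:use-renderer" value="avatar"/>
    <!-- hypothetical configuration parameters for the renderer -->
    <param name="voice" value="female"/>
    <param name="speed" value="1.0"/>
  </text>
------------------------------------------------------------------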

------------------------------------------------------------------
Q: 2.6 How does SMIL handle information which needs security, e.g. credit
card numbers?

A: SMIL does not provide a solution for that kind of use case. In order to
solve that problem, we might need a more complete solution such as XForms
(in combination with XHTML and Timesheets).
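
A rough sketch of that combination, assuming an XHTML host document; the
XForms markup below is illustrative and the submission URL is made up.

  <html xmlns="http://www.w3.org/1999/xhtml"
        xmlns:xf="http://www.w3.org/2002/xforms">
    <head>
      <xf:model>
        <xf:instance>
          <payment xmlns=""><card/></payment>
        </xf:instance>
        <!-- sensitive data travels in the XForms submission, outside
             the SMIL timing layer -->
        <xf:submission id="pay" action="https://example.org/pay"
                       method="post"/>
      </xf:model>
    </head>
    <body>
      <!-- "secret" renders the typed value obscured, as for
           passwords -->
      <xf:secret ref="card"><xf:label>Card number</xf:label></xf:secret>
      <xf:submit submission="pay"><xf:label>Pay</xf:label></xf:submit>
    </body>
  </html>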
-------------------------------------------------------------------

-------------------------------------------------------------------
Q: Regarding "15.7 The SMIL StateSubmission Module"

5.4 What kind of data and event exchanges should be considered in
practical SMIL-ready multimodal applications?

e.g.
How do SMIL and SCXML [3] relate to each other? Can SCXML invoke a
"SMIL modality" to present information to the user? Can SMIL
invoke SCXML to provide user interaction (as opposed to only
rendering to the user)?

A: See the answer to question 2.3.
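
For the data-exchange side of the question, the StateSubmission module
named in the comment gives a starting point. A fragment sketch, with an
illustrative target URL; element and attribute details follow the SMIL 3.0
State drafts.

  <head>
    <state>
      <data xmlns=""><answer/></data>
    </state>
    <!-- declares where and how the state data is sent -->
    <submission id="report" action="http://example.org/collect"
                method="put"/>
  </head>
  <body>
    <seq>
      <!-- record the user's answer in the data model ... -->
      <setvalue ref="answer" value="'yes'"/>
      <!-- ... and ship it using the declared submission -->
      <send submission="report"/>
    </seq>
  </body>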
------------------------------------------------------------------

Note: these comments do not require any modification to SMIL 3.0; the
resolution is to hold a joint working-group meeting in order to further
discuss the different alternatives for integration.


----

Received on Monday, 12 November 2007 15:32:05 UTC