- From: JOHNSTON, MICHAEL J (MICHAEL J) <johnston@research.att.com>
- Date: Thu, 2 Oct 2008 15:59:49 -0400
- To: <www-multimodal@w3.org>, "JOHNSTON, MICHAEL J (MICHAEL J)" <johnston@research.att.com>
- Message-ID: <0C50B346CAD5214EA8B5A7C0914CF2A401365B6C@njfpsrvexg3.research.att.com>
Many thanks for your support of EMMA. The specific comments you bring up have been discussed in detail by the EMMA subgroup, which has formulated the following responses. Could you please confirm on the public list, www-multimodal@w3.org, whether this resolution of the issues is acceptable?

3.1 Suggest use of emma:literal for raw recognition results as well as for literal semantic results from language understanding.

RESPONSE: We agree that this should be clarified, but note that the specification as it currently stands does in fact allow the use of emma:literal for raw recognition results (a minimal illustrative example is appended at the end of this message). We have clarified this in the new draft of the specification.

3.2 Request removal of test 1501, since the use of emma:uninterpreted to indicate below-threshold input is not described in the EMMA specification.

RESPONSE: We agree and have removed this test assertion.

3.3 Request removal of tests 902 and 903, since these constraints on the resource attribute on emma:derived-from are not described in the EMMA specification.

RESPONSE: We agree and have removed these test assertions.

3.4 Request removal of test assertion 801 for inline emma:model, since this test contradicts the EMMA schema, in which emma:model can only be a child of emma:emma.

RESPONSE: We agree and have removed this test assertion.

best,

Michael Johnston
on behalf of the EMMA subgroup

AT&T EMMA Implementation Report

Executive Summary:

AT&T recognizes the crucial role of standards in the creation and deployment of next-generation services supporting more natural and effective interaction through spoken and multimodal interfaces, and continues to be a firm supporter of W3C's activities in the area of spoken and multimodal standards. As a participating member of the W3C Multimodal Interaction working group, AT&T welcomes the Extensible Multimodal Annotation (EMMA) 1.0 Candidate Recommendation. EMMA 1.0 provides a detailed language for capturing the range of possible interpretations of multimodal inputs and their associated metadata through a full range of input processing stages, from recognition, through understanding and integration, to dialog management. A common standard for the representation of multimodal inputs is critical for enabling rapid prototyping of multimodal applications, facilitating interoperation of components from different vendors, and enabling effective logging and archiving of multimodal interactions. AT&T is very happy to contribute to the further progress of the emerging EMMA standard by submitting an EMMA 1.0 implementation report. EMMA 1.0 results are already available from an AT&T EMMA server, which is currently being used in the development of numerous multimodal prototypes and trial services.

Technical Details:

- Suggest use of emma:literal for raw recognition results as well as for literal semantic results from language understanding.
- Request removal of test 1501, since the use of emma:uninterpreted to indicate below-threshold input is not described in the EMMA specification.
- Request removal of tests 902 and 903, since these constraints on the resource attribute on emma:derived-from are not described in the EMMA specification.
- Request removal of test assertion 801 for inline emma:model, since this test contradicts the EMMA schema, in which emma:model can only be a child of emma:emma.
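As an illustration of points 3.1 and 3.4 above, here is a minimal sketch of an EMMA 1.0 result in which emma:literal carries a raw recognition string and emma:model is declared as a child of emma:emma. The element and attribute names follow the EMMA 1.0 specification; the id values, utterance text, model URI, and confidence score are hypothetical, and emma:model-ref is shown as one way of associating the interpretation with the declared model.

  <emma:emma version="1.0"
      xmlns:emma="http://www.w3.org/2003/04/emma">
    <!-- data model declared at the top level, as the EMMA schema requires (3.4);
         the id and ref values are hypothetical -->
    <emma:model id="model1" ref="http://example.com/flightmodel.xsd"/>
    <!-- raw recognition result carried as a literal string (3.1) -->
    <emma:interpretation id="raw1"
        emma:medium="acoustic" emma:mode="voice"
        emma:model-ref="model1"
        emma:confidence="0.8">
      <emma:literal>flights from boston to denver</emma:literal>
    </emma:interpretation>
  </emma:emma>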
Received on Thursday, 2 October 2008 20:00:28 UTC