- From: Jim Barnett <1jhbarnett@gmail.com>
- Date: Wed, 17 Jun 2015 11:41:54 -0400
- To: "www-voice@w3.org (www-voice@w3.org)" <www-voice@w3.org>
- Message-ID: <55819542.5020806@gmail.com>
Here is a call for chapters for an upcoming book that may be of interest to people on this list:

Multimodal Interaction with W3C Standards: Towards Natural User Interfaces to Everything
Editor: Deborah A. Dahl, Conversational Technologies
To be published by Springer

Call for Chapters

From tiny fitness trackers to huge industrial robots, we are interacting today with devices whose shapes, sizes, and capabilities would have been hard to imagine when the traditional graphical user interface (GUI) first became popular in the 1980s. It is becoming increasingly apparent that the decades-old GUI is a poor fit for today's computer-human interactions, as we move farther and farther away from the classic desktop paradigm, with input limited to mouse and keyboard and a large screen as the only output modality. While the growth of touch interfaces has been especially dramatic, we are now also starting to see applications that make use of many other forms of interaction, including voice, handwriting, emotion recognition, natural language understanding, and object recognition.

As these forms of interaction (modalities) are combined into systems, the importance of having standard means for them to communicate with each other and with application logic becomes apparent. The sheer variety and complexity of multimodal technologies make it impractical for implementers other than very large organizations to handle the full range of possible modalities (current and future) with proprietary APIs. To address this need, the World Wide Web Consortium (W3C) has developed a comprehensive set of standards for multimodal interaction that are well suited as the basis of interoperable multimodal applications.

However, most of the information about these standards is currently available only in the formal standards documents, conference presentations, and a few academic journal papers. All of these can be hard to find, and they are not very accessible to most technologists. In addition, papers on applications that use the standards are similarly scattered across many different resources.

This book will address that gap with clearly presented overviews of the full suite of W3C multimodal standards. To illustrate the standards in use, it will also include case studies of a number of applications that use them. Finally, a future directions section will discuss new ideas for other standards as well as new applications.

We invite submissions of potential chapters to be included in this book.

Topics of Interest

A. Overviews of the following standards:

1. Multimodal Architecture and Interfaces -- building applications from multiple modalities
2. Discovery and Registration -- finding and integrating components into dynamic systems
3. EMMA: Extensible Multimodal Annotation -- representing user inputs from speech recognition, natural language understanding, handwriting recognition, gesture, and camera (a short illustrative sketch follows this list)
4. InkML: Ink Markup Language -- representing drawings and handwriting with "electronic ink"
5. EmotionML: Emotion Markup Language -- representing human emotions
6. Creating an MMI Architecture-compliant modality component: modality component design best practices

(See http://www.w3.org/2002/mmi/ for the standards documents for items 1-6.)

7. Voice standards for handling speech: VoiceXML, SSML, SRGS, SISR, PLS
8. SCXML: State Chart XML -- declarative handling of events with a state machine

(See http://www.w3.org/Voice/ for the standards documents for items 7-8.)

9. WebRTC: Web Real-Time Communications -- handling media on the web

(See http://www.w3.org/2011/04/webrtc/ for the standards documents for item 9.)
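To give a concrete flavor of item 3, here is a minimal sketch (not part of the official call) of an EMMA 1.0 result inspected with Python's standard library. The element and annotation names (emma:one-of, emma:interpretation, emma:confidence, emma:tokens) come from the EMMA 1.0 specification; the utterance, confidence values, and application fields (origin, destination) are invented for illustration.

# A hypothetical speech-recognition result: two competing interpretations
# of one user utterance, annotated with confidence and input mode.
import xml.etree.ElementTree as ET

EMMA_NS = "http://www.w3.org/2003/04/emma"  # EMMA 1.0 namespace

doc = """\
<emma:emma version="1.0" xmlns:emma="http://www.w3.org/2003/04/emma">
  <emma:one-of id="r1" emma:medium="acoustic" emma:mode="voice">
    <emma:interpretation id="int1" emma:confidence="0.75"
                         emma:tokens="flights from boston to denver">
      <origin>Boston</origin>
      <destination>Denver</destination>
    </emma:interpretation>
    <emma:interpretation id="int2" emma:confidence="0.68"
                         emma:tokens="flights from austin to denver">
      <origin>Austin</origin>
      <destination>Denver</destination>
    </emma:interpretation>
  </emma:one-of>
</emma:emma>
"""

root = ET.fromstring(doc)
# Pick the interpretation carrying the highest emma:confidence annotation.
best = max(
    root.iter(f"{{{EMMA_NS}}}interpretation"),
    key=lambda e: float(e.get(f"{{{EMMA_NS}}}confidence", "0")),
)
print(best.get("id"), best.findtext("origin"), best.findtext("destination"))
# -> int1 Boston Denver

The point of the standard is exactly this kind of interoperability: any EMMA-conformant recognizer (speech, handwriting, gesture) can deliver results that downstream components consume with generic code like the above.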
B. Applications using the W3C standards for multimodal interaction (types of applications include, but are not limited to, the following):

1. Multimodal applications that make use of the standards listed above
2. Implementations of the standards, including but not limited to open-source implementations
3. Evaluations of systems using the standards, including interoperability testing

C. Future directions:

1. The evolution of multimodal standards
2. Areas where new standards are needed
3. Integration with related standards

Submission Procedure

Researchers and practitioners are invited to submit a 1-3 page chapter abstract clearly explaining the topic of the proposed chapter. The abstract serves as a chapter registration for the final submission; chapter registrations are intended to help detect and avoid duplicate or similar chapters in advance. Abstracts must be submitted through the EasyChair system at https://easychair.org/conferences/?conf=mmistandards2015

Authors of accepted abstracts will be notified about the status of their abstracts and will be sent chapter guidelines. Full chapters must then be submitted through EasyChair by the dates in the schedule below. Chapters should not exceed 25 pages in Springer format (guidelines will be supplied). All submitted chapters will be reviewed on a single-blind basis. Contributors may also be asked to serve as reviewers for this project. For additional information regarding the publisher, please visit http://www.springer.com

Schedule

Abstract Submission: September 18, 2015
Abstract Feedback: October 2, 2015
Full Chapters Due: January 15, 2016
Chapter Acceptance Notification and Feedback: February 26, 2016
Revised Version Due: March 25, 2016
Final Notification: May 6, 2016
Estimated Publication Date: October 2016

More information: http://www.mmi-standards.com
Received on Wednesday, 17 June 2015 15:42:27 UTC