Re: Digital Textbooks

From: Adam <adamsobieski@hotmail.com>
Date: Thu, 2 Jan 2014 08:56:33 +0000
Message-ID: <BAY402-EAS2453D01B5764CDE267115E4C5CA0@phx.gbl>
To: "www-math@w3.org" <www-math@w3.org>, "www-multimodal@w3.org" <www-multimodal@w3.org>, "www-voice@w3.org" <www-voice@w3.org>

Math Working Group,

Multimodal Interaction Working Group,

Voice Browser Working Group,

“Publications pertinent to multimodal input and mathematics include: Semi-synchronous Speech and Pen Input by Yasushi Watanabe, Kenji Iwata, Ryuta Nakagawa, Koichi Shinoda and Sadaoki Furui, Hamex - A Handwritten and Audio Dataset of Mathematical Expressions by Solen Quiniou, Harold Mouchère, Sebastián Peña Saldarriaga, Christian Viard-Gaudin, Emmanuel Morin, Simon Petitrenaud and Sofiane Medjkoune, Multimodal Mathematical Expressions Recognition: Case of Speech and Handwriting by Sofiane Medjkoune, Harold Mouchere, Simon Petitrenaud and Christian Viard-Gaudin, Multimodal Interfaces That Process What Comes Naturally by Sharon Oviatt and Philip Cohen and Developing Handwriting-based Intelligent Tutors to Enhance Mathematics Learning by Lisa Anthony.


“Publications pertinent to mathematical sketches and diagrams include: Mathematical Sketching: An Approach to Making Dynamic Illustrations by Joseph J. LaViola Jr, A Sketch-based System for Teaching Geometry by Gennaro Costagliola, Salvatore Cuomo, Vittorio Fuccella, Aniello Murano and Via Ponte Don Melillo, Intelligent Understanding of Handwritten Geometry Theorem Proving by Yingying Jiang, Feng Tian, Hongan Wang, Xiaolong Zhang, Xugang Wang and Guozhong Dai, Hierarchical Parsing and Recognition of Hand-sketched Diagrams by Levent Burak Kara and Thomas F. Stahovich, Combining Geometry and Domain Knowledge to Interpret Hand-drawn Diagrams by Leslie Gennari, Levent Burak Kara, Thomas F. Stahovich and Kenji Shimada and Multi-domain Sketch Understanding by Christine Alvarado.

“Publications pertinent to multimodal input, note-taking and context include: Speech Pen: Predictive Handwriting based on Ambient Multimodal Recognition by Kazutaka Kurihara, Masataka Goto, Jun Ogata and Takeo Igarashi, Development of Note-taking Support System with Speech Interface by Kohei Ota, Hiromitsu Nishizaki and Yoshihiro Sekiguchi, Unsupervised Vocabulary Selection for Real-time Speech Recognition of Lectures by Paul Maergner, Alex Waibel and Ian Lane, Dynamic Language Model Adaptation Using Presentation Slides for Lecture Speech Recognition by Hiroki Yamazaki, Koji Iwano, Koichi Shinoda, Sadaoki Furui and Haruo Yokota, Rhetorical Structure Modeling for Lecture Speech Summarization by Pascale Fung, Justin Jian Zhang, Ricky Ho Yin Chan and Shilei Huang and Topic Segmentation and Retrieval System for Lecture Videos based on Spontaneous Speech Recognition by Natsuo Yamamoto, Jun Ogata and Yasuo Ariki.”

(http://phoster.wordpress.com/2013/12/23/mathematics-educational-technology-and-multimodal-user-interfaces/)

The above publications discuss multimodal input and note-taking in classrooms, auditoriums and meetings, and with Web content. Yamazaki et al. indicate that presentation slides can enhance speech and handwriting recognition; in addition to syllabi, digital textbooks can likewise be processed to enhance multimodal input recognition. Technologies such as document summarization, outlining and topic modeling, as per Yamamoto et al., may be of use for dynamic vocabulary models and multimodal interpretation contexts.
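To make the idea concrete, here is a minimal, purely illustrative sketch of deriving a topic vocabulary from course material (e.g. a slide or a textbook passage) by term frequency, which could then bias a dynamic language model. The function name, stopword list and weighting are assumptions for illustration only; the cited systems use considerably richer adaptation techniques.

```python
# Illustrative sketch: derive a topic vocabulary from course text
# (e.g. presentation slides or a textbook chapter) to bias a dynamic
# language model. All names here are hypothetical.
from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "as", "for", "on"}

def topic_vocabulary(text, size=50):
    """Return the most frequent content words in `text`."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [w for w, _ in counts.most_common(size)]

slide_text = ("The derivative of a function measures the rate of change. "
              "The derivative is defined as a limit of difference quotients.")
print(topic_vocabulary(slide_text, size=5))
```

A real system would feed such a vocabulary into a recognizer's language-model adaptation interface rather than merely printing it.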

With regard to processing digital textbooks, relevant technologies include, in addition to textbook indices (http://www.idpf.org/epub/idx/), the EPUB structural semantics vocabulary (http://www.idpf.org/epub/vocab/structure/) and RDFa.
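As a hedged sketch of such processing, the following Python extracts index terms from an EPUB XHTML index document via the epub:type attribute. The sample markup and the "index-term" value follow the IDPF EPUB Indexes draft; real publications may structure index entries differently.

```python
# Sketch: collect index terms from an EPUB XHTML index via the
# epub:type structural vocabulary. The sample markup is illustrative.
import xml.etree.ElementTree as ET

EPUB_NS = "http://www.idpf.org/2007/ops"  # namespace of epub:type

sample = """<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:epub="http://www.idpf.org/2007/ops">
  <body>
    <section epub:type="index">
      <ul>
        <li epub:type="index-entry">
          <span epub:type="index-term">derivative</span></li>
        <li epub:type="index-entry">
          <span epub:type="index-term">integral</span></li>
      </ul>
    </section>
  </body>
</html>"""

def index_terms(xhtml):
    """Return the text of every element typed epub:type='index-term'."""
    root = ET.fromstring(xhtml)
    return ["".join(el.itertext()).strip()
            for el in root.iter()
            if el.get("{%s}type" % EPUB_NS) == "index-term"]

print(index_terms(sample))
```

Terms harvested this way could serve as one input to the recognition contexts discussed above.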

In addition to handwriting recognition contexts (http://msdn.microsoft.com/en-us/library/windows/desktop/ms702421(v=vs.85).aspx, http://msdn.microsoft.com/en-us/library/windows/desktop/ms700645(v=vs.85).aspx), dynamic language models, and user, application and task lexicons, open topics include: expanding glyphic and notational contexts for handwritten mathematics input; contextual domains for sketch recognition and the recognition of diagrams; and data formats with which to indicate diagrammatic lexicons to handwriting and sketch recognition components.
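No standard data format for diagrammatic lexicons exists in the materials cited here; the following sketch merely illustrates, with an invented structure, what declaring and merging domain lexicons for handwriting and sketch recognition components might look like.

```python
# Hypothetical "diagrammatic lexicon": a declarative description of the
# symbols, shapes and terms a recognizer should expect in a domain.
# The structure and field names are invented for illustration.

geometry_lexicon = {
    "domain": "plane-geometry",
    "glyphs": ["∠", "⊥", "∥", "≅", "△"],                      # notational context
    "shapes": ["line-segment", "circle", "triangle", "arc"],   # sketch primitives
    "terms": ["bisector", "hypotenuse", "tangent"],            # handwriting vocabulary
}

def merge_lexicons(*lexicons):
    """Union several domain lexicons into one recognition context."""
    merged = {"glyphs": set(), "shapes": set(), "terms": set()}
    for lex in lexicons:
        for key in merged:
            merged[key] |= set(lex.get(key, []))
    return {k: sorted(v) for k, v in merged.items()}

context = merge_lexicons(geometry_lexicon)
print(context["shapes"])
```

A serialization of such a structure (e.g. as JSON or XML) is one conceivable answer to the data-format question raised above.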

Platforms can utilize multimodal input, data from multiple applications, and data from materials such as syllabi, textbooks, documents and presentation slides to enhance the contexts for multimodal, speech and handwriting recognition.

Kind regards,

Adam Sobieski

From: Adam Sobieski
Sent: Thursday, January 2, 2014 12:00 AM
To: www-math@w3.org, www-multimodal@w3.org, www-voice@w3.org

Math Working Group,

Multimodal Interaction Working Group,

Voice Browser Working Group,

Greetings. With regard to digital textbooks and topics including mathematics, handwriting recognition, speech recognition, speech synthesis and multimodal interaction, here are some projects, publications and hyperlinks:

EPUB
http://idpf.org/epub/30

http://www.idpf.org/epub/301/spec/epub-changes.html

Web Components
http://www.polymer-project.org/

http://angularjs.org/

http://x-tags.org/

WebGL
http://www.khronos.org/webgl/

http://threejs.org/ (https://github.com/Polymer/three-js)
http://www.x3dom.org/

Social Reading and Annotations
http://www.webrtc.org/

http://graphics.cs.brown.edu/research/ReMarkableTexts/

http://liquidtext.net/

http://liris.cnrs.fr/advene/

Multimodal User Input
http://www.w3.org/TR/InkML/

http://www.w3.org/TR/speech-grammar/

http://www.w3.org/TR/semantic-interpretation/

http://www.w3.org/TR/emma11/

http://www.w3.org/TR/html5/embedded-content-0.html#the-canvas-element

http://www.w3.org/TR/html5/forms.html#the-input-element

http://msdn.microsoft.com/en-us/library/dd317324(VS.85).aspx
http://graphics.cs.brown.edu/research/pcc/research.html#mathpaper

http://cs.brown.edu/~jjl/mathpad/

http://cs.brown.edu/research/ptc/FluidMath.html

http://lurchmath.org/

Context and Multimodal User Input Recognition
http://www.idpf.org/epub/idx/

http://www.idpf.org/charters/2012/dictionaries/

http://www.w3.org/TR/html5/embedded-content-0.html#the-audio-element

http://www.w3.org/TR/html5/embedded-content-0.html#the-video-element

http://www.w3.org/TR/html5/embedded-content-0.html#the-track-element

http://msdn.microsoft.com/en-us/library/windows/desktop/ee318405(v=vs.85).aspx
http://msdn.microsoft.com/en-us/library/ee450772(v=vs.85).aspx

Speech Synthesis and Audio Overlays
http://www.w3.org/TR/speech-synthesis11/

http://www.idpf.org/epub/30/spec/epub30-mediaoverlays.html

Time-based Multimedia
http://html.adobe.com/edge/animate/

http://wam.inrialpes.fr/timesheets/ (http://www.w3.org/TR/timesheets/, https://github.com/timesheets/timesheets.js)
http://www.w3.org/TR/SMIL3/

Manifests and Bookmarking
http://w3c.github.io/manifest/

Desktop Search
hypertext
multimedia
3D graphics


Kind regards,

Adam Sobieski

http://phoster.wordpress.com/
Received on Friday, 3 January 2014 00:30:26 UTC
