Interactive Multimodal Mathematics-enhanced Optical Character Recognition

W3C Math Working Group,
W3C MathML4 Community Group,

Hello. I would like to share a hyperlink to a recent proposal about interactive multimodal mathematics-enhanced optical character recognition: https://github.com/immersive-web/proposals/issues/64 . As envisioned, users could use XR devices to interactively scan content – including mathematics – from paper, chalkboards, dry-erase boards, and other surfaces. Scenarios of interest include STEM education and collaboration.

Input modalities include video and audio, as well as pointing and/or eye tracking while written content is read aloud. Output is envisioned as HTML and MathML.
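
For illustration only, here is a minimal sketch of what such output might look like: an HTML fragment embedding presentation MathML for a recognized quadratic formula. The particular markup below is an assumption on my part, not a specification of the proposal's output.

  <p>Recognized from the whiteboard:</p>
  <math xmlns="http://www.w3.org/1998/Math/MathML" display="block">
    <mi>x</mi>
    <mo>=</mo>
    <mfrac>
      <mrow>
        <mrow><mo>-</mo><mi>b</mi></mrow>
        <mo>&#xB1;</mo>
        <msqrt>
          <msup><mi>b</mi><mn>2</mn></msup>
          <mo>-</mo>
          <mrow><mn>4</mn><mi>a</mi><mi>c</mi></mrow>
        </msqrt>
      </mrow>
      <mrow><mn>2</mn><mi>a</mi></mrow>
    </mfrac>
  </math>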


Best regards,
Adam Sobieski

P.S.: See also: Pen-centric Computing (https://github.com/immersive-web/proposals/issues/65). Users could also, via XR devices and multimodal computer vision, author digital mathematics-enhanced documents while simultaneously writing on paper, chalkboards, dry-erase boards, or other surfaces with pencils, pens, chalk, dry-erase markers, or XR stylus devices. Technologies involved in these scenarios include InkML and MathML; a minimal InkML sketch follows below.
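
For readers less familiar with InkML, here is a minimal sketch of how a captured pen stroke might be represented – the coordinate values are made up for illustration, and a real capture would likely include additional channels such as time or pressure.

  <ink xmlns="http://www.w3.org/2003/InkML">
    <traceFormat>
      <channel name="X" type="decimal"/>
      <channel name="Y" type="decimal"/>
    </traceFormat>
    <trace>
      10 0, 9 14, 8 28, 7 42, 6 56, 6 70, 8 84, 8 98, 8 112, 9 126
    </trace>
  </ink>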

Received on Saturday, 9 January 2021 08:03:19 UTC