Re: Intro - Multi-device Timing CG

Thank you for your input, Ingar (I assume this is your first name?)

The "timing object" certainly looks like a useful and powerful API.
If I am not mistaken, this proposal focuses mainly on programmatic usage
(JavaScript)?

If so, do you envision some kind of declarative syntax that would allow
content creators (web and digital publishing) to encode a "static" /
persistent representation of synchronized multimedia streams?
For example, EPUB3 "read aloud" / "talking books" are currently authored
using the Media Overlays flavour of SMIL (XML), and long-form synchronized
text+audio content is typically generated via some kind of semi-automated
production process.
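
For concreteness, such a Media Overlay document is roughly shaped like the
sketch below (the file names, fragment identifiers and clip times are
invented purely for illustration):

  <!-- chapter1.smil: pairs HTML fragments with clips of the narration audio -->
  <smil xmlns="http://www.w3.org/ns/SMIL"
        xmlns:epub="http://www.idpf.org/2007/ops" version="3.0">
    <body>
      <seq id="seq1" epub:textref="chapter1.xhtml">
        <par id="par1">
          <text src="chapter1.xhtml#sentence1"/>
          <audio src="audio/chapter1.mp3" clipBegin="0:00:00.000" clipEnd="0:00:04.250"/>
        </par>
        <par id="par2">
          <text src="chapter1.xhtml#sentence2"/>
          <audio src="audio/chapter1.mp3" clipBegin="0:00:04.250" clipEnd="0:00:09.100"/>
        </par>
      </seq>
    </body>
  </smil>

Each <par> pairs one HTML fragment with one audio time range, which is
essentially the "sync point" structure I am asking about.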

I am thinking specifically about: (1) an HTML document, (2) a separate
audio file representing the pre-recorded human narration of the HTML
document, and (3) some kind of meta-structure / declarative syntax that
would define the synchronization "points" between HTML elements and audio
time ranges.
Note that most existing "talking book" implementations render such combined
text/audio streams by "highlighting" / emphasizing individual HTML
fragments as they are being narrated (using CSS styles; see the sketch
below), but the same declarative expression could be rendered with a
karaoke-like layout, etc.
Of course, there are also other important use cases such as video+text,
video+audio, etc., but I just wanted to pick your brain about a concrete
use case in digital publishing / EPUB3 e-books :)
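
To illustrate the rendering side, here is a minimal vanilla JavaScript sketch
of that highlighting behaviour, driven directly by an <audio> element's clock
(the element ids, the file and the sync points are hypothetical; in a
timing-object-based design the media clock would presumably be provided by
the timing object instead):

  // Hypothetical sync points: HTML fragment id + audio time range (seconds),
  // e.g. derived from a declarative source such as a Media Overlay document.
  const syncPoints = [
    { id: "sentence1", begin: 0.0,  end: 4.25 },
    { id: "sentence2", begin: 4.25, end: 9.1  }
  ];

  const audio = document.querySelector("audio#narration");

  // On each clock tick, highlight the fragment whose time range contains the
  // current playback position (via a CSS class) and clear all the others.
  audio.addEventListener("timeupdate", () => {
    const t = audio.currentTime;
    for (const { id, begin, end } of syncPoints) {
      const el = document.getElementById(id);
      if (el) {
        el.classList.toggle("narrated", t >= begin && t < end);
      }
    }
  });

A karaoke-like presentation would only need to style (or scroll) the
"narrated" fragments differently; the synchronization logic stays the same.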

Cheers, and thanks!
Daniel





On 11 March 2018 at 21:34, Ingar Mæhlum Arntzen <ingar.arntzen@gmail.com>
wrote:

> Hi Marisa
>
> Chris Needham of the Media & Entertainment IG made me aware of the CG you're
> setting up.
>
> This is a welcome initiative, and it is great to see more people
> expressing the need for better sync support on the Web!
>
> I'm the chair of the Multi-device Timing CG [2], so I thought I'd say a few
> words about that, as it seems we have similar objectives. Basically, the
> scope of the Multi-device Timing CG is a broad one: synchronization of
> anything with anything on the Web, whether it is text synced with A/V
> within a single document, or across multiple devices. We have also proposed
> a full solution to this problem for standardization, with the timing object
> [3] being the central concept. I did have a look at the requirements
> document [4] you linked to, and it seems to me the timing object (and the
> other tools we have made available [5]) should be a good basis for
> addressing your challenges. For instance, a karaoke-style text presentation
> synchronized with audio should be quite easy to put together using these
> tools.
>
> If you have some questions about the model we are proposing, and how it
> may apply to your use cases, please send them our way :)
>
> Best regards,
>
> Ingar Arntzen
>
> [1] https://lists.w3.org/Archives/Public/public-sync-media-pub/2018Feb/0000.html
> [2] https://www.w3.org/community/webtiming/
> [3] http://webtiming.github.io/timingobject/
> [4] https://github.com/w3c/publ-wg/wiki/Requirements-and-design-options-for-synchronized-multimedia
> [5] https://webtiming.github.io/timingsrc/
>
>

Received on Monday, 12 March 2018 18:04:35 UTC