- From: Njaal Borch <njaal.borch@norut.no>
- Date: Thu, 7 May 2015 22:32:22 +0200
- To: Daniel Davis <ddavis@w3.org>
- Cc: "public-web-and-tv@w3.org" <public-web-and-tv@w3.org>
- Message-ID: <CAOc996sjLsyDrsCqwHpZ8Gc6723H=FpKBFk3RVfhD28ZFFSpgw@mail.gmail.com>
Hi Daniel & all,

The discussion is very interesting - thanks for letting us know! In the Multi-Device Timing Community Group we face many similar challenges, and we would like to share some of our experiences that could be inspirational or potential pieces of the puzzle. We have had a look at some of the MSE Ad Insertion Use Cases [1].

As we synchronize multiple browsers across different operating systems, we typically face the worst-case scenario: media resources with different codecs and bit rates, often residing on different servers, and transported and processed independently. Even so, depending somewhat on the browser, we are able to synchronize playback to within frame accuracy [2] - down to about 1 millisecond for multiple devices playing audio in Chrome, and about 7 ms between Chrome and Firefox playing video [3].

Our approach is to conceptually map media and data onto a timeline, and then use an explicit timing object to control the progress of the presentation. That means we do not let a media element be the master of an experience (e.g. drive the track element). Instead, media elements and track elements are both slaves to the timing object. This decoupling limits complexity while making it very easy to switch seamlessly between media elements: they can, for instance, overlap (with one hidden), be cross-faded using simple CSS, or be replaced by audio only (saving bandwidth).

In addition, we have created something like a generic track element (a sequencer, aka "MovingCursor") which gives us millisecond-precision upcalls. For media synchronization, we have implemented a simple MediaSync wrapper which adjusts the currentTime and playbackRate properties to approximate the ideal position given by the timing object. We have demonstrated that this approach provides excellent results while minimising the complexity of media elements. It also preserves the flexibility we are used to from HTML, enabling tight timing for any data type in a device-independent and interoperable way. Having this core functionality standardized would give even more consistent experiences, would likely add very little complexity to the media elements, and would provide a generic and common timing model for the Web.

The Multi-Device Timing CG has a concrete proposal for an HTMLTimingObject [4], which (only) provides high-precision timing. While somewhat similar in spirit to the MediaController, it offers a clearer separation of concerns. The HTMLTimingObject can be used locally (to synchronize multiple elements) as well as remotely, using the concept of Shared Motions for multi-device timing. For further details, a recent paper [5] gives a high-level overview.

We have a simple demonstration putting it all together, showing how we "render" our YouTube videos in HTML [6]. A live version of this demo is available as well, but beware that it is a shared experience (no logins), so anyone in the world can start controlling it. Controls: [7], rendered result: [8]. These can of course be run on different or multiple devices. If anyone wants to experiment first hand, a commercial online timing service is provided by the Motion Corporation [9]. Click "developer" and follow the how-tos - it will likely take you less than 20 minutes to have a collaborative video playing!
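To make the timing-object model described above a bit more concrete, here is a minimal sketch in TypeScript. This is not the CG's proposed HTMLTimingObject API; the LocalTimingObject class, its query()/update() methods and the vector fields are illustrative names only. The point is simply that progress is defined by a small (position, velocity, timestamp) vector that any element can query or change, independently of any media element.

interface TimingVector {
  position: number;   // position on the shared timeline (seconds)
  velocity: number;   // timeline units per second (1.0 = normal speed, 0 = paused)
  timestamp: number;  // when the vector was sampled (seconds, from performance.now())
}

function now(): number { return performance.now() / 1000; }

class LocalTimingObject {
  private vector: TimingVector = { position: 0, velocity: 0, timestamp: now() };
  private listeners: Array<() => void> = [];

  // Current state, extrapolated from the last stored vector.
  query(): TimingVector {
    const t = now();
    const dt = t - this.vector.timestamp;
    return {
      position: this.vector.position + this.vector.velocity * dt,
      velocity: this.vector.velocity,
      timestamp: t,
    };
  }

  // Jump and/or change speed; every slave reacts to the same change.
  update(position?: number, velocity?: number): void {
    const current = this.query();
    this.vector = {
      position: position ?? current.position,
      velocity: velocity ?? current.velocity,
      timestamp: current.timestamp,
    };
    this.listeners.forEach((cb) => cb());
  }

  on(cb: () => void): void { this.listeners.push(cb); }
}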
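Building on that sketch, a MediaSync-style wrapper of the kind mentioned above could look roughly as follows. The thresholds and the gain are made-up illustrative values, not those of our actual implementation; the idea is just that small skews are corrected by adjusting playbackRate and large skews by seeking.

// Nudge an HTMLMediaElement towards the timing object's position.
function syncMediaElement(media: HTMLMediaElement, timing: LocalTimingObject): void {
  const SEEK_THRESHOLD = 0.5; // seconds; above this, seek instead of adjusting rate
  const RATE_GAIN = 0.5;      // how aggressively small skews are corrected

  setInterval(() => {
    const target = timing.query();

    if (target.velocity === 0) { media.pause(); return; }
    if (media.paused) { void media.play(); }

    const skew = target.position - media.currentTime;
    if (Math.abs(skew) > SEEK_THRESHOLD) {
      media.currentTime = target.position;   // hard correction: seek
      media.playbackRate = target.velocity;
    } else {
      // Soft correction: run slightly fast or slow until the skew is gone.
      media.playbackRate = target.velocity + skew * RATE_GAIN;
    }
  }, 100);
}

Usage would then be as simple as syncMediaElement(document.querySelector("video")!, timing), with several elements - or several devices - attached to the same timing object.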
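Finally, a toy version of the sequencer ("MovingCursor") idea, again assuming the LocalTimingObject sketch: cues are intervals on the timeline, and enter/exit upcalls are scheduled with setTimeout against the extrapolated position rather than being driven by a media element's timeupdate events. The Cue interface and scheduleCue function are hypothetical names for illustration only.

interface Cue {
  start: number;        // timeline position where the cue becomes active (seconds)
  end: number;          // timeline position where it stops being active
  onEnter: () => void;
  onExit: () => void;
}

function scheduleCue(timing: LocalTimingObject, cue: Cue): void {
  let timers: number[] = [];

  const arm = () => {
    timers.forEach((id) => clearTimeout(id));
    timers = [];
    const v = timing.query();
    if (v.velocity <= 0) return;  // this sketch only handles forward playback
    const toEnter = (cue.start - v.position) / v.velocity;
    const toExit = (cue.end - v.position) / v.velocity;
    if (toEnter > 0) timers.push(setTimeout(cue.onEnter, toEnter * 1000));
    if (toExit > 0) timers.push(setTimeout(cue.onExit, toExit * 1000));
  };

  arm();
  timing.on(arm);  // re-arm whenever the timing object jumps or changes speed
}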
[1]: https://www.w3.org/wiki/HTML/Media_Task_Force/MSE_Ad_Insertion_Use_Cases
[2]: https://www.youtube.com/watch?v=lfoUstnusIE
[3]: http://mcorp.no/publications/dist_html5_sync_2014.pdf
[4]: http://webtiming.github.io/timingobject/
[5]: http://mcorp.no/publications/composition_2015.pdf
[6]: https://www.youtube.com/watch?v=oK6gbU4w7_Q
[7]: http://mcorp.no/examples/film/
[8]: http://mcorp.no/examples/film/vid.html
[9]: http://motioncorporation.com

Hope you find some of these ideas interesting, and if you like we would be very happy to demonstrate the concepts live in an online meeting.

Best regards,
Njål Borch and Ingar Arntzen

---
Dr. Njål Borch
Senior researcher
Norut
Tromsø, Norway

On 7 May 2015 at 11:24, Daniel Davis <ddavis@w3.org> wrote:
> Hello all,
>
> This is to let you know about a discussion within the HTML Working Group
> Media Task Force that would benefit from wider input, especially from
> people here in the Web and TV Interest Group.
>
> Initially intended as part of the MSE spec, it's currently a collection
> of use cases for alternate content insertion in media [1]. However at
> the recent Media Task Force face-to-face meeting [2] it was deemed that
> the scope could be more than just MSE. There may even be some cross-over
> with the current GGIE work [3] and so before a solution or target spec
> is decided on there needs to be more feedback.
>
> If this topic is of interest to you please could you take a look at the
> use cases listed so far and feel free to edit, add or comment based on
> your experience and requirements? The face-to-face meeting minutes are a
> good place to see the discussion so far:
> http://www.w3.org/2015/04/15-html-media-minutes.html#item05
>
> Thank you in advance,
> Daniel Davis
> W3C
>
> [1] https://www.w3.org/wiki/HTML/Media_Task_Force/MSE_Ad_Insertion_Use_Cases
> [2] https://www.w3.org/wiki/HTML/wg/2015-04-Agenda
> [3] https://www.w3.org/2011/webtv/wiki/GGIE_TF
Received on Thursday, 7 May 2015 20:32:51 UTC