Re: Overview of Media Technologies for the Web

Hi Ingar,

On 02/10/2017 at 12:07, Ingar Mæhlum Arntzen wrote:
> Francois,
>
> comments inline
>
>
>         Some comments - mostly related to the Multi-device Timing CG.
>
>         - Media Control. One aspect of media that could additionally be
>         addressed here relates to flexibility. For example, can a media
>         experience be controlled from two devices or by multiple users? Can
>         control easily be handed over from one device to another? In a
>         group, can media control be symmetric (everybody can control) or
>         asymmetric (only a select few can control)? This would be an area
>         where exploratory work is covered by the Multi-device Timing CG.
>
>
>     The Multi-Device Timing CG is mentioned in the exploratory work
>     section, but the description can certainly be improved. The goal is
>     to keep the text short for each feature to ease maintenance. Would you
>     have some concrete text to propose by any chance? :)
>
>
> I'll see if I can come up with a few lines :)

Thanks, I captured the idea in a GitHub issue in the meantime:
https://github.com/w3c/web-roadmaps/issues/96
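
In case it helps seed those few lines, here is a rough sketch of what
symmetric vs. asymmetric control could look like on top of the Timing
Object API the CG is drafting. The timing provider class and session URL
are made up for illustration; only update() and query() come from the
draft spec:

    // A timing object shared across devices through a timing provider.
    // "MyTimingProvider" and the session URL are hypothetical.
    const timing = new TimingObject(
      new MyTimingProvider("wss://example.org/session/42"));

    // Symmetric control: any device in the group may drive the timeline.
    function play()  { timing.update({ velocity: 1 }); }
    function pause() { timing.update({ velocity: 0 }); }
    function seek(position: number) { timing.update({ position }); }

    // Asymmetric control: only expose play/pause/seek on "master"
    // devices; the others merely observe the shared timeline.
    timing.addEventListener("change", () => {
      const v = timing.query();  // { position, velocity, ... }
      console.log(`timeline at ${v.position}s, velocity ${v.velocity}`);
    });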


>         - Media Rendering vs Content Orchestration. The distinction between
>         these two categories at first seems a bit unclear to me, with
>         distributed playback being a theme of both.
>         The way I read it, Media Rendering is focused on standalone playback
>         components, plus mechanisms for piping media output to another
>         device.
>         Content orchestration seems to be about temporal coordination of
>         independent playback components (be it within a single web page or
>         across different devices).
>         Would this be correct?
>
>
>     That is correct, and the distinction is indeed imperfect. I don't
>     think there exists a clean hierarchy that divides these technologies,
>     and some of them are listed in more than one place. That's not a bug.
>     I think I'm going to add cross-links between pages (for instance, the
>     Media Rendering page mentions the Timing Object spec, but the Content
>     Orchestration page will provide more details, so it would make sense
>     to link to that page from the Media Rendering one).
>
>
> Sounds good. As these two themes do have similarities, a note
> (cross-link) pointing out subtle differences may be very helpful.

I have not added the cross-links yet, essentially because the framework
does not yet generate fragment anchors that I could use, but please bear
with me ;)
https://github.com/w3c/web-roadmaps/issues/95
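
To make the subtle difference concrete, the cross-link note could point
at something like the sketch below: the <video> element itself belongs
to Media Rendering, while slaving it to an external timeline belongs to
Media Orchestration. The thresholds and polling interval are
illustrative, not taken from any spec:

    // Keep an independent playback component locked to a shared timeline.
    function syncVideo(video: HTMLVideoElement, timing: TimingObject) {
      setInterval(() => {
        const target = timing.query().position;
        const skew = video.currentTime - target;  // positive: video ahead
        if (Math.abs(skew) > 1.0) {
          video.currentTime = target;  // large skew: hard seek
        } else {
          // Small skew: nudge the playback rate to converge smoothly.
          const correction = Math.max(-0.1, Math.min(0.1, skew));
          video.playbackRate = 1.0 - correction;
        }
      }, 500);
    }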


>         - Content Orchestration. Could it rather be named Media
>         Orchestration?
>         Within MPEG there is already an ongoing initiative [1] using
>         this term.
>
>         [1]
>         https://mpeg.chiariglione.org/standards/mpeg-b/media-orchestration
>
>
>     I'm trying to remember how we ended up with Content Orchestration. I
>     suppose the idea was that you'll want to orchestrate media and
>     non-media content at the same time. I'm fine renaming the section to
>     Media Orchestration though. Also, this would mean we'll have "Media"
>     as a common prefix for all page titles, which seems good.
>
>
> Excellent.

Renaming done.


>         - Media Capture. Please also include the aspect of timestamp
>         accuracy in captured content. This is of great importance for any
>         multi-angle media production involving video and audio, and more
>         generally for all media production where captured media should be
>         precisely relatable to a real-world clock (epoch).
>
>         Capturing devices may provide a timestamp, but it is rarely known
>         when exactly this timestamp was taken (i.e. sometime after
>         requesting an image and before the image is available in JS; this
>         could be hundreds of milliseconds). Equally important, upstream
>         delays in the sensor processing pipeline must be known, so that
>         timestamps can be compensated. This is analogous to work done in
>         the Web Audio API, where downstream delays are exposed for media
>         output. As delays may not always be known, techniques for
>         measurement and calibration should likely be explored.
>         This is addressed as exploratory work within the Multi-device
>         Timing CG.
>
>
>     Isn't that what the latency media track capability would provide?
>     http://www.w3.org/TR/mediacapture-streams/#def-constraint-latency
>
>
>     I'm fine adding an exploratory work section that mentions
>     discussions on timestamp accuracy in captured content. Concrete text
>     welcome as well ;)
>
>
> Excellent. Good thing this is already in the specs for media streams. I
> couldn't see it mentioned on the media capture page though. Also, the
> relevance isn't limited to media streams, so I thought it would be worthy
> of a mention as part of the media capture description.

I added the Media Capture and Streams specification to the Media 
Orchestration page to mention the latency property:
http://w3c.github.io/web-roadmaps/media/synchronized.html

I added some more generic text to the media capture page (and will add a
cross-link once I have the required fragment IDs).
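
For reference, here is a quick sketch of how a page could use that
latency property to compensate capture timestamps. Whether a device
actually reports latency is implementation-dependent, and the
compensation shown is only an approximation:

    async function estimateCaptureTime() {
      // Ask for a low-latency audio capture (constraint from the
      // Media Capture and Streams spec linked above).
      const stream = await navigator.mediaDevices.getUserMedia({
        audio: { latency: { ideal: 0.01 } }
      });
      const track = stream.getAudioTracks()[0];
      const latency = track.getSettings().latency;  // seconds, if reported
      if (latency !== undefined) {
        // Estimate when audio observed "now" actually hit the microphone.
        return performance.now() / 1000 - latency;
      }
      return undefined;  // delay unknown: calibration would be needed
    }

The Web Audio API offers the downstream counterpart: AudioContext
exposes baseLatency and outputLatency so that pages can compensate for
output delays in a similar way.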

Thanks,
Francois.
