Re: [MEDIA_PIPELINE_TF] Adaptive Bit Rate Architecture

Just one point to be careful about. The existing (though experimental) webkitSourceAppend method allows actual video data, supplied in a JavaScript ByteArray, to be appended to the video element. This is intended to enable full adaptive bitrate support to be implemented in JavaScript (Model 3). (The variant of this API on our wiki takes a URL and byte range rather than a ByteArray, but the data provided and the principle are the same.)
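
For concreteness, a minimal sketch of what script-driven appending might look like. The method and state names follow the experimental WebKit proposal and are assumptions, not a stable API:

```javascript
// Sketch of Model 3: script-driven appending via the experimental
// webkitSourceAppend API. Property and method names here follow the
// early WebKit proposal and may differ across builds.

// Append one media segment (raw bytes) to a video element, guarding
// on the experimental source state first.
function appendSegment(video, bytes) {
  if (video.webkitSourceState !== 'open') {
    throw new Error('media source is not open for appending');
  }
  video.webkitSourceAppend(new Uint8Array(bytes));
}
```

In a page this would sit inside a segment-fetch loop: the script picks the next segment at whatever bitrate its own heuristics have chosen, downloads it, and hands the bytes to appendSegment().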

The requirement to seamlessly splice one self-contained video after another is a very different thing. What is provided there is a URL for a whole piece of content - which could itself be an adaptive streaming manifest. We should be careful not to get the two confused.

Clearly, in any case where the UA is responsible for controlling downloading then it has to "do the right thing" with respect to resource usage for the old and new videos.

...Mark


On Dec 13, 2011, at 11:46 PM, Mays, David wrote:

I have some reservations about Use Case 9.

"An application developer would like to create a seamless video experience by appending segments or video sources without stopping playback"

Let's assume for a moment I have a piece of media playing, and during its playback I decide that another piece of media is to be appended to it because my ad server finally returned a URI to a post-roll. When my application signals the UA with player.appendVideo('http://someuri') my assumption is that this will cause the UA to begin buffering this other video, so that it is ready to start playing at the moment the first video ends.

Based on that set of assumptions, I would be concerned with the impact of two concurrently downloading pieces of media causing issues for the adaptive heuristics. It's quite conceivable that I have now halved my available bandwidth, which may cause a disruption to the playback experience of the currently playing media.

I imagine different UAs could also come up with different schemes for the timing of such an activity. Maybe some would start buffering immediately upon the call to appendVideo(). Others might wait until near the end of playback of the first video. There's certainly room for creativity there.
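
As an illustration only - not something any UA would be obliged to do - the "wait until near the end" scheme boils down to a small policy check. The function and parameter names here are hypothetical:

```javascript
// Hypothetical prefetch policy for an appended video: start buffering
// the next clip only once the time remaining in the current clip
// approaches the estimated time needed to download the next one.
// The default 5-second margin is an arbitrary illustration.
function shouldStartPrefetch(currentTime, duration, estFetchSeconds, marginSeconds = 5) {
  const remaining = duration - currentTime;
  return remaining <= estFetchSeconds + marginSeconds;
}
```

Starting earlier than this trades bandwidth contention (the halving concern above) for safety; starting later trades safety for a possible gap at the splice point.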

The UA may also not know much about the next piece of media until it starts loading it.
 - Is the media playable at all by this UA? (codec, format, etc.)
 - How much bandwidth will it require to begin buffering it? (Will the new stream starve my existing stream?)
 - If the new format is different from my current video format, will the two different sets of adaptation heuristics work in conflict with one another?
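
Some of these questions can at least be probed from script before the append. A sketch of such a preflight check, where canPlayType() is the standard HTML5 probe and the bandwidth comparison is a deliberate simplification with made-up parameters:

```javascript
// Preflight check before appending a new source: can the UA play it at
// all, and does its declared bitrate leave room alongside the stream
// already playing? The "both must fit" rule is an assumption for
// illustration, not a real adaptation heuristic.
function canAppendSafely(video, mimeType, declaredBitrate, measuredBandwidth, currentBitrate) {
  if (video.canPlayType(mimeType) === '') {
    return false; // codec/container not supported at all
  }
  // Rough check: both streams together must fit in measured bandwidth.
  return declaredBitrate + currentBitrate <= measuredBandwidth;
}
```

This does nothing about the third bullet - two independent sets of adaptation heuristics can still fight each other even when both streams nominally fit.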

The bottom line is that I see some practical implementation issues here, and I can imagine resistance to this from the UA creators. This means we need to be absolutely crystal clear about both what this use case is and what it is not.

Jason, could you add some clarity as to the intent of this use case? It may be sufficient to state that while the intent of the use case is for "seamless" playback, a conforming implementation need not guarantee a "perfect" experience.

Thanks,

Dave


________________________________________
From: Lewis, Jason [Jason.Lewis@disney.com]
Sent: Tuesday, December 13, 2011 2:45 AM
To: 이현재; 'Clarke Stevens'; public-web-and-tv@w3.org<mailto:public-web-and-tv@w3.org>
Subject: Re: [MEDIA_PIPELINE_TF] Adaptive Bit Rate Architecture

Hi, in general I agree with the 3 architecture models as well.
For providing content, I think models 1 & 3 are most important:

Model 1: Reporting & QoS metrics are critical. When playing HLS in an
HTML5 player, we have no clear view of which bitrates are truly optimal.
Delivering to phones, tablets, and desktops across varying quality wifi or
3G networks can be like throwing darts blindfolded :)
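
This is the gap Model 1 reporting would fill. Lacking UA-reported metrics, about the best script can do today is infer throughput from its own segment downloads; a rough sketch, with an arbitrarily chosen smoothing factor:

```javascript
// Rough client-side throughput estimator: an exponentially weighted
// moving average over observed segment downloads, in bits per second.
// The smoothing factor (0.3) is an arbitrary choice for illustration.
class ThroughputEstimator {
  constructor(alpha = 0.3) {
    this.alpha = alpha;
    this.estimate = null; // bits per second, null until first sample
  }
  addSample(bytes, milliseconds) {
    const bps = (bytes * 8) / (milliseconds / 1000);
    this.estimate = this.estimate === null
      ? bps
      : this.alpha * bps + (1 - this.alpha) * this.estimate;
    return this.estimate;
  }
}
```

Even this only sees the network; it says nothing about decode performance or dropped frames, which is exactly why UA-side reporting matters.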

Model 2: Nice to have, but the heuristics of selecting a bitrate based on
bandwidth & client decoding performance are pretty well understood.
Application developers shouldn't have to deal with this in order to
provide content to a customer.
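
For a sense of why these heuristics are considered well understood, the core selection step is small. A conventional sketch, where the 0.8 safety factor is an arbitrary assumption:

```javascript
// Core of a conventional bitrate-selection heuristic: choose the
// highest manifest rendition whose declared bandwidth fits under a
// fraction of measured throughput. The 0.8 safety factor is an
// illustrative assumption, not a recommended value.
function selectBitrate(manifestBitrates, measuredBandwidth, safetyFactor = 0.8) {
  const budget = measuredBandwidth * safetyFactor;
  const fitting = manifestBitrates.filter((b) => b <= budget);
  // Fall back to the lowest rendition when nothing fits.
  return fitting.length ? Math.max(...fitting) : Math.min(...manifestBitrates);
}
```

Real implementations layer buffer occupancy, decode capability, and switch damping on top of this, but the shape of the decision is the same.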

Model 3: Dynamic client-side selection & appending of video segments (or
sources) is critical to present seamless video experiences to viewers.
Application developers' capabilities on HTML5 generally lag behind in this
area.

I've also added three new use cases related to these thoughts:
http://www.w3.org/2011/webtv/wiki/MPTF/ADR_Error_Codes#Use_Cases


Thoughts? Thanks, J
--
Jason Lewis
Disney Technology Solutions and Services



On 12/12/11 6:18 PM, "이현재" <hj08.lee@lge.com<mailto:hj08.lee@lge.com>> wrote:

>Hi Clarke,
>
>I think the 3 architecture models approach on adaptive streaming you
>proposed is very good.
>At first, the full-media-control approach will be the easiest for the
>companies to discuss when deciding what functionality is needed for the
>video tag.
>The ultimate goal is surely the minimal-control approach, in which the
>video tag object performs the agreed necessary functions automatically,
>without application developer intervention.
>
>Let's hear content providers' and TV manufacturers' voice on what
>functionalities are necessary.
>Content providers and TV manufacturers, please speak up.
>
>Best regards,
>HJ / LG Electronics
>
>-----Original Message-----
>From:
>Sent: (none)
>To: Clarke Stevens; public-web-and-tv@w3.org<mailto:public-web-and-tv@w3.org>
>Subject: RE: [MEDIA_PIPELINE_TF] Adaptive Bit Rate Architecture
>
>Hello,
>
>I have a couple of questions on the proposal.
>
>1. segment definition
>Can a definition of "segment" be added to the proposal? It would help in
>going over the comments below.
>
>2. maxLevel (input)
>What is the format of the input? Is it a segment identifier or a
>bandwidth? If agreeable, I would recommend adopting a bandwidth value
>that maps to the bandwidth declared in the manifest. Even though that
>bandwidth is a nominal level based on what the manifest reports rather
>than on actual throughput, it is the best form in which to indicate a
>maximum limit. An extra argument could also be included indicating the
>position at which this max level should be applied. An issue for
>implementers is that simply indicating a max level may have different
>effects depending on how the buffer is handled. If the limit can be
>applied at a future position, it will make for a smoother transition.
>
>3. callback change in representation
>I am missing a callback reporting when the representation has changed. It
>might be what is called a segment in the proposal, but I am not sure. This
>callback would be reported only when the "bandwidth" has changed. The
>"position" at which this change occurs should also be included, since the
>change may occur at a future point.
>
>Regards,
>JanL
>
>-----Original Message-----
>From: Clarke Stevens [mailto:C.Stevens@cablelabs.com]
>Sent: den 9 december 2011 20:01
>To: public-web-and-tv@w3.org<mailto:public-web-and-tv@w3.org>
>Subject: [MEDIA_PIPELINE_TF] Adaptive Bit Rate Architecture
>
>Please take a look at the Wiki page for Adaptive Bit Rate.
>
>http://www.w3.org/2011/webtv/wiki/MPTF/ADR_Error_Codes
>
>I'd like to have a good list of parameters and errors (for Model 1 in
>particular) that we can provide for MPTF folks to review over the next few
>days. You can make suggested edits directly, or post your ideas to the
>reflector.
>
>Also, please make sure we are in agreement on the definitions of the
>architectural models.
>
>Thanks,
>-Clarke
>
>
>
>

Received on Wednesday, 14 December 2011 16:59:34 UTC