RE: [MEDIA_PIPELINE_TF] Adaptive Bit Rate Architecture

Can't RTSP help control stream switching? Should we leave control and some error messaging to RTSP and RTCP respectively? Should HTML5 just support these protocols, or should these be integrated/absorbed or just extended in HTML5?

It seems to me that the issue of bandwidth being compromised by multiple streams coming in at the same time could be handled in multiple ways without leaving the stream reception completely up to chance:

•        Future content could be pre-loaded and cached as bandwidth permits

•        The next content object could be started just before the edit point to minimize the impact of bandwidth reduction on the overall presentation. At worst, quality may degrade somewhat just before and immediately after the edit point, for a second or two. Content could be produced to compensate for this effect. Who cares if a cross-fade or dissolve gets grainy for a second on a smartphone, really?

•        Streams could be stitched together upstream from the player, in a node between the server and the player (as part of a CDN or similar function) or in the server itself. The player could send the sequence request to this node/server in advance of the edit point. This would act rather like a server-side playlist being written dynamically by the client app.
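For illustration, the client side of that last option might look something like this (a sketch only; the stitching endpoint and all payload field names are invented, not part of any spec):

```javascript
// Build the request an app might send to a hypothetical upstream
// stitching node, asking it to splice the next clip in at the edit point.
// All field names here are invented for illustration.
function buildSpliceRequest(currentUri, nextUri, editPointSeconds) {
  return {
    action: "append",
    current: currentUri,
    next: nextUri,
    editPoint: editPointSeconds // where the splice should occur, in seconds
  };
}

// The player would send this to the node well ahead of the edit point, e.g.
//   fetch("https://stitcher.example/playlist", { method: "POST",
//     body: JSON.stringify(buildSpliceRequest(cur, next, 120)) });
```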

Thanks!
-Paul

Q me<qto://talk/pg2483>


From: Mays, David [mailto:David_Mays@Comcast.com]
Sent: Wednesday, December 14, 2011 2:47 AM
To: Lewis, Jason; 이현재; 'Clarke Stevens'; public-web-and-tv@w3.org
Subject: RE: [MEDIA_PIPELINE_TF] Adaptive Bit Rate Architecture

I have some reservations about Use Case 9.

"An application developer would like to create a seamless video experience by appending segments or video sources without stopping playback"

Let's assume for a moment I have a piece of media playing, and during its playback I decide that another piece of media is to be appended to it because my ad server finally returned a URI to a post-roll. When my application signals the UA with player.appendVideo('http://someuri') my assumption is that this will cause the UA to begin buffering this other video, so that it is ready to start playing at the moment the first video ends.

Based on that set of assumptions, I would be concerned with the impact of two concurrently downloading pieces of media causing issues for the adaptive heuristics. It's quite conceivable that I have now halved my available bandwidth, which may cause a disruption to the playback experience of the currently playing media.

I imagine different UAs could also come up with different schemes for the timing of such an activity. Maybe some would start buffering immediately upon the call to appendVideo(). Others might wait until near the end of playback of the first video. There's certainly room for creativity there.
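For what it's worth, the "wait until near the end" scheme could even be driven from application code rather than left to the UA (a sketch; appendVideo() is the hypothetical API under discussion, and the five-second lead time is an arbitrary assumption):

```javascript
// Delay signalling the UA about the appended clip until playback is near
// the end of the current one, bounding the window in which two downloads
// compete for bandwidth. appendVideo() is hypothetical, per Use Case 9.
const PRELOAD_LEAD_SECONDS = 5; // arbitrary; a real app would tune this

// Pure timing check, split out so the policy is easy to reason about.
function nearEnd(currentTime, duration, leadSeconds) {
  return duration - currentTime <= leadSeconds;
}

function watchForAppendPoint(video, nextUri) {
  let appended = false;
  video.addEventListener("timeupdate", () => {
    if (!appended &&
        nearEnd(video.currentTime, video.duration, PRELOAD_LEAD_SECONDS)) {
      appended = true;            // signal the UA only once
      video.appendVideo(nextUri); // hypothetical call
    }
  });
}
```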

The UA may also not know much about the next piece of media until it starts loading it.
 - Is the media playable at all by this UA? (codec, format, etc)
 - How much bandwidth will it require to begin buffering it? (Will the new stream starve my existing stream?)
 - If the new format is different from my current video format, will the two sets of adaptation heuristics work in conflict with one another?
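Of those three questions, only the first has even a partial answer in HTML5 today, via canPlayType() (a sketch; the helper name is ours, and the bandwidth and heuristics questions have no standard API at all):

```javascript
// canPlayType() on an HTMLMediaElement returns "", "maybe", or "probably"
// for a given MIME type. An app (or UA) could at least reject an appended
// source it knows it cannot decode before spending any bandwidth on it.
function isPlayableAnswer(answer) {
  return answer === "maybe" || answer === "probably";
}

// Usage in a page (video is an HTMLMediaElement):
//   isPlayableAnswer(video.canPlayType('video/mp4; codecs="avc1.42E01E"'))
```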
The bottom line is that I see some practical implementation issues here, and I can imagine resistance to this from the UA creators. This means we need to be absolutely crystal clear about both what this use case is and what it is not.

Jason, could you add some clarity as to the intent of this use case? It may be sufficient to state that while the intent of the use case is for "seamless" playback, a conforming implementation need not guarantee a "perfect" experience.

Thanks,

Dave


________________________________________
From: Lewis, Jason [Jason.Lewis@disney.com]
Sent: Tuesday, December 13, 2011 2:45 AM
To: 이현재; 'Clarke Stevens'; public-web-and-tv@w3.org
Subject: Re: [MEDIA_PIPELINE_TF] Adaptive Bit Rate Architecture

Hi, in general I agree with the 3 architecture models as well.
For providing content, I think models 1 & 3 are most important:

Model 1: Reporting & QoS metrics are critical. When playing HLS in an
HTML5 player, we have no clear view of which bitrates are truly optimal.
Delivering to phones, tablets, and desktops across varying quality wifi or
3G networks can be like throwing darts blindfolded :)

Model 2: Nice to have, but the heuristics of selecting a bitrate based on
bandwidth & client decoding performance are pretty well understood.
Application developers shouldn't have to deal with this in order to
provide content to a customer.

Model 3: Dynamic client-side selection & appending of video segments (or
sources) is critical to present seamless video experiences to viewers.
Application developers' capabilities in HTML5 generally lag behind in this
area.
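To make Model 1 concrete, the kind of per-session report that would take the blindfold off might look like this (every field name here is invented; no such properties exist on the video element today):

```javascript
// A hypothetical QoS snapshot a player could report back to the content
// provider, so optimal bitrates can be chosen per device and network.
// The player object and all of its properties are assumptions.
function qosSnapshot(player) {
  return {
    currentBitrate: player.currentBitrate, // bits per second now selected
    droppedFrames: player.droppedFrames,   // decode/render drops so far
    bufferSeconds: player.bufferSeconds,   // seconds of media buffered
    switches: player.switchCount           // bitrate switches this session
  };
}
```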

I've also added three new use cases related to these thoughts:
http://www.w3.org/2011/webtv/wiki/MPTF/ADR_Error_Codes#Use_Cases


Thoughts? Thanks, J
--
Jason Lewis
Disney Technology Solutions and Services



On 12/12/11 6:18 PM, "이현재" <hj08.lee@lge.com> wrote:

>Hi Clarke,
>
>I think the 3 architecture models approach on adaptive streaming you
>proposed is very good.
>To start, the full media control approach will be easier to discuss
>between companies when deciding what functionality is needed for the video
>tag.
>The ultimate goal is surely the minimal control approach, in which the
>video tag object performs the agreed necessary functions automatically,
>without application developer intervention.
>
>Let's hear content providers' and TV manufacturers' voice on what
>functionalities are necessary.
>Content providers and TV manufacturers, please speak up.
>
>Best regards,
>HJ / LG Electronics
>
>-----Original Message-----
>From:
>Sent: (none)
>To: Clarke Stevens; public-web-and-tv@w3.org
>Subject: RE: [MEDIA_PIPELINE_TF] Adaptive Bit Rate Architecture
>
>Hello,
>
>I have a couple of questions on the proposal.
>
>1. segment definition
>Can a definition of "segment" be added to the proposal? It will help in
>going over the comments below.
>
>2. maxLevel (input)
>What is the format of the input? Is it a segment identifier or a
>bandwidth? If agreeable, I would recommend adopting the concept of a
>bandwidth value that maps to the manifest bandwidth. Even though that
>bandwidth is a nominal level based on what the manifest reports rather
>than actual throughput, it is the best form in which to indicate a max
>limit. An extra argument could also be included indicating the position at
>which this max level should be applied. An issue for implementers is that
>simply indicating a max level may have different effects depending on how
>the buffer is handled. If this can be done at a future position it will
>make for a smoother transition.
>
>3. callback change in representation
>I am missing a callback reporting when the representation has changed. It
>might be what is called segment in the proposal but I am not sure. This
>callback is only reported when the "bandwidth" has changed. The "position"
>at which this change occurs should also be included, since it may occur at
>a future point.
>
>Regards,
>JanL
>
>-----Original Message-----
>From: Clarke Stevens [mailto:C.Stevens@cablelabs.com]
>Sent: den 9 december 2011 20:01
>To: public-web-and-tv@w3.org
>Subject: [MEDIA_PIPELINE_TF] Adaptive Bit Rate Architecture
>
>Please take a look at the Wiki page for Adaptive Bit Rate.
>
>http://www.w3.org/2011/webtv/wiki/MPTF/ADR_Error_Codes
>
>I'd like to have a good list of parameters, and errors (for model 1 in
>particular) that we can provide for MPTF folks to review over the next few
>days. You can make suggested edits directly, or post your ideas to the
>reflector.
>
>Also, please make sure we are in agreement on the definitions of the
>architectural models.
>
>Thanks,
>-Clarke
>
>
>
>

Received on Thursday, 15 December 2011 17:30:00 UTC