Re: Notify user agent available fragment

Hi Jeroen, all,

On Thu, May 6, 2010 at 2:00 AM, Jeroen Wijering
<jeroen@longtailvideo.com> wrote:
> Dear all,
>
>>>> Incidentally, reaching the HAVE_METADATA state will also be a
>>>> precursor to some of the cases that the MF spec identifies, such as:
>>>> http://www.w3.org/2008/WebVideo/Fragments/WD-media-fragments-spec/#processing-protocol-UA-mapped
>>>> or
>>>> http://www.w3.org/2008/WebVideo/Fragments/WD-media-fragments-spec/#processing-protocol-Server-mapped
>>>> Thus it's not unreasonable to deal with this condition for certain
>>>> use cases.
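
To make the "UA mapped" case concrete: a minimal sketch, assuming the
user agent already holds an index that maps media time to byte offsets
(the index format, helper names and URL below are all invented for
illustration), of how a fragment URI such as
http://example.com/video.ogv#t=10,20 could be resolved into a plain
HTTP Range request:

    interface IndexEntry { time: number; offset: number; }

    // Last index entry at or before the start time, first entry at or
    // after the end time; a real index would come from headers or a
    // keyframe map.
    function timeToByteRange(index: IndexEntry[], start: number,
                             end: number): { first: number; last: number } {
      const from = [...index].reverse().find(e => e.time <= start) ?? index[0];
      const to = index.find(e => e.time >= end) ?? index[index.length - 1];
      return { first: from.offset, last: to.offset - 1 };
    }

    // Fetch only the bytes that cover the requested time fragment.
    async function fetchFragment(url: string, start: number, end: number,
                                 index: IndexEntry[]): Promise<ArrayBuffer> {
      const { first, last } = timeToByteRange(index, start, end);
      const res = await fetch(url, {
        headers: { Range: `bytes=${first}-${last}` },
      });
      return res.arrayBuffer(); // 206 Partial Content with the fragment
    }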
>>>
>>> Indeed! And I think we should write this explicitly in the spec, as
>>> each time I explain the work of the Media Fragments WG in a
>>> presentation, I get this as a question. Who wants to edit the spec to
>>> mention this?
>>
>> 5.2.1 states:
>> "This is the case typically where a user agent has already downloaded
>> those parts of a media resource that allow it to do or guess the
>> mapping, e.g. headers of a resource, or an index of a resource."
>>
>> If we want to stay independent of the HTML5 specification, this is an
>> acceptable description of the condition, IMHO. If we want to use HTML5
>> as an example, we can certainly add the note on HAVE_METADATA.
>>
>> Raphael, I think no matter whether it is written in the spec or not,
>> you will always get this question, since it is a core issue to
>> understand. ;-)
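
To make the HAVE_METADATA condition itself concrete, a minimal sketch,
assuming an HTML5 <video> element (the fragment time is invented):

    // Before HAVE_METADATA, duration and seekable ranges are unknown,
    // so a fragment's start time cannot be resolved yet.
    const video = document.querySelector('video') as HTMLVideoElement;

    function seekToFragmentStart(start: number): void {
      if (video.readyState >= HTMLMediaElement.HAVE_METADATA) {
        video.currentTime = start;
      } else {
        video.addEventListener('loadedmetadata',
                               () => { video.currentTime = start; },
                               { once: true });
      }
    }

    seekToFragmentStart(10); // e.g. for a URI ending in #t=10,20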
>
> Allow me to elaborate a little more on the HAVE_METADATA state as provided
> in HTML5, since this touches on the overall use case I had in mind for my
> initial question: streaming. Today's main streaming platforms (Flash,
> Silverlight, Quicktime) converge towards the practice of requesting small
> fragments of a video over HTTP and seamlessly concatenating them in the
> player into a full video. This practice, as you know, allows for
> functionality such as live streaming and multi-bitrate delivery while
> leveraging existing HTTP infrastructure.
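
The practice you describe boils down to roughly the following sketch
(the URL scheme and segment naming are invented, and each platform
frames its fragments differently):

    // Request short fragments over plain HTTP in order; the player
    // concatenates them so they play back as one continuous video.
    async function fetchFragments(baseUrl: string,
                                  count: number): Promise<ArrayBuffer[]> {
      const fragments: ArrayBuffer[] = [];
      for (let i = 0; i < count; i++) {
        const res = await fetch(`${baseUrl}/seg${i}.ts`); // seg0.ts, seg1.ts, ...
        fragments.push(await res.arrayBuffer());
      }
      return fragments; // handed to the decoder as one continuous stream
    }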
>
> On a request level, standardization of this functionality seems perfectly
> covered by Media Fragments. On a discovery level, however, relying on the
> user-agent to retrieve the metadata part of a video causes some practical
> issues:
>
> *) The main reason for multi-bitrate delivery (and streaming in general) is
> bandwidth conservation. If a user-agent has to retrieve part of the
> media file in order to extract metadata, the opposite would actually be the
> case. Especially with long-form content, metadata headers could grow as
> large as several megabytes. On top of this, the user-agent would actually
> have to request the metadata for every single bitrate in a multi-bitrate
> scenario.
> *) In case of live streaming, technical metadata of the resource may not
> exist in the resource itself. It is typical for live HTTP streaming solutions
> to maintain metadata externally, so that the resource itself doesn't need
> updating at two points (head and tail) while the live event is in progress.
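
Your first point can be made concrete with a rough sketch (all numbers
and URLs are invented): without an external descriptor, the user agent
has to pull the metadata header of every bitrate variant before it can
map times to bytes.

    // Hypothetical renditions; assume each header occupies roughly the
    // first 2 MB of its file.
    const variants = [300, 700, 1500, 3000]; // kbps

    async function fetchAllHeaders(): Promise<number> {
      let bytes = 0;
      for (const kbps of variants) {
        const res = await fetch(`http://example.com/video_${kbps}k.mp4`, {
          headers: { Range: 'bytes=0-2097151' }, // first 2 MB only
        });
        bytes += (await res.arrayBuffer()).byteLength;
      }
      return bytes; // ~8 MB downloaded before a single frame is shown
    }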
>
> All three aforementioned platforms rely on external (XML, M3U8) descriptor
> files that inform the user-agent of the exact file and fragment
> availability, to work around both issues. It seems a standardization of such
> functionality could be a use case for the Media Multitrack API (although it
> currently focuses on accessibility tracks). I presume my best bet would be
> to raise this question with the team working on the Media Multitrack API? Or
> am I off track here and should such functionality not be considered part of
> the Video on the Web efforts? I could imagine it has previously been
> dismissed as either too specific or unwieldy...
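
For readers who haven't seen such descriptors: a simplified M3U8 along
the lines you mention might look like this (URIs invented). For a live
stream the playlist is refreshed periodically and the end marker is
omitted, which is exactly how the metadata stays external to the
resource:

    #EXTM3U
    #EXT-X-TARGETDURATION:10
    #EXT-X-MEDIA-SEQUENCE:0
    #EXTINF:10,
    http://example.com/low/seg0.ts
    #EXTINF:10,
    http://example.com/low/seg1.ts
    #EXTINF:10,
    http://example.com/low/seg2.ts
    #EXT-X-ENDLIST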

You are actually also speaking to some of the authors of the Media
Multitrack API here.

HTTP adaptive streaming is indeed one challenge that thus far lacks a
standard. There are solutions by Microsoft through Smooth Streaming, by
Apple through Live Streaming, and more recently by Adobe through
Adaptive Streaming, but none of these is an accepted standard.
In fact, there is no solution for Ogg yet, even though there are
existing experiments and current discussions - one of the people
working on this for Ogg is also on this list.

Creating a standard for HTTP adaptive streaming is not in the charter
of this Working Group, which has a clear focus on developing a
specification for media fragment URIs.

I do, however, agree that such an activity should be formed. Now, since
this is somewhat of an HTTP protocol issue, it may well be that the
better location for such a work item is the IETF. Or it might be
possible for this group to obtain a charter extension after the end of
our work on media fragment URIs, so as to work on a standard for HTTP
adaptive streaming. Or it might be necessary to form a new group. This
is something that should probably be taken to Philippe, who is the W3C
sponsor (not sure that's the right word) for the Video on the Web
activity, since it is about how to create a new work item (and so I
have added Philippe to the cc list).

I'd certainly be interested in such an activity.

Regards,
Silvia.

Received on Wednesday, 5 May 2010 23:10:00 UTC