
Re: Review of section 7

From: Silvia Pfeiffer <silviapfeiffer1@gmail.com>
Date: Wed, 15 Apr 2009 00:19:37 +1000
Message-ID: <2c0e02830904140719i66c54e8cx8e81943cb5281837@mail.gmail.com>
To: Yves Lafon <ylafon@w3.org>
Cc: Media Fragment <public-media-fragment@w3.org>
On Tue, Apr 14, 2009 at 11:49 PM, Yves Lafon <ylafon@w3.org> wrote:
> On Tue, 14 Apr 2009, Silvia Pfeiffer wrote:
>
>>>> and MPEG-1 for the temporal dimension. I don't think this is the case
>>>> for many other formats. Even for MP3 and MPEG-1, I don't think they are
>>>> capable of storing that they are extracted from a parent resource at a
>>>> certain in and out time.
>>>
>>> Ok, so you want to rule out completely extracting a fragment from a
>>> resource.
>>
>> No, not at all. I am just saying that using the 1-GET request method
>> implies that most - if not all - fragmented media resources will not
>> be cachable in Web proxies.
>
> You mean they will not be cached by most Web proxies, not "not cachable",
> no?

I meant "not cacheable" as a fragment of the full resource. Sure, you
can cache the fragments themselves. But my whole point all along has
been that this approach creates n separate copies of the same data.
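A minimal sketch of the duplication problem (the URLs and cache structure are illustrative, not from any real proxy): a cache keyed on the full request URI treats each time-range fragment of the same video as an unrelated entry, so n fragments become n independent copies even when their bytes overlap.

```python
# Hypothetical cache keyed on the complete request URI, as a generic
# proxy would do for distinct URLs.
cache = {}

def cache_fragment(uri, body):
    """Store a response body under its full request URI (illustrative)."""
    cache[uri] = body

# Two overlapping fragments of the same 3 GB resource...
cache_fragment("http://example.com/video.ogv?t=10,20", b"<bytes for 10s-20s>")
cache_fragment("http://example.com/video.ogv?t=15,25", b"<bytes for 15s-25s>")

# ...become two unrelated cache entries, duplicating the shared data.
print(len(cache))  # 2
```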


>> Why are you talking about reverse proxies? Why not just have that
>> functionality in the Web server itself? In fact, I was not referring
>> to a reverse Web proxy.
>
> Between a heavy Web server that is also doing cache, dynamic content etc...
> and a more simple server with a reverse proxy, the latter usually wins in
> term of performance.

Yes, this is all possible (though I would probably prefer two separate
servers for static and dynamic content rather than moving Web server
functionality onto a reverse proxy).

But I'm not actually talking about optimising the performance of the
Web server. I am talking about optimising the bandwidth use in the
pipes between the server and the clients.


>> I was talking about caching Web proxies. So let's say the 5h long
>> video takes up 3GB. Across a 100MBit pipe that we can flood with the
>> video, it will take 4min to get that video into the Web proxy, where
>> it then does the extraction and transfers the last 2 min. I don't
>> think that is acceptable. That is what I meant by "defeating the
>> purpose of fragmenting" because the idea of fragment delivery is to
>> avoid the long delay required by downloading the full resource (even
>> if this delay is smaller since it's not across a low-bandwidth
>> consumer link, but a higher bandwidth Internet link).
>
> Well, such cache won't do that for sure, they will probably cache the
> fragment, and if it is a general purpose cache, will flush it for a
> subsequent request. (and if it is a busy cache, it may be flushed way before
> another access to that video is made).

Caching exists for a reason. The idea is to make it possible to cache
multiple fragments without overloading the cache. That's why I am so
adamant about not making copies of the data by caching fragments of
the same resource as separate files; instead, I want caches to store
different byte ranges of one and the same resource.
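A sketch of the alternative being argued for here (the data structure is invented for illustration, not a real cache implementation): key the cache on the resource URL alone and record which byte ranges of that single resource are held, so all fragments share one logical entry.

```python
from collections import defaultdict

# url -> {(first_byte, last_byte): data}; one entry per resource,
# however many ranges of it have been fetched.
range_cache = defaultdict(dict)

def cache_range(url, first, last, data):
    """Record a byte range of a resource under that resource's URL."""
    range_cache[url][(first, last)] = data

cache_range("http://example.com/video.ogv", 0, 65_535, b"<header+index>")
cache_range("http://example.com/video.ogv", 2_900_000_000, 2_999_999_999, b"<last 2 min>")

# One logical resource, two byte ranges -- no duplicated copies.
print(len(range_cache))                                   # 1
print(len(range_cache["http://example.com/video.ogv"]))   # 2
```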


>>> So you propose that every client should do
>>> GET http://example.com/fragf2f.mp4 HTTP/1.1
>>> Accept: video/*
>>> Range: bytes=0-65535 (or anything big enough to get the data headers)
>>>
>>> then based on what is retrieved, do another partial GET based on what the
>>> client mapped between the seconds and the bytes, right ?
>>
>> It depends how far we want to take this. If we decide the best means
>> to deal with such special formats is to always do a request on the
>> beginning of the file to retrieve the index part before we do anything
>> else, that's possible. I was thinking more about the possibility for a
>> User Agent to do an internal improvement once it had received the
>> index in a previous retrieval request. Any consecutive media fragment
>> URI request can use this knowledge to avoid the need to do a seconds
>> range request, but to directly do a bytes range request.
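The optimisation described above can be sketched as follows, assuming a hypothetical index (the mapping values and function name are invented) that the User Agent extracted from the file header in an earlier retrieval. Subsequent fragment requests can then be ordinary byte-range GETs, which existing HTTP/1.1 servers and caches already understand.

```python
# Index the UA learned from a previous retrieval of the file header;
# the seconds-to-bytes values here are purely illustrative.
time_to_bytes = {0: 0, 60: 1_200_000, 120: 2_400_000}

def fragment_request(path, start_seconds):
    """Build a byte-range GET for a fragment whose start is in the index."""
    first_byte = time_to_bytes[start_seconds]
    return (f"GET {path} HTTP/1.1\r\n"
            f"Accept: video/*\r\n"
            f"Range: bytes={first_byte}-\r\n\r\n")

req = fragment_request("/fragf2f.mp4", 60)
print("Range: bytes=1200000-" in req)  # True
```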
>
> How about something along those lines.
> a GET of the media header (so a few bytes, depending on how much is enough),
> and the server answers with that + one or more Link: headers linking to
> different mappings (time to bytes is an example) or at least a resolver URI,
> linking to the sub-resource, to parent etc...

Why does the server need to link to anything else if the header
already provides the fragment-to-bytes mapping? I don't understand
what you're suggesting...


>>> It should already work now without changing anything, without using a
>>> X-Accept-Range-Redirect header.
>>
>> You're probably right. Ultimately it is an optimisation that the User
>> Agent can make and doesn't need to be included in this standard. We
>> can however point out that for some formats there is the possibility
>> to identify the byte range mapping from their header and that a User
>> Agent can optimise their approach for this.
>
> It's the same for caches, the strategy to cache or not is dictated by
> implementation choice and configuration choices. So we shouldn't say
> anything, implementers are smart enough to realize that one choice will fly
> or not :)

Maybe... a hint could help, IMHO.


Cheers,
Silvia.
Received on Tuesday, 14 April 2009 14:20:30 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Wednesday, 21 September 2011 12:13:33 GMT