
Re: Squid experts

From: Silvia Pfeiffer <silviapfeiffer1@gmail.com>
Date: Mon, 3 Nov 2008 22:04:52 +1100
Message-ID: <2c0e02830811030304n64140e8bn64079b8e4e886603@mail.gmail.com>
To: "Yves Lafon" <ylafon@w3.org>
Cc: "Media Fragment" <public-media-fragment@w3.org>

Hi Yves,

On Mon, Nov 3, 2008 at 9:31 PM, Yves Lafon <ylafon@w3.org> wrote:
> On Sat, 1 Nov 2008, Silvia Pfeiffer wrote:
>> One technical point that was made is that doing time ranges in proxies
>> may be really difficult since time is inherently continuous and so the
>> continuation from e.g. second 5 to second 6 may not easily be storable
>> in a 2-way handshake in the proxy.
> Not for a general proxy, but it makes the case for proxies with a specific
> module to handle such beasts.

The issue is that for any codec it is a problem to identify where
"second 5" ends and where "second 6" starts. Even a proxy that
understands time and codecs cannot be certain that, when it receives a
packet that runs up to second 5 and another that starts at second 6,
it can simply concatenate them into a continuous stream. Such
concatenation only works at the byte level. The proxy therefore needs
to be told not just the time range, but also the byte range that the
data maps to.

> That said, we have different axes of selection,
> and it doesn't fit well the range model.
> I was wondering if language selection could be done using Accept-Language,
> in the case you have different language tracks, but in that case you need to
> identify first class URIs for the different language-based variants.

By language selection, do you mean track selection more generally?
This could apply both to audio and to annotations. For video, we could
also have recordings from different angles, or other tracks, that
could be selected. Solving language selection only solves one part of
the track selection problem.
> We need to discuss that a bit deeper, do we really need to identify the
> video+fr soundtrack as a fragment?

I don't understand "video+fr soundtrack"... what do you mean?

>> Instead there was a suggestion to create a codec-independent media
>> resource description format that would be a companion format for media
>> resources and could be downloaded by a Web client before asking for
>> any media content. With that, the Web client would easily be able to
>> construct byte range requests from time range requests and could thus
>> fully control the download. This would also mean that Web proxies
>> would not require any changes. It's an interesting idea and I would
>> want to discuss this in particular with Davy. Can such a format
>> represent all of the following structural elements of a media
>> resource:
>> * time fragments
>> * spatial fragments
>> * tracks
>> * named fragments.
> Well, you have byte ranges, but no headers, no metadata. And carrying part
> of the missing payload in headers is a big no.

Can you explain this further? I don't quite understand what the big no
is, or which missing payload you see being put into which headers
(HTTP headers?).

Received on Monday, 3 November 2008 11:16:32 UTC
