
Re: video use-case

From: Silvia Pfeiffer <silviapfeiffer1@gmail.com>
Date: Tue, 7 Oct 2008 11:13:37 +1100
Message-ID: <2c0e02830810061713taccffeledddfc46bc38aff1@mail.gmail.com>
To: "Dave Singer" <singer@apple.com>
Cc: "Media Fragment" <public-media-fragment@w3.org>

On Tue, Oct 7, 2008 at 11:04 AM, Dave Singer <singer@apple.com> wrote:
> At 10:54  +1100 7/10/08, Silvia Pfeiffer wrote:
>> On Tue, Oct 7, 2008 at 10:26 AM, Dave Singer <singer@apple.com> wrote:
>>>  At 9:16  +1100 7/10/08, Silvia Pfeiffer wrote:
>>>>>
>>>>>  For example, in a media file that has an index and is
>>>>>  in time-order, a user-agent wanting a time-based subset may be able to
>>>>>  use
>>>>>  byte-range requests to get the index, and then the part(s) of the file
>>>>>  containing the desired media. (We do this today for MP4 and MOV
>>>>> files).
>>>>
>>>>  Yes, byte-ranges are possible. However, the Web server is the only
>>>>  component in the system that knows how to convert a time offset to a
>>>>  byte range. Therefore, you first have to communicate to the server,
>>>>  via a URI reference, which subsegment you would like to have; the
>>>>  server can then convert that to byte ranges and tell the UA which
>>>>  byte range it has to request, and then we can do a normal byte-range
>>>>  request on the full URI.
>>>>
>>>>  When you say that you do this today for MP4 and MOV files, how do you
>>>>  communicate the fragment to the Web server?
>>>
>>>  MP4 and MOV files have tables in the moov atom which give complete
>>>  time and byte-offset indexing for every video and audio frame.  Atoms
>>>  are also sized.  You can gamble, ask for 1K at the start of the file;
>>>  if it's laid out for incremental playback, the moov atom will be the
>>>  first or second atom; you've got its size now, and can download the
>>>  rest.  If it wasn't, you have the size of those atoms and can skip
>>>  past them and ask for the next.  Once you have the moov atom, you know
>>>  exactly what bytes you need to go anywhere in time (and yes, even sync
>>>  points are marked, so you know how far to back up if someone does a
>>>  random access).  If video and audio are interleaved in time order, the
>>>  data you need will be all contiguous.
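[The probing strategy described above — fetch a small range, read each top-level atom's 32-bit size and four-character code, and skip forward until the moov atom turns up — can be sketched roughly as follows. The byte layout below is a hand-built toy, not a real file, and 64-bit extended atom sizes (size == 1) are not handled in this sketch.]

```python
import struct

def find_moov(data: bytes):
    """Scan top-level atoms (32-bit big-endian size + fourcc) until
    'moov' is found.  Returns (offset, size) of the moov atom, or None."""
    pos = 0
    while pos + 8 <= len(data):
        size, fourcc = struct.unpack(">I4s", data[pos:pos + 8])
        if fourcc == b"moov":
            return pos, size
        if size < 8:  # size == 1 would mean a 64-bit extended size; skipped here
            break
        pos += size   # skip past this atom to the next one
    return None

# Toy layout: a 16-byte ftyp atom followed by a 24-byte moov atom.
ftyp = struct.pack(">I4s8s", 16, b"ftyp", b"isom\x00\x00\x00\x00")
moov = struct.pack(">I4s16s", 24, b"moov", b"\x00" * 16)
sample = ftyp + moov
print(find_moov(sample))  # → (16, 24)
```

[In practice each scan step would be an HTTP byte-range request against the full resource rather than a slice of an in-memory buffer.]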
>>
>> This still does not solve the client-server problem. Say, a UA wants
>> to play back a MOV file from sec 45-88. The UA does not know how to
>> map that to a byte offset and therefore to a byte-range request.
>
> that's what I'm telling you; it does.  It can find the index, typically
> in one request.  Given the index, it can figure it out...
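[The claim that the index alone lets the UA map a time to a byte range can be sketched with a toy index. The table below is hypothetical and loosely modelled on the per-sample duration, chunk-offset, and sync-sample tables inside a moov atom; real parsing of those tables is more involved.]

```python
# Hypothetical index: (duration_in_timescale_units, byte_offset, is_sync)
# per sample, standing in for the moov atom's sample tables.
samples = [
    (3000, 0,     True),
    (3000, 4000,  False),
    (3000, 7500,  False),
    (3000, 11000, True),
    (3000, 15200, False),
]
TIMESCALE = 30000  # timescale units per second (assumed value)

def byte_offset_for_time(seconds: float) -> int:
    """Find the sample covering `seconds`, then back up to the nearest
    preceding sync sample and return its byte offset."""
    target = seconds * TIMESCALE
    t = 0
    idx = len(samples) - 1
    for i, (dur, _, _) in enumerate(samples):
        if t + dur > target:
            idx = i
            break
        t += dur
    # back up to the preceding sync point, as Dave notes above
    while idx > 0 and not samples[idx][2]:
        idx -= 1
    return samples[idx][1]

print(byte_offset_for_time(0.35))  # → 11000 (sample 3, itself a sync point)
```

[With this offset in hand, the UA would issue an ordinary HTTP Range request for the bytes it needs — no server-side time-to-byte knowledge required.]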

OK, I may be misunderstanding something. For a UA to ask for byte
ranges when it only knows about time ranges, it would need a default
mapping from time to byte for that media type. Are you saying that,
independent of the sampling rate and frame rate, you will always have
a direct mapping from time to byte offset for any MOV file? Can you
explain?

Silvia.
Received on Tuesday, 7 October 2008 00:14:18 GMT
