
Re: Timecodes that are not zero-based

From: Silvia Pfeiffer <silviapfeiffer1@gmail.com>
Date: Sat, 28 Mar 2009 11:05:34 +1100
Message-ID: <2c0e02830903271705p32f471bag80a704ba4ce673e@mail.gmail.com>
To: Conrad Parker <conrad@metadecks.org>
Cc: Media Fragment <public-media-fragment@w3.org>
On Sat, Mar 28, 2009 at 10:55 AM, Conrad Parker <conrad@metadecks.org> wrote:
> 2009/3/28 Silvia Pfeiffer <silviapfeiffer1@gmail.com>:
>>
>> In Annodex/Ogg Skeleton we deal with this situation by including the
>> information in the header of the delivered file. The header will say
>> that the file starts at offset 12s but that the display time should be
>> 15s. This makes the video decoder skip the beginning and display 15s
>> as the first timestamp.
>>
>> Other formats probably do not support this approach yet, and then your
>> user agent will indeed need to do this manually as described.
>
> Aye, we designed ogg/skeleton specifically to support this kind of
> usage. It would be interesting to see what other formats that info
> could be applied to.
>
> Perhaps another way to retro-fit it would be to put the
> Presentation-Time in an HTTP response header. That would be less
> accurate but could be a workaround for other formats.
>
>> What we are doing in Annodex is: we convert a smpte timecode
>> specification to a npt time and then do all the offset calculations in
>> npt/seconds. You can see my time handling for CMML in
>> http://svn.annodex.net/libcmml/trunk/src/cmml_time.c .
>
> not quite :-) that's the default behaviour for the CMML encapsulation,
> but the offsets encoded in Skeleton in general can be in any rational
> base. The data start times for each track can be in the units for that
> track, eg. video frames, audio samples etc.; and the overall
> presentation time for the segment can be specified in an arbitrary
> unit.
>
> Most applications so far (like oggz-chop) just choose milliseconds for
> the presentation time, but that is up to the implementation.

Thanks for clarifying - I didn't mean to imply a "per second"
resolution - CMML basically uses a millisecond resolution, too.
Sorry for being unclear.

The main point is that there will always be a difference between the
time resolution of the access units we get from encoding (which even
differs between audio and video because of their sampling rates) and
the time resolution through which we want to address the stream (which
is in theory infinite). All we can do is best effort.

Cheers,
Silvia.
Received on Saturday, 28 March 2009 00:17:09 GMT
