
[whatwg] Timestamp from video source in order to sync (e.g. expose OGG timestamp to javascript)

From: Silvia Pfeiffer <silviapfeiffer1@gmail.com>
Date: Mon, 24 May 2010 21:29:22 +1000
Message-ID: <AANLkTinOU3Q8RrV6Q4ns5ACuko572mGoN_lP3LRh4Lhp@mail.gmail.com>
On Mon, May 24, 2010 at 4:14 PM, Robert O'Callahan <robert at ocallahan.org> wrote:
> On Mon, May 24, 2010 at 5:54 PM, Philip Jägenstedt <philipj at opera.com>
> wrote:
>> So from this I gather that either:
>> 1. initialTime is always 0
>> or
>> 2. duration is not the duration of the resource, but the time at the end.
> I wouldn't say that. If you can seek backwards to before the initial time,
> then clearly 'duration' really is still the duration, you just didn't start
> at the beginning. Same goes even if you can't seek backwards; e.g. "this
> live stream is an hour long and you have started 20 minutes into it".
>> This seems to be what is already in the spec. Instead of guessing what
>> everyone means, here's what I'd want:
>> 1. let currentTime always start at 0, regardless of what the timestamps or
>> other metadata of the media resource says.
>> 2. let currentTime always end at duration.
>> 3. expose an offset from 0 in startTime or a renamed attribute for cases
>> like live streaming so that the client can e.g. sync slides.
>> The difference from what the spec says is that the concept of "earliest
>> possible position" is dropped.
> I think the current spec allows you to seek backwards from the starting
> point. So would my proposal. Would yours? Would you allow 'seekable' to
> contain negative times? I think it's slightly simpler to allow currentTime
> to start at a non-zero position than to allow negative times and to support
> the offset in your point 3.
> I also think your point 3 would be slightly harder to spec. I'm not sure
> what you'd say.
> Rob

I am utterly confused now. I think we need a picture. So, let me give
this a shot.

This is the streaming video resource (the dots after (4) being data not
yet buffered):

  (1)--------(2)--------(3)--------(4)........(5)

(1) is when the video started getting transmitted
(2) is where the UA joined in and started playing back from
(3) is up to where the UA has played back
(4) is up to where the UA has data buffered
(5) is when the video will end (which is most probably not known)
Let's further say the video started streaming on 1st January 2010 at 10am.

The video's timeline is:
(1) => 0 sec
(2) => t1 sec with t1 >= 0
(3) => t2 sec with t2 >= t1
(4) => t3 sec with t3 >= t2
(5) => t4 sec with t4 >= t3
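
To make that concrete in script, here is a little sketch of the model (none
of these names are real API; they are made up purely to illustrate the
ordering constraints t1 >= 0, t2 >= t1, t3 >= t2, t4 >= t3):

```javascript
// Hypothetical model of the stream timeline described above.
// The property names are invented for illustration only.
function makeStreamTimeline({ joinTime, playbackPos, bufferedEnd, endTime }) {
  const points = [0, joinTime, playbackPos, bufferedEnd, endTime];
  for (let i = 1; i < points.length; i++) {
    if (points[i] < points[i - 1]) {
      throw new RangeError(`position (${i + 1}) precedes position (${i})`);
    }
  }
  return {
    transmissionStart: 0, // (1): 0 sec
    joinTime,             // (2): t1
    playbackPos,          // (3): t2
    bufferedEnd,          // (4): t3
    endTime,              // (5): t4
  };
}

// E.g. the UA joined 20 minutes (1200 s) into an hour-long stream:
const tl = makeStreamTimeline({
  joinTime: 1200, playbackPos: 1500, bufferedEnd: 1550, endTime: 3600,
});
console.log(tl.playbackPos - tl.joinTime); // seconds played since joining: 300
```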

I am assuming that what the video player displays is exactly this
video's timeline, i.e. t1 at (2), t2 at (3), and t4 at (5). Now, is
position (1) simply not visible in the video player? Or is it visible,
with playback starting at an offset in the 'controls'? In the latter
case we can jump back to the beginning through the interface; in the
former case we can't, except perhaps with media fragment URIs. But I
quite like the representation from 0 with an actual playback start at t1.

Here's how I've understood it would work with the attributes:
* currentTime is the video's timeline as described, so since we are at
offset (3), currentTime = t2.
* initialTime = t1, namely the offset at where the video playback started.
* dateTime = 2010-01-01T10:00:00.000
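
Under that reading, syncing e.g. slides to wall-clock time would just be an
addition. A sketch ("dateTime" is the name used in this thread, not
necessarily what the spec will end up calling it):

```javascript
// Sketch: wall-clock time of the current playback position, assuming
// currentTime counts seconds from (1) and dateTime is the wall-clock
// time at which the stream started transmitting.
function wallClockAt(dateTime, currentTime) {
  return new Date(dateTime.getTime() + currentTime * 1000);
}

const streamStart = new Date("2010-01-01T10:00:00.000Z");
// 90 seconds into the stream's timeline:
console.log(wallClockAt(streamStart, 90).toISOString());
// "2010-01-01T10:01:30.000Z"
```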

Incidentally, the current concept of startTime has had me utterly
confused. I wonder whether it meant that seeking to a time before t1
wasn't possible. I don't see why such a concept would be necessary,
unless a live stream isn't seekable before the current time. But maybe
that is much more easily represented by "seekable".
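
If the restriction really is just "you cannot seek before the earliest
available data", then seekable expresses it directly. A sketch (TimeRanges
with start(i)/end(i)/length is the real DOM interface; the plain object
below is a stand-in for it outside a browser):

```javascript
// Sketch: test whether a target time lies inside any seekable range.
function canSeekTo(seekable, t) {
  for (let i = 0; i < seekable.length; i++) {
    if (t >= seekable.start(i) && t <= seekable.end(i)) return true;
  }
  return false;
}

// Mock of a live stream seekable only from t1 = 1200 s up to t3 = 1550 s:
const seekable = {
  length: 1,
  start: () => 1200,
  end: () => 1550,
};
console.log(canSeekTo(seekable, 600));  // false: before the earliest seekable position
console.log(canSeekTo(seekable, 1300)); // true
```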

Received on Monday, 24 May 2010 04:29:22 UTC
