
[Bug 14104] Video: Streaming text support in <track> element

From: <bugzilla@jessica.w3.org>
Date: Sat, 01 Oct 2011 08:42:55 +0000
To: public-html-bugzilla@w3.org
Message-Id: <E1R9v9r-0004O9-2B@jessica.w3.org>
http://www.w3.org/Bugs/Public/show_bug.cgi?id=14104

--- Comment #11 from Silvia Pfeiffer <silviapfeiffer1@gmail.com> 2011-10-01 08:42:49 UTC ---
(In reply to comment #10)
> The JS API was actually designed in part for this purpose, so you could stream
> cues to add through the API.

I appreciate that. I expect, though, that there will be two ways of dealing
with "streaming text": one that is fully JS-based, and one that is file-based.
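For the fully JS-based approach, a rough sketch of what "streaming cues through the API" could look like follows. The cue-data shape and the pushLiveCue helper are illustrative assumptions, not part of any spec; in a browser the track would come from video.addTextTrack() and the cue from the VTTCue constructor.

```javascript
// Sketch: append cues to a text track as they arrive from a live source.
// pushLiveCue and the { start, end, text } shape are assumptions here.
function pushLiveCue(track, cueData) {
  // VTTCue(start, end, text) is the browser's cue constructor; fall back
  // to a plain object of the same shape so the logic is visible outside
  // a browser as well.
  const CueCtor = typeof VTTCue !== "undefined"
    ? VTTCue
    : function (start, end, text) {
        return { startTime: start, endTime: end, text: text };
      };
  const cue = new CueCtor(cueData.start, cueData.end, cueData.text);
  track.addCue(cue);
  return cue;
}
```

A page would call this from whatever transport delivers the live cues (WebSocket, polling, etc.); the track then renders them natively.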


> Note that doing it by constantly reloading the src="" wouldn't work, for the
> reasons given in comment 8 paragraph 3.

That can be overcome by always providing the cues with respect to the video's
original start time and giving the page information about how much time has
passed since that start time.
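With cue times kept on the original timeline, a reload only needs to append the cues that were not in the previous fetch. A minimal sketch of that merge step (the cue shape is an assumption):

```javascript
// Given the cues already added and a freshly refetched list (both with
// times relative to the video's original start time), return only the
// cues that have not been added yet. The { startTime, endTime, text }
// shape is an assumption for illustration.
function newCuesOnly(existing, fetched) {
  const key = (c) => `${c.startTime}|${c.endTime}|${c.text}`;
  const seen = new Set(existing.map(key));
  return fetched.filter((c) => !seen.has(key(c)));
}
```

Because the times never shift between fetches, re-adding is a pure set difference rather than a re-sync of the whole track.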


(In reply to comment #7)
> AFAICT, initialTime is for an initial offset that is actually on the timeline,
> e.g. given by a media fragment URI, not for a position that is before the
> stream began.

So initialTime and a media fragment URI's offset time are identical - I would
think that we don't need initialTime then, since we can get it out of the URI.


> There is also startOffsetTime, but for that to be usable the
> captions themselves would also need to have a start date.

Yeah, that maps the video's zero time to a date, which isn't quite what we
need.

What we need is basically a secondsMissed value: the number of seconds of the
stream that the viewer has missed by joining live. Given that the times in the
WebVTT file would be relative to the stream's original start time, you can
calculate when each cue needs to be presented.
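Concretely, the calculation is a simple shift (secondsMissed is the proposed value, not an existing attribute):

```javascript
// Map a cue time from the stream's original timeline onto the timeline
// of a viewer who joined secondsMissed seconds late. A negative result
// means that moment has already passed for this viewer.
function toViewerTime(cueTime, secondsMissed) {
  return cueTime - secondsMissed;
}

// Cues whose end time is already behind the late joiner can be dropped.
function upcomingCues(cues, secondsMissed) {
  return cues.filter((c) => toViewerTime(c.endTime, secondsMissed) > 0);
}
```

E.g. a cue authored at 600s into the stream, seen by a viewer who missed the first 500s, would be presented at 100s on their local timeline.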

> In any case, do you mean that the browser will natively sync the captions of
> live streams to make up for the timeline difference, or that scripts will be
> able to do so?

Being able to use the native display would be the goal.

For scripts to be able to do so, they need the secondsMissed information, too,
which they would have to get from a data-* attribute provided by the server.
Then scripts would be able to do a custom caption display.
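As a sketch, a server could expose the value as e.g. data-seconds-missed="500" on the video element (the attribute name is a hypothetical here), and a script would read it via the dataset API:

```javascript
// Read a hypothetical secondsMissed value exposed by the server, e.g.
// <video data-seconds-missed="500" ...>. Note that data-seconds-missed
// surfaces in the DOM as element.dataset.secondsMissed (camelCase).
function getSecondsMissed(videoEl) {
  const raw = videoEl.dataset && videoEl.dataset.secondsMissed;
  const n = parseFloat(raw);
  return Number.isFinite(n) ? n : 0; // default: viewer missed nothing
}
```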

So, I guess the changes we would need to support this use case are the
following:
* introduce a secondsMissed attribute for live streams
* introduce a reload mechanism for <track> elements
* introduce a "next" end time keyword in WebVTT
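
To illustrate the last point: a "next" end-time keyword does not exist in WebVTT; as a hypothetical syntax, it would let a live cue stay on screen until the following cue is delivered, since the author cannot know the end time in advance:

```
WEBVTT

00:10:00.000 --> next
First live caption, shown until the next cue arrives.

00:10:04.000 --> next
Second live caption, replacing the first.
```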
