Re: Inband styling (was Re: Evidence of 'Wide Review' needed for VTT)



On 23/10/2015 16:01, "singer@apple.com on behalf of David Singer"
<singer@apple.com> wrote:

>
>> On Oct 22, 2015, at 13:36 , Philip Jägenstedt <philipj@opera.com> wrote:
>> 
>> 
>> Do you have a pointer to such a never-ending WebVTT file deployed on
>> the public web? I honestly didn't think they would exist yet.
>
>Apple’s HLS supports VTT streams natively, and one can tune in at any
>point, and they can go on ‘indefinitely’.  (I think DASH people are also
>able to do the same.)
>
>> 
>> To be pedantic, the reason that never-ending WebVTT files don't work
>> in browsers isn't because of the Streams API, but because the media
>> element's readyState cannot reach HAVE_FUTURE_DATA until the text
>> tracks are ready:
>> 
>> https://html.spec.whatwg.org/multipage/embedded-content.html#the-text-tracks-are-ready
>> 
>> This is what the spec bug is about, some mechanism to unblock
>> readyState before text track parsing has finished:
>> https://www.w3.org/Bugs/Public/show_bug.cgi?id=18029
>> 
>> Anyway, letting the parser discard style blocks after any cues until
>> we've figured out the live streaming issues is OK with me. However,
>> let's spell out the implications of keeping this restriction for live
>> streams: If you don't know all of the style up front, your only
>> recourse is to add a new text track at the point where new style is
>> needed. This will involve scripts, at which point handling multiple
>> WebVTT tracks will compare unfavorably with just using a WebSocket
>> connection to deliver cues and style using a custom syntax.
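[To make the scripted recourse above concrete: a minimal sketch of swapping in a
new track whose VTT file carries the fresh STYLE block. Function and variable
names here are illustrative, not from any spec.]

```javascript
// Hypothetical sketch: when new style is needed mid-stream, attach a new
// <track> whose VTT file starts with the updated STYLE block.
function addStyledTrack(video, vttUrl, label) {
  // Disable the currently showing track so cues don't render twice.
  for (const t of video.textTracks) {
    if (t.mode === 'showing') t.mode = 'disabled';
  }
  const trackEl = document.createElement('track');
  trackEl.kind = 'subtitles';
  trackEl.label = label;
  trackEl.src = vttUrl; // new file's header carries the new STYLE block
  video.appendChild(trackEl);
  trackEl.track.mode = 'showing';
}
```

[As Philip notes, once you're doing this from script anyway, a WebSocket
delivering cues and style in a custom syntax starts to look simpler.]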
>
>I agree it’s worth thinking about a good solution.
>
>> On Oct 22, 2015, at 15:51 , Nigel Megitt <nigel.megitt@bbc.co.uk> wrote:
>> 
>> It would also be possible to take the same approach with VTT as we have
>> taken with TTML, which is that you have a sequence of independent
>> documents each of which contains the styling etc needed to display
>>itself,
>> for whatever time period applies.
>
>That gets us back to the problem I cited earlier — the size of this
>segment determines the minimum streaming segment size and hence delay;
>it’s not very flexible. Also, finding ‘clean’ boundaries can be hard
>(places to break into independent documents).
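For concreteness, each independent document in that approach might look
something like this (illustrative only - the header and STYLE block are
repeated in every segment so that each one is decodable on its own):

```
WEBVTT

STYLE
::cue {
  color: yellow;
  background: rgba(0, 0, 0, 0.8);
}

00:05:00.000 --> 00:05:04.000
Cues covering this segment's time window only.
```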

We should chat more about these constraints if we can, perhaps next week -
I'd like to understand your perspective on this better. Right now I think
they're 'nice to haves': not addressing them has no impact on the end user,
and it also keeps the interests of authors, encoders and packagers separate,
which is a good thing.

Nigel


>
>David Singer
>Manager, Software Standards, Apple Inc.
>
>

Received on Friday, 23 October 2015 15:12:36 UTC