RE: Proposal from HbbTV

Silvia,

I have one new reason why exposing DVB subtitles, teletext subtitles, etc. as TextTracks may be better than creating synthetic VideoTracks representing video with burnt-in subtitles.

Future scalability (or backwards compatibility if you prefer).

If someone had a good enough reason to define a DVBSubtitleCue class in the future, and if DVB subtitle tracks were exposed as TextTracks today, then HTML written to work with today's spec could also work with that future spec without needing to do anything special.

I'm not sure that there would be enough interest to define a DVBSubtitleCue class, but there could easily be enough interest to define a TTMLCue.
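To illustrate the forwards-compatibility point above: a page that selects subtitle tracks by iterating the generic TextTrack interface does not care what cue class the track eventually carries. The sketch below is hypothetical; DVBSubtitleCue does not exist, and the track-list shape is reduced to plain objects so the selection logic stands alone (in a browser it would be `video.textTracks`).

```javascript
// Hedged sketch: pick the first subtitle/caption TextTrack matching the
// user's preferred language. `tracks` is any array-like of objects with
// { kind, language } properties -- in a browser, video.textTracks.
// The same code keeps working whether the cues inside the chosen track
// are VTTCue, a hypothetical DVBSubtitleCue, or anything else.
function pickSubtitleTrack(tracks, preferredLanguage) {
  for (const track of tracks) {
    if ((track.kind === "subtitles" || track.kind === "captions") &&
        track.language === preferredLanguage) {
      return track;
    }
  }
  return null; // no match: fall back to no subtitles (or burnt-in video)
}
```

In a browser, the caller would then set `track.mode = "showing"` to have the UA render the selected track; none of that changes if a new cue subclass is specified later.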

Jon
________________________________________
From: Silvia Pfeiffer [silviapfeiffer1@gmail.com]
Sent: 12 October 2014 12:06
To: Alexander Adolf
Cc: Nigel Megitt; W3C Inband Tracks Reflector; Jon Piesing
Subject: Re: Proposal from HbbTV

On Sun, Oct 12, 2014 at 5:39 PM, Silvia Pfeiffer
<silviapfeiffer1@gmail.com> wrote:
> On Mon, Oct 6, 2014 at 9:22 PM, Alexander Adolf
> <alexander.adolf@condition-alpha.com> wrote:
>>
>> On 2014-09-30, at 12:48 , Silvia Pfeiffer <silviapfeiffer1@gmail.com> wrote:
>>
>>>> [...]
>>>> I mean 'in the text track cue list', if not in a different subclass of
>>>> TextTrack that offers some other data structure. I wouldn't assume that UA
>>>> rendering can only result in pixels being drawn in the video viewport: for
>>>> example there could be connections to other display or rendering devices.
>>>
>>> Since text tracks are part of the video element, text track data's
>>> rendering is restricted to the video viewport's dimensions.
>>
>> What about for instance rendering TTML - or any other text-based format - subtitles on a Braille "display"? Or by a text-to-speech system (often dubbed "screen reader")?
>
>
> These require no part of the browser window to be rendered, so they
> are essentially still within the "video viewport".
>
>
>>> If you want them rendered elsewhere, you need to extend the HTML
>>> specification for that. The only other way to do it is with JavaScript
>>> and for that you need to expose the content of the text track cues.
>>> [...]
>>
>> An accessibility-aware consumer electronics device would typically perform selection and rendering automatically based on the user's preferences. So at least for the CE use cases, the involvement of the UA and the document could be as minimal as being informed which tracks are selected and playing (or will be playing when the user hits "play").
>
>
> In this case, regarding the caption tracks as burnt-in tracks works fine.
>
>
>> Of course you are looking at it from a slightly different angle.
>
>
> The relevant angle here is the Web, since we're in the W3C. When you
> use a Web browser or a Web browser's rendering engine to render video,
> you have to ascertain that it still fits within the limitations of the
> Web platform and its technologies. That's all really.
>
>
>> Also - as you pointed out - some things might of course need to be proposed in other W3C groups to fully embrace the CE use cases.
>
> I'll start the discussion thread around the HTML specification to get
> some opinions from others.

I started the discussion here:
http://lists.w3.org/Archives/Public/public-whatwg-archive/2014Oct/0145.html

If we don't get much of a reaction there, we'll try the HTML WG (which
is a bit busy with REC right now, so I asked on the WHATWG first).

Regards,
Silvia.

Received on Sunday, 12 October 2014 10:37:32 UTC