Re: Proposal from HbbTV

On 2014-09-30, at 12:48 , Silvia Pfeiffer <silviapfeiffer1@gmail.com> wrote:

>> [...]
>> I mean 'in the text track cue list', if not in a different subclass of
>> TextTrack that offers some other data structure. I wouldn't assume that UA
>> rendering can only result in pixels being drawn in the video viewport: for
>> example there could be connections to other display or rendering devices.
> 
> Since text tracks are part of the video element, text track data's
> rendering is restricted to the video viewport's dimensions.

What about, for instance, rendering TTML subtitles (or subtitles in any other text-based format) on a Braille "display"? Or having them read aloud by a text-to-speech system (often dubbed a "screen reader")?

> If you want them rendered elsewhere, you need to extend the HTML
> specification for that. The only other way to do it is with JavaScript
> and for that you need to expose the content of the text track cues.
> [...]
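To make the JavaScript route concrete, a minimal sketch of pulling cue content out of a track for an external renderer could look like the following (assuming the standard TextTrack/VTTCue API; `sendToBrailleDisplay` is purely an illustrative placeholder, not a real function):

```javascript
// Collect the text payloads of a track's currently active cues.
// track.activeCues is an array-like TextTrackCueList; each VTTCue
// exposes its content as the .text property.
function activeCueTexts(track) {
  return Array.from(track.activeCues, cue => cue.text);
}

// In a page, this could be wired to a track's "cuechange" event, e.g.:
//   const track = video.textTracks[0];
//   track.addEventListener('cuechange', () =>
//     sendToBrailleDisplay(activeCueTexts(track).join('\n')));
```

This of course only works for formats whose cues the UA parses and exposes as text in the first place, which is exactly the exposure question under discussion.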

An accessibility-aware consumer electronics device would typically perform selection and rendering automatically based on the user's preferences. So at least for the CE use cases, the involvement of the UA and the document could be as minimal as being informed which tracks are selected and playing (or will be playing when the user hits "play").

Of course you are looking at it from a slightly different angle. Also, as you pointed out, some things might need to be proposed in other W3C groups to fully embrace the CE use cases.

Many thanks and cheers,

  --alexander

Received on Monday, 6 October 2014 10:22:50 UTC