
Re: Media--Technical Implications of Our User Requirements

From: Philip Jägenstedt <philipj@opera.com>
Date: Mon, 19 Jul 2010 16:41:44 +0200
To: public-html-a11y@w3.org
Message-ID: <op.vf3hruy7atwj1d@philip-pc>
Comments inline below, snipped the rest:

On Wed, 14 Jul 2010 05:51:55 +0200, Janina Sajka <janina@rednote.net> wrote:

>           + 2.2 Texted Audio Description
> Text content with the ability to contain semantic and style
> instructions.
> Multiple documents may be present to support texted audio description in
> various languages, e.g. EN, FR, DE, JP, etc, or to support multiple
> levels of description.

What semantics and style are required for texted audio descriptions,  
specifically? What does "levels of description" mean here?

>           + 2.5 Content Navigation by Content Structure
> A structured data file.
> NOTE: Data in this file is used to synchronize all media representations
> available for a given content publication, i.e. whatever audio, video,  
> and
> text document--default and alternative--versions may be provided.

Couldn't the structure be given as chapters of the media resource itself,  
or simply as a table of contents in the HTML markup itself, with links  
using Media Fragment URIs to link to different time offsets?
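A minimal sketch of that alternative, with a hypothetical file name and chapter times (in practice a script would intercept the clicks and seek the <video> element rather than navigate):

```html
<!-- Table of contents in ordinary markup; each link uses a Media
     Fragment URI (#t=start,end) to address a time range of the video. -->
<video src="lecture.webm" controls></video>
<ol>
  <li><a href="lecture.webm#t=0,120">Introduction</a></li>
  <li><a href="lecture.webm#t=120,300">First topic</a></li>
  <li><a href="lecture.webm#t=300">Conclusion</a></li>
</ol>
```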

>           + 2.6 Captioning
> Text content with the ability to contain hyperlinks, and semantic and  
> style
> instructions.
> QUESTION: Are subtitles separate documents? Or are they combined with  
> captions
> in a single document, in which case multiple documents may be present to
> support subtitles and captions in various languages, e.g. EN, FR, DE,  
> JP, etc.

Given that hyperlinks don't exist in any mainstream captioning software  
(that I know of), it can hardly be a requirement unless virtually all  
existing software is insufficient. Personally, I'm not thrilled by the  
potential user experience: seeing a link in the captions, moving the mouse  
towards it, only to have it disappear before clicking, possibly  
accidentally clicking a link from the following caption. I think links to  
related content would be better presented alongside the video, not as part  
of the captions.

>           + 2.8 Sign Translation
> A video "track."
> Multiple video tracks may be present to support sign translation in
> various signing languages, e.g. ASL, BSL, NZSL, etc. Note that the
> example signing languages given here are all translations of English.

Isn't it also the case that a sign translation track must be decoded and  
rendered on top of the main video track? That makes quite a big difference  
in terms of implementation.

>           + 2.9 Transcripts
> Text content with the ability to contain semantic and style
> instructions.

I.e. an HTML document? Transcripts are already possible with today's technology.
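For instance, a transcript can simply be markup next to the video (file name and dialogue here are invented for illustration):

```html
<video src="interview.webm" controls></video>
<section class="transcript">
  <h2>Transcript</h2>
  <p><b>Interviewer:</b> Welcome to the show.</p>
  <p><b>Guest:</b> Thanks for having me.</p>
</section>
```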

>          + 3.1 Access to interactive controls / menus
> An API providing access to:
> Stop/Start
> Pause

We have play() and pause(), but no stop() because it's almost the same  
thing as pause().
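To make that concrete, "stop" can be expressed in terms of the existing API in two lines; the `fakeVideo` object below is just a stand-in so the sketch runs outside a browser:

```javascript
// "Stop" in terms of the existing HTMLMediaElement API: pause() halts
// playback, and resetting currentTime returns to the start, which is
// all that "stop" adds over "pause".
function stopMedia(media) {
  media.pause();
  media.currentTime = 0;
}

// Minimal stand-in for an HTMLMediaElement, for demonstration only.
const fakeVideo = {
  paused: false,
  currentTime: 42.5,
  pause() { this.paused = true; },
};
stopMedia(fakeVideo);
```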

> Fast Forward and Rewind (time based)
> time-scale modification control

Seeking via currentTime and adjusting playbackRate should cover these.

> volume (for each available audio track)
> pan location (for each available audio track)
> pitch-shift control

There's no API for any of these yet.
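Purely hypothetical sketch, just to concretize what is being requested — none of these properties exist in any current API:

```javascript
// Hypothetical only: a per-track interface might expose the requested
// controls as simple numeric properties, e.g.:
const track = {
  volume: 1.0,    // 0.0 .. 1.0, per track rather than per element
  pan: 0.0,       // -1.0 (full left) .. 1.0 (full right)
  pitchShift: 0,  // in semitones
};

// A user agent or script could then adjust one track independently:
track.volume = 0.5;
track.pan = -0.25;
```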

> audio filters

Like Eric, I'm a bit skeptical of this. Why do we need it?

> Next and Previous (structural navigation)
> Granularity Adjustment Control (Structural Navigation)

I don't really understand what this is. What would an API for adjusting  
navigation granularity even look like?
> Viewport content selection, on screen location and sizing control

Layout is controlled by CSS; apart from fullscreen mode, we can't have the  
user changing that.

> Font selection, foreground/background color, bold, etc

Agreed, but as part of User CSS (no UI).
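Something along these lines — the `.caption` selector is hypothetical and assumes captions are rendered as ordinary styleable elements:

```css
/* User style sheet sketch for caption presentation. */
.caption {
  font-family: Verdana, sans-serif;
  font-weight: bold;
  color: yellow;
  background-color: rgba(0, 0, 0, 0.8);
}
```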

> configuration/selection
> Extended descriptions and extended captions configuration/control
> Ancillary content configuration/control

I don't know what these last 3 really mean in practice.

I don't think we should document requirements that are already fulfilled  
(those at the top).

>           + 3.5 Discovery and activation/deactivation of available  
> alternative content by the user
> A discovery mechanism and presentation of available media options for  
> user selection.

Such as a context menu for selecting the audio track? Stating this in less  
cryptic terms would help :)

>           + 3.8 Requirements on the parallel use of alternate content on  
> potentially multiple devices in parallel
> A discovery mechanism of available OS provided output device options for  
> user selection.

I'm not sure what this is about.

Philip Jägenstedt
Core Developer
Opera Software
Received on Monday, 19 July 2010 14:42:20 UTC
