W3C home > Mailing lists > Public > whatwg@whatwg.org > October 2007

[whatwg] several messages

From: Ian Hickson <ian@hixie.ch>
Date: Fri, 19 Oct 2007 01:09:05 +0000 (UTC)
Message-ID: <Pine.LNX.4.62.0710190049560.13219@hixie.dreamhostps.com>
On Fri, 8 Jun 2007, Dave Singer wrote:
> 
> Proposal: user settings that correspond to accessibility needs. For 
> each need, the user can choose among the following three dispositions:
> 
>   * favor (want): I prefer media that is adapted for this kind of
> accessibility.
>   * disfavor (don't want): I prefer media that is not adapted for this kind of
> accessibility.
>   * disinterest (don't care): I have no preference regarding this kind of
> accessibility.
> 
> The initial set of user preferences for consideration in the selection of
> alternative media resources corresponds to the following accessibility options:
> 
>   captions (corresponds to SMIL systemCaptions)
>   descriptive audio (corresponds to SMIL systemAudioDesc)
>   high contrast video
>   high contrast audio (audio with minimal background noise, music etc., so
>   speech is maximally intelligible)
> 
> This list is not intended to be exhaustive; additional accessibility options
> and corresponding preferences may be considered for inclusion in the future.
> 
> Herein we describe only those user preferences that are useful in the 
> process of evaluating multiple alternative media resources for 
> suitability. Note that these proposed preferences are not intended to 
> exclude or supplant user preferences that may be offered by the UA to 
> provide accessibility options according to the W3C accessibility 
> guidelines, such as a global volume control 
> <http://www.w3.org/TR/WAI-USERAGENT/uaag10-chktable.html>.

This all seems reasonable, but does the spec need to mention any of this? 
It already allows the user agent to pick alternative media resources based 
on the whims of the user or the user agent.


> 2) Allow the UA to evaluate the suitability of content for specific 
> accessibility needs via CSS media queries
> 
> Note that the current specification of <video> and <audio> includes a 
> mechanism for selection among multiple alternate resources 
> <http://www.whatwg.org/specs/web-apps/current-work/#location>. The scope 
> of our proposal here is to extend that mechanism to cover accessibility 
> options.
> 
> Proposal: the media attribute of the <source> element as described in 
> the current working draft of Web Applications 1.0 takes a CSS media 
> query as its value <http://www.w3.org/TR/css3-mediaqueries/>, which the 
> UA will evaluate in the process of selecting an appropriate media 
> resource for presentation. To extend the set of media features that can 
> be queried to include accessibility preferences, we define a new media 
> feature for each supported accessibility preference:
> 
>   captions
>   descriptive-audio
>   high-contrast-video
>   high-contrast-audio
> 
> For each of these media features the following values are defined:
> 
>   * The user prefers media adapted for this kind of accessibility (": want").
>   * The user prefers media that is not adapted for this kind of accessibility
>     (": dont-want").
>   * The user has expressed no preference regarding this kind of accessibility
>     (": either").
> 
> For each media feature that corresponds to accessibility preferences, an 
> expression evaluates to FALSE if and only if the user has an explicit 
> preference (want or don't want), and the media feature has a value of 
> want or dont-want that doesn't correspond.  For all other combinations 
> (user disinterest or a value of "either"), the expression evaluates 
> to TRUE.
> 
> Example. If the user has asked for
>   captions:  want
>   high contrast video:  don't want
> 
> and the video element has
> <video ... >
>   <source media="all and (captions: dont-want)" ... />
>   <source media="all and (captions: either)" ... />
> </video>
> 
> The second source will be selected for presentation; it would also be 
> selected if its media attribute were completely omitted.
> 
> Once a candidate source has been selected, the UA must attempt to apply 
> the user's accessibility preferences to its presentation, so that 
> adaptable content is presented appropriately.

This seems reasonable. I recommend implementing these with vendor-specific 
prefixes, and proposing them to the CSS working group.
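The quoted matching rule can be sketched in plain JavaScript. This is only a model of the proposal, not any shipped API: the feature names, the want/dont-want/either keywords, and the "treat an unset user preference as disinterest" default are all taken or assumed from the text above.

```javascript
// Sketch of the proposed rule: an expression is FALSE only when the user
// has an explicit preference and the query asks for the opposite explicit
// value; "either" on either side always matches.
function matchesFeature(userPref, queryValue) {
  if (queryValue === "either" || userPref === "either") return true;
  return userPref === queryValue;
}

// A query object such as { captions: "dont-want" } matches only if every
// listed feature matches. Unset user preferences are treated as
// disinterest ("either") -- an assumption, since the proposal is silent.
function matchesQuery(userPrefs, query) {
  return Object.entries(query).every(([feature, value]) =>
    matchesFeature(userPrefs[feature] ?? "either", value));
}

// The example from the proposal: user wants captions, so the first
// <source> (captions: dont-want) fails and the second (captions: either)
// is selected.
const userPrefs = { captions: "want", "high-contrast-video": "dont-want" };
const sources = [
  { src: "plain.mp4",     query: { captions: "dont-want" } },
  { src: "captioned.mp4", query: { captions: "either" } },
];
const chosen = sources.find(s => matchesQuery(userPrefs, s.query));
console.log(chosen.src); // → "captioned.mp4"
```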


On Sat, 9 Jun 2007, Benjamin Hawkes-Lewis wrote:
> 
> Three cheers for Apple for trying to tackle some of the accessibility 
> issues around video content! :) Without trying to assess whether CSS 
> media queries are the best approach generally, here's three particular 
> issues I wanted to raise: [...]

I agree with these comments and would recommend they be taken into account 
in the design of the media query features.


> 2. Conflict resolution
> 
> The proposal does not describe how conflicts such as the following would 
> be resolved:
> 
> User specifies:
> 
> captions: want
> high-contrast-video: want
> 
> Author codes:
> 
> <video ... >
>   <source media="all and (captions: want) and (high-contrast-video: dont-want)" ... />
>   <source media="all and (captions: dont-want) and (high-contrast-video: want)" ... />
> </video>
> 
> Because style rules cascade, this sort of conflict doesn't matter when 
> media queries are applied to styles. But you can only view one video 
> source.

Here, neither would be picked, since neither matches.

It's like having:

   @media all and (min-width: 100px) and (max-height: 100px) { }
   @media all and (max-width: 100px) and (min-height: 100px) { }

...when you have a viewport that's 50x50 or 500x500.
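A self-contained sketch of the "neither matches" outcome, using the proposed (unimplemented) want/dont-want/either keywords: with a user who wants both captions and high-contrast video, both of the authored queries above fail, so no source is selected.

```javascript
// "either" on either side matches; otherwise the explicit values must agree.
function matches(userPrefs, query) {
  return Object.entries(query).every(([feature, value]) =>
    value === "either" || userPrefs[feature] === "either" ||
    userPrefs[feature] === value);
}

const userPrefs = { captions: "want", "high-contrast-video": "want" };
const queries = [
  { captions: "want",      "high-contrast-video": "dont-want" },
  { captions: "dont-want", "high-contrast-video": "want" },
];
// Each query pins one feature to the opposite of the user's preference,
// so neither matches -- the conflict described above.
const anyMatch = queries.some(q => matches(userPrefs, q));
console.log(anyMatch); // → false
```

A defensive author would add a final catch-all `<source>` with no media attribute, which always matches.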


> 3. (Even more) special requirements
> 
> The suggested list of media features is (self-confessedly) not exhaustive.
> Here are some things that seem to be missing:
> 
> a) I should think sign-language interpretation needs to be in there.
> 
> sign-interpretation: want | dont-want | either (default: want)
>
> b) Would full descriptive transcriptions (e.g. for the deafblind) fit 
> into this media feature-based scheme or not?
> 
> transcription: want | dont-want | either (default: either)
> 
> c) How about screening out visual content dangerous to those with
> photosensitive epilepsy?
> 
> max-flashes-per-second: <integer> | any (default: 3)
>
> d) Facilitating people with cognitive disabilities within a media query 
> framework is trickier. Some might prefer content which has been stripped 
> down to simple essentials. Some might prefer content which has extra 
> explanations. Some might benefit from a media query based on reading 
> level.
> 
> reading-level: <integer> | basic | average | complex | any (default: any)

These seem like useful things to investigate, indeed.
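For the integer-valued suggestion (c), one plausible evaluation rule would be a simple threshold comparison. The semantics here are purely my assumption, since nothing in the suggestion is specified: content declaring its flash rate matches only when that rate is at or below the user's configured limit, or when the limit is "any".

```javascript
// Hypothetical evaluation of a max-flashes-per-second preference:
// userLimit is an integer (e.g. the suggested default of 3) or "any";
// contentFlashes is the flash rate the content declares about itself.
function flashesAcceptable(userLimit, contentFlashes) {
  if (userLimit === "any") return true;
  return contentFlashes <= userLimit;
}

console.log(flashesAcceptable(3, 2));    // → true  (within the limit)
console.log(flashesAcceptable(3, 10));   // → false (would be screened out)
console.log(flashesAcceptable("any", 10)); // → true
```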


On Sun, 10 Jun 2007, Benjamin Hawkes-Lewis wrote:
> 
> But if UAs can apply accessibility preferences to a catch-all <source> 
> listed last, then what's the advantage of creating multiple <source> 
> elements in the first place?

A video designed for someone with a low cognitive ability needing video 
with an embedded sign language interpreter and no epilepsy-triggering 
flashes (poor guy) isn't really something you can derive from a video 
stream primarily intended for high-functioning users with no disabilities. 
You can add subtitles, you can switch audio streams, but there comes a 
point where you just need a new video.


> Current container formats can include captions and audio descriptions. 
> So is the problem we're trying to solve that container formats don't 
> contain provision for alternate visual versions (high contrast and not 
> high contrast)? Or are we trying to cut down on bandwidth wastage by 
> providing videos containing only the information the end-user wants?

Both.


> The reason I was thinking of using a CSS property was that signed 
> interpretation is not the same as signing featured in the original 
> video. But it's true that information about what sign languages are 
> available is important, so a CSS property alone wouldn't solve the 
> problem. Maybe we need new attributes to crack this nut:
> 
> <source contentlangs="en,sgn-en" captionlangs="sgn-en-sgnw,fr,de,it,sgn" 
> dubbinglangs="fr" subtitlelangs="de,it" 
> signedinterpretationlangs="sgn-en,sgn-fr,sgn-de,sgn-it" ...>

Good lord. :-)


> Granted it's a sledgehammer, but it does provide the fine-grained 
> linguistic information we need. It would also seemingly remove the need 
> for putting a caption media query on <source>.

I'd be interested in hearing of any implementation experience for this 
kind of thing. What do actual videos with subtitles and embedded sign 
language video need, in practice?

-- 
Ian Hickson               U+1047E                )\._.,--....,'``.    fL
http://ln.hixie.ch/       U+263A                /,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'
Received on Thursday, 18 October 2007 18:09:05 UTC
