RE: Issues from Microsoft ISSUE #2 Testability of Audio Descriptions

Hi Cynthia,


Interesting thought.


I think the 1.x series is quite different from what you are proposing for 1.2.x.


We try to say "what" should be done, but not how.  

In the 1.x series we broke it up because different 'whats' had to be done
with the different types of content (e.g., one had to be a text alternative
while another needed only a label).

What you propose is the same what (audio description) and you are just
describing all of the pieces of the same item (a movie) that would need the
audio description.  It is not clear that you have them all. It is also not
clear that all of those need description.  For example, if someone comes
into the room and talks, and it is clear from their voice who they are,
then no audio description is needed.   


RE Testability, please note that we do not require 'good' audio
descriptions.  That would make it untestable (reliably).  We only require
the presence of audio descriptions - and then point to resources that people
can use to see how to do it right.  Joe's materials are already included in
the resources for doing this.
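For concreteness, here is a minimal SMIL 2.0 sketch (the file names are
placeholders, and this is only one of several ways to author it) of how a
pre-recorded audio description track can be associated with a video, so that
its presence is checkable:

```xml
<!-- Hypothetical sketch; "movie.mpg" and "movie-description.mp3"
     are placeholder file names. -->
<smil xmlns="http://www.w3.org/2001/SMIL20/Language">
  <body>
    <par>
      <video src="movie.mpg"/>
      <!-- pre-recorded audio description track, played in parallel
           with the programme audio -->
      <audio src="movie-description.mp3"/>
    </par>
  </body>
</smil>
```

A reviewer can verify that the description track is present; whether it is a
'good' description remains a human judgment.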


So I don't think we will want to break 1.2 up into 4 separate (and new)
level 1 success criteria.  


But if you still think this should be considered, let me know right away
(just drop me an email) and I'll see to it that it gets on the next survey
due Monday night, so we can get results in time to have a chance of building
the necessary "How to Meet" sections of Understanding WCAG 2.0 and the
technique docs for them. 



 -- ------------------------------ 
Gregg C Vanderheiden Ph.D. 
Professor - Ind. Engr. & BioMed Engr.
Director - Trace R & D Center 
University of Wisconsin-Madison 
The Player for my DSS sound file is at




From: [] On Behalf
Of Cynthia Shelly
Sent: Friday, March 10, 2006 6:07 PM
To: Gregg Vanderheiden;
Subject: RE: Issues from Microsoft ISSUE #2 Testability of Audio

I think that we should have guidance on what would be considered an
equivalent audio description in various circumstances, as we do with
alternative text in success criteria 1.1.1 - 1.1.5.   This would go a long
way towards helping people understand what is adequate audio description,
and to test whether it has been provided.  At the very least, we should describe
an equivalent audio description for the visual items listed in the
definition of audio description.  I don't know if that is sufficient, but
it's a start.  I've drafted something along these lines. 


BIG CAVEAT:  I am not, by any stretch of the imagination, an expert on audio
description.  These are probably wrong, and there are probably other
situations that can be described in objectively testable terms.


Joe Clark, I could really use your help with this.  


Here they are:

1.2.2.a For actions that aren't auditorily apparent, the action is
described, including subject, verb, and object.

1.2.2.b For characters, the arrival of a character on screen is described,
as is the first time a character speaks.  The visual appearance of the
character is described the first time the character appears, and each time
the appearance changes.

1.2.2.c For scene changes, the new scene is described.

1.2.2.d For on-screen text, such as titles, credits, and subtitles in a
foreign-language production, the text is read out loud.





From: Gregg Vanderheiden [] 
Sent: Thursday, March 09, 2006 9:32 PM
To: Cynthia Shelly;
Subject: RE: Issues from Microsoft

Hi Cynthia


Good comments.  Thanks.


Couple of notes / questions.


RE Issue #1 - Question to everyone - can you post any information you have
on tools that address Cynthia's question/comment - so that we can be sure to
log them and use them per comment below. 


RE Issue #2 - Cynthia, can you say specifically what information is more
specific and objective?  It would be great to capture that.  Better yet, can
you provide a specific suggested edit for including it in the definition
of audio description?  That would greatly increase our ability to consider
it for inclusion.


RE Issue #3

- 4.1.5 is not really about ensuring they stay exposed.  It is about making
sure that AT knows when some of the thousands of elements it can 'see' have
changed or been deleted, without having to keep checking each of them every
time.
-   4.1.1 is a completely different topic from the rest of the 4.1.x items.
So much so that we considered for a while separating it.  

- I just noticed that the wordings in your post below are not the currently
proposed wordings (see the previous posting and the survey).  The new
proposed wordings are pasted below for convenience.   


Do these work better for you?  







{NOTE: I put "Web Units" in where they would now go, replacing the old term}

4.1.1 Web Units can be parsed unambiguously and the relationships in the
resulting data structure are also unambiguous. 

4.1.2 For each user interface component in the content, the name, role,
and all perceivable properties can be programmatically determined. 
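As an illustration of what 4.1.2 asks for, a native control in XHTML exposes
its name, role, and state without extra authoring work (a hypothetical
fragment; the mapping to an accessibility API is done by the user agent):

```xml
<!-- Hypothetical XHTML fragment. The user agent can expose:
     name  = "Subscribe to newsletter" (from the associated label),
     role  = checkbox,
     state = checked. -->
<p>
  <input type="checkbox" id="news" name="news" checked="checked" />
  <label for="news">Subscribe to newsletter</label>
</p>
<!-- By contrast, a custom control built from styled div elements
     and scripts exposes none of these properties programmatically. -->
```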

4.1.4 Content and properties of user interface components can be
programmatically set directly to any values to which they can be set
through the user interface. 

Note: Some examples of standardized properties that typically can be changed
by the user interface include its value, whether it is currently selected,
and whether it currently has the focus. 

4.1.5 Any changes to user interface components in the content can be
programmatically determined without having to compare current and past
values to detect changes. 


Becky also suggested 

4.1.4 Values for content and attributes of user interface components which
can be set through the user interface can be set programmatically.





 -- ------------------------------ 
Gregg C Vanderheiden Ph.D. 
Professor - Ind. Engr. & BioMed Engr.
Director - Trace R & D Center 
University of Wisconsin-Madison 
The Player for my DSS sound file is at




From: [] On Behalf
Of Cynthia Shelly
Sent: Thursday, March 09, 2006 6:56 PM
Subject: Issues from Microsoft

I did an internal review of the most recent WCAG 2.0 draft with several
people from around Microsoft.  Here is the list of issues we're concerned
about.  These are roughly in priority order.  

MS Issue #1 

Tools for real-time captioning of streaming audio and video.  The techniques
for xxx show markup in SMIL, which doesn't seem like it could be done in
real-time on live broadcasts.  Are there tools for captioning streaming
media in real-time?  Are those tools inexpensive and simple enough that
small shops could use them to caption live media?  If not, we don't think
this can be a requirement at level 2, perhaps not at all.  If there are such
tools, we need techniques about them. 
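For comparison, the pre-authored (non-real-time) case looks roughly like
this in SMIL (file names and the caption-text format are placeholders);
real-time captioning of a live broadcast would need a tool that generates
and injects such a text track as the broadcast happens:

```xml
<!-- Hypothetical sketch; "broadcast.mpg" and "captions.rt" are
     placeholder file names. -->
<smil xmlns="http://www.w3.org/2001/SMIL20/Language">
  <body>
    <par>
      <video src="broadcast.mpg"/>
      <!-- timed caption text displayed in parallel with the video -->
      <textstream src="captions.rt"/>
    </par>
  </body>
</smil>
```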

MS Issue #2 

We're concerned that the audio description requirement is not very testable.
The definition of audio description includes the word "important", which is
pretty subjective.  There are links in the understanding document to more
detailed and objective standards of what needs to be described.   We think
this type of information needs to be in the normative document. 

MS Issue #3 

Inconsistent language between 4.1.1, 4.1.2 and 4.1.5.  

4.1.1 Delivery units can be parsed unambiguously and the relationships in
the resulting data structure are also unambiguous. 

4.1.2 The role, state, and value can be programmatically determined for
every user interface component in the Web content that accepts input from
the user or changes dynamically in response to user input or external events

4.1.5 Changes to content, structure, selection, focus, attributes, values,
state, and relationships of the user interface elements in the Web content
can be programmatically determined. 

4.1.1 and 4.1.2 are about exposing the properties initially, and 4.1.5 is
about ensuring that they stay exposed (and accurate) if they change.  Can we
make the language in these three SCs more consistent, so it's easier to
understand the relationship between them?  The language in 4.1.5 seems to be
the most complete, so I'd vote for making the other two more like it. 

MS Issue # 4 

The success criteria in 4.1 don't seem to be about future technologies.
They're about ensuring that the user interface is operable through assistive
technology.  Perhaps they should be under Principle 2?  If they don't fit
under any of the existing guidelines there, maybe we need a guideline there
about properties of UI elements being available to AT. 


Received on Saturday, 11 March 2006 15:59:55 UTC