RE: Captions for audio clips

SMIL doesn't provide for timing inside media objects - you can achieve that
by breaking the media up into pieces and using explicit timing (which is what
I would suggest doing). The W3C validator should now validate SMIL documents
(well, XML in general, in theory, and SMIL is XML).
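
For illustration, here is a minimal SMIL 1.0 sketch of that approach; the
file names (speech.rm, caption1.txt, caption2.txt), the region size, and the
ten-second timings are made up, and the caption sources would be whatever
text format your player handles:

<smil>
  <head>
    <layout>
      <root-layout width="400" height="60"/>
      <region id="caps" width="400" height="60"/>
    </layout>
  </head>
  <body>
    <seq>
      <!-- piece 1: the first ten seconds of the audio with its caption -->
      <par>
        <audio src="speech.rm" clip-begin="npt=0s" clip-end="npt=10s"/>
        <text src="caption1.txt" region="caps" dur="10s"/>
      </par>
      <!-- piece 2: the next ten seconds with the next caption -->
      <par>
        <audio src="speech.rm" clip-begin="npt=10s" clip-end="npt=20s"/>
        <text src="caption2.txt" region="caps" dur="10s"/>
      </par>
    </seq>
  </body>
</smil>

Each <par> plays one piece of the audio in parallel with its caption, and the
outer <seq> strings the pieces together, so the captions change in step with
the audio without any timing markup inside the media object itself.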

I agree that it is at least P2, although I am not sure that it isn't in fact
P1.

Charles McCN

On Wed, 15 Dec 1999, Bruce Bailey wrote:

  Dear Phill (et al.)
  
  IMHO, it is a clear case of P2!
  
  Populations affected:  Persons for whom English is a second language.
  Persons who are not deaf but have impaired hearing.  Persons with learning
  disabilities for whom processing auditory information is difficult (but not
  impossible).
  
  The assumption is that ALL of the above persons might very well PREFER an 
  audio stream for the SAME REASONS everyone else prefers audio over a text 
  transcript.  Is it a useful exercise for us to delineate why an aural 
  presentation is better (in some cases) than a textual one?
  
  From this perspective, the situation is very analogous to persons with VERY 
  poor vision who STILL PREFER a GUI browser!  We are empathetic / 
  sympathetic to this orientation.  Just as we accommodate the partially 
  sighted, so should we adjust for the hard of hearing.
  
  For the above populations, "unimedia audio" represents a significant 
  barrier to their access to content (we are using RealAudio radio broadcasts 
  as an example).
  
  For the above populations, a separate transcript has so little value as to
  be virtually useless -- just as access to Lynx is not well regarded as a
  viable option for web surfing by most persons with vision impairments (nor
  by most average people, for that matter).
  
  It is, of course, important to have techniques on hand, but that should not 
  influence the assignment of Priorities.
  
  Does anyone have an example of captioned audio?
  
  I experimented with some SMIL files on my local hard drive.  I could get
  RealAudio (actually a .rm RealMedia file) to play ONLY the sound (with
  synchronized captions), but I could NOT get rid of the blank video window.
  Probably I am just doing something wrong, but I did look at the W3C SMIL
  specifications.  Does the W3C offer a SMIL validation service?
  
  Bruce Bailey
  
  
  On Sunday, December 12, 1999 11:52 PM, Charles McCathieNevile 
  [SMTP:charles@w3.org] wrote:
  > Phill, if you are just reading it then that is the case. However for people
  > who have marginal hearing, having the sound and the captions/score available
  > and synchronized is more valuable than one or the other (similarly for people
  > who can hear, but have difficulty reading). One of the challenges we face is
  > that there are people who are looking for multi-modal support - there are
  > more people with poor hearing than there are with no hearing (and similarly
  > for other disabilities).
  
  On Wednesday, December 15, 1999 11:45 AM, pjenkins@us.ibm.com 
  [SMTP:pjenkins@us.ibm.com] wrote:
  > JW:
  >> It appears to be broadly agreed within the group that a requirement to
  >> synchronize text transcripts with audio presentations should be
  >> established, at least at a priority 2 level.
  >
  > PJ:
  > Where is the broad agreement?  Bruce, Jason, and Charles seem to agree with
  > P2.  I'm arguing for P3, and Robert and Eric seem OK with either P2 or P3,
  > and I haven't heard from others.  I do agree that there seems to be agreement
  > that we need to make the distinction between multimedia videos and unimedia
  > sound files in the errata so that WCAG 1.4 doesn't apply to the unimedia
  > sound-only files.
  [snip]
  > PJ:
  > but I've heard no supporting rationale or any convincing evidence that
  > suggests that the "value" is more than useful and improves accessibility
  > [P3].
  >
  > That the deaf, [learning disabled, or those learning a foreign
  > language] are so comfortable now with synchronized television (and movie)
  > captioning does not support the argument that they will be comfortable or
  > have significant barriers removed with synchronized captioned audio-only
  > files.  Can anyone even show me a sample example, or better yet, a real
  > example on the Web or anywhere?  If we don't add a supporting technique, a
  > checkpoint requiring [even at P3] synchronized captions for audio-only
  > files shouldn't even be added to the guidelines.  I've seen natural
  > language courses use techniques of synchronization to TEACH the language,
  > but we're talking about guideline 1 - equivalent alternative information -
  > not "teaching natural languages" or "teaching singing".  We have been
  > talking about ideas and theories; how can we suppose that it fits the
  > definition of "significant barriers"?  P3 is still "valuable" and "useful"
  > and "improves accessibility".
  

--
Charles McCathieNevile    mailto:charles@w3.org    phone: +61 409 134 136
W3C Web Accessibility Initiative                    http://www.w3.org/WAI
21 Mitchell Street, Footscray, VIC 3011,  Australia (I've moved!)

Received on Monday, 20 December 1999 02:41:10 UTC