RE: Captions for audio clips

I respectfully disagree.  Reading synchronized captions is MUCH easier and 
MUCH more functional than referring to a separate transcript.  So much so 
that P2 might even be too LOW.  Think of this from the perspective of a 
person who is deaf and who is quite comfortable with (closed) captions on 
television.

I also have to say again that if (1) one's video is already in "RealPlayer 
G2" format (very popular), and (2) you already have a text transcript, then 
(3) creating synchronized captions (via SMIL) is VERY easy.
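
For anyone who has not seen it done, the whole job is one small SMIL file 
that plays the video and a RealText (.rt) caption stream side by side.  The 
file names and caption text below are made up for illustration, but the 
markup is the SMIL 1.0 / RealText syntax that RealPlayer G2 understands:

```xml
<smil>
  <head>
    <layout>
      <!-- 320x240 video with a 50-pixel caption strip underneath -->
      <root-layout width="320" height="290"/>
      <region id="video_region" width="320" height="240"/>
      <region id="caption_region" top="240" width="320" height="50"/>
    </layout>
  </head>
  <body>
    <par>
      <!-- play the movie and the caption stream in parallel -->
      <video src="lecture.rm" region="video_region"/>
      <textstream src="lecture.rt" region="caption_region"/>
    </par>
  </body>
</smil>
```

The .rt file is just the transcript with time stamps dropped in:

```xml
<window type="teleprompter" duration="1:00">
<time begin="0:02"/>Welcome, everyone.
<time begin="0:07"/>Today I want to talk about checkpoint priorities.
</window>
```

That really is all there is to it -- hence my offer below.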

If anyone cares to provide an example (1) and a corresponding (2), I would 
be pleased to volunteer creating (3).

-- Bruce Bailey


On Friday, December 10, 1999 4:05 PM, pjenkins@us.ibm.com 
[SMTP:pjenkins@us.ibm.com] wrote:
> I would argue that even priority 2 is too high.  If the listener has some
> residual hearing, then the visual synchronized captions are only aiding or
> making it easier to get the information.  The definition of Priority 3 is:
> "A Web content developer may address this checkpoint. Otherwise, one or
> more groups will find it somewhat difficult to access information in the
> document. Satisfying this checkpoint will improve access to Web documents."
> I do not feel that adding visual captions to audio clips is removing
> "significant barriers" [see P2 definition].  I am also assuming that volume
> control and playback controls on the user agent will provide the access to
> the audio information that the user with residual hearing may need.
> Remember, as the residual hearing approaches zero, the benefit of visual
> synchronized captions approaches zero, but it never gets there, because
> the synchronized, timed presentation of the text captions gives an
> indication of the rhythm or timing of the text, which is something that
> can be approximated with good punctuation, hence requiring only a P3.
>
> Regards,
> Phill Jenkins

Received on Friday, 10 December 1999 18:16:37 UTC