- From: Marja-Riitta Koivunen <marja@w3.org>
- Date: Tue, 21 Dec 1999 03:24:27 -0500
- To: Charles McCathieNevile <charles@w3.org>, "webmaster@dors.sailorsite.net" <webmaster@dors.sailorsite.net>
- Cc: "w3c-wai-gl@w3.org" <w3c-wai-gl@w3.org>, "'pjenkins@us.ibm.com'" <pjenkins@us.ibm.com>
At 02:41 AM 12/20/99 -0500, Charles McCathieNevile wrote:

>SMIL doesn't provide for timing inside media objects - you can do that by
>breaking them into pieces and using explicit timing (that's what would be
>good to do).

If synchronization becomes so important, should we try to talk to the
SMIL WG about making it possible for one SMIL file to contain both the
text and the timecodes? Or maybe it should be a separate standard? Having
one file for each text line that needs to be synchronized is really
cumbersome. And if the synchronization changes, the files need to change
as well. A good tool may help to manage this, but I would still prefer
the timecodes with the text. The sketch below shows what I mean.
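Here is roughly what the break-into-pieces approach looks like in SMIL
1.0 (a sketch only, not tested; the file names, region size, and
durations are hypothetical - the point is that every caption line ends
up as a separate file):

  <smil>
    <head>
      <layout>
        <root-layout width="320" height="60"/>
        <region id="captions" top="0" left="0" width="320" height="60"/>
      </layout>
    </head>
    <body>
      <par>
        <!-- hypothetical audio clip -->
        <audio src="speech.rm"/>
        <!-- one text file per caption line; the timing lives here,
             separated from the text itself -->
        <seq>
          <text src="line1.txt" region="captions" dur="5s"/>
          <text src="line2.txt" region="captions" dur="7s"/>
          <text src="line3.txt" region="captions" dur="4s"/>
        </seq>
      </par>
    </body>
  </smil>

Re-timing a line means editing this file, and splitting or merging
sentences means creating and deleting lineN.txt files as well.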
Another thing is that the user should be able to control what media is
shown. The author's design can be a default, but if the author has not
thought of every necessary combination, the user should still be able to
change it.

One question: if we say synchronization is necessary, then how much
synchronization is needed? Every sentence? Every paragraph?

I think synchronization is often at least P3 (often probably more), but
in some cases I would also like to be able to just read the text and not
care about the synchronization at all. Is there a way to have both? A
SMIL synchronization file that refers to different text portions in a
text transcript file (without forcing it to be divided into several
files)? Or just being able to ignore the time codes when reading
synchronized text?
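For comparison, RealText already keeps the whole transcript and its
timecodes together in a single file - something like this, if I remember
the syntax correctly (a rough sketch; file names and times are made up):

  <!-- captions.rt: all the caption lines and timecodes in ONE file -->
  <window type="generic" duration="0:30" width="320" height="60">
  <time begin="0:02"/>This is the first caption line.
  <time begin="0:08"/><clear/>And this is the second caption line.
  </window>

and the SMIL file then just points at it:

  <smil>
    <body>
      <par>
        <audio src="speech.rm"/>
        <textstream src="captions.rt"/>
      </par>
    </body>
  </smil>

A player could then also let the reader go through captions.rt straight,
ignoring the time tags, which would give both behaviours from one source.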
Marja

>The W3C validator should now validate SMIL documents (well, XML in
>general in theory, and SMIL is XML).
>
>I agree that it is at least P2, although I am not sure if it isn't in
>fact P1.
>
>Charles McCN
>
>On Wed, 15 Dec 1999, Bruce Bailey wrote:
>
> Dear Phill (et al.)
>
> IMHO, it is a clear case of P2!
>
> Populations affected: Persons for whom English is a second language.
> Persons who are not deaf but have impaired hearing. Persons with
> learning disabilities for whom processing auditory information is
> difficult (but not impossible).
>
> The assumption is that ALL of the above persons might very well PREFER
> an audio stream for the SAME REASONS everyone else prefers audio over
> a text transcript. Is it a useful exercise for us to delineate why an
> aural presentation is better (in some cases) than a textual one?
>
> From this perspective, the situation is very analogous to persons with
> VERY poor vision who STILL PREFER a GUI browser! We are empathetic /
> sympathetic to this orientation. Just as we accommodate the partially
> sighted, so should we adjust for the hard of hearing.
>
> For the above populations, "unimedia audio" represents a significant
> barrier to their access of content (we are using RealAudio radio
> broadcasts as an example).
>
> For the above populations, a separate transcript has so little value
> as to be virtually useless -- just as access to Lynx is not well
> regarded as a viable option for web surfing by most persons with
> vision impairments (nor most average people for that matter).
>
> It is, of course, important to have techniques on hand, but that
> should not influence the assignment of Priorities.
>
> Does anyone have an example of captioned audio?
>
> I experimented with some SMIL files on my local hard drive. I could
> get RealAudio (actually a .rm RealMedia file) to play ONLY the sound
> (with synchronized captions), but I could NOT get rid of the blank
> video window. Probably I am just doing something wrong, but I did look
> at the W3C SMIL specifications. Does the W3C offer a SMIL validation
> service?
>
> Bruce Bailey
>
> On Sunday, December 12, 1999 11:52 PM, Charles McCathieNevile
> [SMTP:charles@w3.org] wrote:
> > Phill, if you are just reading it then that is the case. However for
> > people who have marginal hearing, having the sound and the
> > captions/score available and synchronized is more valuable than one
> > or the other (similarly for people who can hear, but have difficulty
> > reading). One of the challenges we face is that there are people who
> > are looking for multi-modal support - there are more people with
> > poor hearing than there are with no hearing (and similarly for other
> > disabilities).
>
> On Wednesday, December 15, 1999 11:45 AM, pjenkins@us.ibm.com
> [SMTP:pjenkins@us.ibm.com] wrote:
> > JW:
> > > It appears to be broadly agreed within the group that a
> > > requirement to synchronize text transcripts with audio
> > > presentations should be established, at least at a priority 2
> > > level.
> >
> > PJ:
> > Where is the broad agreement? Bruce, Jason, and Charles seem to
> > agree with P2. I'm arguing for P3, and Robert and Eric seem OK with
> > either P2 or P3, and I haven't heard from others. I do agree that
> > there seems to be agreement that we need to make the distinction
> > between multimedia videos and unimedia sound files in the errata so
> > that WCAG 1.4 doesn't apply to the unimedia sound-only files.
> [snip]
> > PJ:
> > ...but I've heard no supporting rationale or any convincing evidence
> > that suggests that the "value" is more than useful and improves
> > accessibility [P3].
> >
> > The fact that the deaf [learning disabled, or those learning a
> > foreign language] are so comfortable now with synchronized
> > television (and movie) captioning does not support the argument that
> > they will be comfortable, or have significant barriers removed, with
> > synchronized captioned audio-only files. Can anyone even show me a
> > sample example, or better yet, a real example on the Web or
> > anywhere? If we don't add a supporting technique, a checkpoint
> > requiring [even at P3] synchronized captions for audio-only files
> > shouldn't even be added to the guidelines. I've seen natural
> > language courses use techniques of synchronization to TEACH the
> > language, but we're talking about guideline 1 - equivalent
> > alternative information - not "teaching natural languages" or
> > "teaching singing". We have been talking about ideas and theories;
> > how can we suppose that it fits the definition of "significant
> > barriers"? P3 is still "valuable" and "useful" and "improves
> > accessibility".
>
>--
>Charles McCathieNevile  mailto:charles@w3.org  phone: +61 409 134 136
>W3C Web Accessibility Initiative  http://www.w3.org/WAI
>21 Mitchell Street, Footscray, VIC 3011, Australia (I've moved!)

Received on Tuesday, 21 December 1999 03:26:54 UTC