
Re: Request for review of accessibility note

From: Philipp Hoschka <ph@w3.org>
Date: Mon, 09 Aug 1999 20:42:59 +0200
Message-ID: <37AF2133.FB4C3FF9@w3.org>
To: Lloyd Rutledge <Lloyd.Rutledge@cwi.nl>
CC: symm@w3.org, Lynda.Hardman@cwi.nl, ij@w3.org, dd@w3.org, marja@w3.org, wai-liaison@w3.org, w3c-wai-eo@w3.org

Thanks for this review! I have forwarded it to the people on the cc:
list of the review request.

Lloyd Rutledge wrote:

> -----------
> The SMIL Boston draft [2] is now out, and it has many new features
> that apply to accessibility and would be worth discussing in this
> document.  Geoff Freed has posted to www-smil some comments regarding
> how SMIL Boston to address accessibility [3].

This won't work.
This document is explicitly about the accessibility features in SMIL
1.0, not about which features should be added, or how, in the next
version of SMIL. It is part of a series of documents describing
accessibility features in W3C Recommendations - there is already a
similar document about CSS2 with the same scope.

> -------------------
> Can WAI offer any suggestions for player implementors on how to best
> handle opening new windows for users that don't favor them?

Good question. I think the idea is that you should issue a warning,
such as a beep, before opening a new window, so that the user knows
what has happened.
> ------------------------------
> The navigation bar suggested in 4.3 can be more easily implemented in
> SMIL Boston with the excl construct.  It could also address the issues
> in 4.1 and 4.2.  In all cases, users who want to see all of the links
> in the document in one, easy to perceive, all at once format, can have
> a "link bar" providing access to all link endpoints.  This could
> be turned on and off as a bar during the whole presentation.

probably true, but this document is about SMIL 1.0 only

> ----------
> [Section 3.1, para 1, 2nd to last sentence]
> > Style sheets may not be supported by all SMIL browsers, however.
> The SMIL accessibility document discusses the use of CSS in several
> places.  The use of CSS with SMIL is indeed important for SMIL's
> accessibility.  However, it should be noted that currently there is no
> SMIL browser that supports any use of CSS with SMIL, and this support
> has not been announced as forthcoming for any browser.

good point
> --------------
> The formatting of text is important for the use of closed captions and
> subtitles.  SMIL 1.0 may not provide all that is needed for integrating
> formatted text into SMIL presentations.  

And why should it? Formatting of the text depends on the media
object handler, not on SMIL 1.0.

>One problem is that different
> browsers do not display formatted or unformatted text consistently.

This is a problem with the text renderers used in current browsers,
but not with SMIL 1.0.

> Another is that closed captions and subtitles frequently rely on
> transparency and ghosting.  This function is not currently implemented
> on SMIL browsers.  It is also unclear whether or not this function is
> encodable with formatted or unformatted text integrated into SMIL.
> It might be worthwhile to remark on the use of CSS to all formatted
> HTML (and XML?) text in a SMIL presentation on SMIL browsers.

this is a good idea

> ---------------------
> > 2.2.2 Auditory Descriptions
> >
> > Note. In CSS, authors may turn off auditory descriptions
> > using 'display : none', but it is not clear what value of 'display'
> > would turn them back on.
> It is also unclear how CSS used in SMIL could turn on and off auditory
> descriptions that are integrated in SMIL code.  

The CSS property "display" controls whether an element is displayed
or not. The element in question here is a SMIL media object containing
audio. If that is not clear, check the definition of the "display"
property in CSS2.
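
For instance, assuming a player that applied CSS2 to SMIL (as noted
elsewhere in this thread, no current player does), an auditory
description could be switched off with an id selector; the id
"audiodesc" here is purely illustrative:

  <audio id="audiodesc" src="description.wav"/>

  /* in an associated style sheet */
  #audiodesc { display: none }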

>The integration of the
> audio would have to happen in the CSS code itself.

that is not true

> > In SMIL 1.0, auditory descriptions may be included in a presentation
> > with the audio element. However, SMIL 1.0 does not provide a mechanism
> > that allows users to turn on or off player support for auditory
> > descriptions.
> I've posted a suggestion, originally made by Geoff Freed, about the
> addition of a "system-audio-desc" test attribute [6].  Geoff's recent
> www-smil post applies here as well [3].  Perhaps we could include in
> the draft a discussion of this potential SMIL extension.  

no, that is not the goal of this note

>Perhaps we
> could go further by suggesting the addition of a WAI namespace
> extension for the attribute (assuming people are willing to implement
> it).  There are people who want to start working with this attribute
> right away.

good idea, but that should happen in a separate document
> --------------------------------
> A problem with many of the examples is that timing information for the
> presentation has been placed in RealNetworks formats instead of in the
> SMIL code.  

to be fair: these examples are very similar to the examples in the
SMIL 1.0 specification

> One way to limit the number of external text files needed and still
> have the timing information in the SMIL code is to use data URL scheme
> for the text [4,5].  This way, all of the text used is in the SMIL
> file itself, and no external text files at all are needed.  For
> example, a piece of text that was originally included as
>   <text src="HelloJoe.txt">
> could instead be included through URI data schemes with
>   <text src="data:,Hello, Joe.">

This has three disadvantages:
1) The size of the SMIL file will increase, as it will also include all
caption text. This will lead to delays, since the SMIL file is
usually not streamed, whereas keeping captions in a separate media
object allows simple streaming.
2) You cannot control the font in which the text will be rendered
3) Not all SMIL players implement the data URL scheme

> {@@@ Can anyone comment on the use of "|" instead of "#" to include
> different portions of the same, single external text file?}

You seem to imply that "|" means you extract parts of the source
file. That is not the case. "|" implies that the fragment-id can
be sent to the server in a network protocol, whereas "#" implies
that the fragment-id is not sent to the server, but handled locally
in the client (see the XPointer spec).
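
To illustrate (the fragment name "caption1" is made up):

  <text src="captions.rtx#caption1"/>  <!-- fragment-id handled in the client -->
  <text src="captions.rtx|caption1"/>  <!-- fragment-id may go to the server -->

In neither case does the connector by itself mean "extract this part
of the file"; that depends on what the client or the server does with
the fragment-id.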
> Given this, below is a recoding of the final example in section 2.
> The first 3 seconds of the video have one caption.  The second 3
> seconds have another caption.  The text is available in English and in
> Dutch.
> <seq>
>   <par>
>     <video src="video.mpg" clip-end="npt=3s"/>
>     <switch> <!-- captions or subtitles in Dutch -->
>       <text src="data:,Hallo, Joe."
>        system-captions="on"
>        system-language="nl"/>
>       <text src="data:,Hallo, Joe."
>        system-overdub-or-caption="caption"
>        system-language="nl"/>
>       <text src="data:,Hello, Joe."
>        system-captions="on"/>
>     </switch>
>   </par>
>   <par>
>     <video src="video.mpg" clip-begin="npt=3s" clip-end="npt=6s"/>
>     <switch> <!-- captions or subtitles in Dutch -->
>       <text src="data:,Hoe gaat het?"
>        system-captions="on"
>        system-language="nl"/>
>       <text src="data:,Hoe gaat het?"
>        system-overdub-or-caption="caption"
>        system-language="nl"/>
>       <text src="data:,How are you?"
>        system-captions="on"/>
>     </switch>
>   </par>
> </seq>
> This is, obviously, more SMIL code than in the original example.
> However, the total amount of code used here is smaller.  The original
> example also uses two external .rtx streams, which contain time code,
> most of which is redundant between the multiple text files.  The code
> here is smaller than the original SMIL code and the .rtx code
> combined.

the total size of text does not matter; what matters is how it
gets transported over the net - streamed, or via a single download.
In addition, in the example above, you have to download the captions
for all languages, whereas when you use separate caption files,
only the captions for the language that the user selected need
to be transmitted.
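
As a sketch, with one streamed caption file per language, the switch
from the example could instead look like this (the .rtx file names
are made up):

  <par>
    <video src="video.mpg"/>
    <switch>
      <textstream src="captions-nl.rtx"
       system-captions="on"
       system-language="nl"/>
      <textstream src="captions-en.rtx"
       system-captions="on"/>
    </switch>
  </par>

Only the caption file matching the user's settings is fetched, and it
can be streamed alongside the video.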

> Another advantage is that fewer files are used.  Originally, there
> was a .rtx stream for each language.  Here, no external files are used
> for the text.
> A third advantage is that here the timing code controls both the video
> and the text together, whereas in the original example the timing code
> embedded in the text stream controlled only the text.  With the SMIL
> encoding, if the video is slowed down, the timing of the text display
> slows with the video.
> This playing well would rely on browsers being able to seamlessly join
> the sequential video clips to play like one video.

I am aware of at least one browser that does *not* join abutting
video clips - does GRiNS support this?

I think this is an interesting proposal, but not very practical, i.e.
we should not lead authors interested in making their SMIL applications
accessible to believe that this would actually work (similar to not
leading them to believe that using CSS would actually work).

I think the focus should be on assuming that the captioning file is
a media object - I think that is how most captioning formats work,
btw, not only the one by Real.
Received on Monday, 9 August 1999 14:44:15 UTC
