
Re: conflation of issues or convergence of interests?

From: Sander Tekelenburg <st@isoc.nl>
Date: Mon, 30 Jul 2007 02:32:49 +0200
Message-Id: <p06240612c2d2c12fe362@[192.168.0.101]>
To: public-html@w3.org

At 12:10 +1000 UTC, on 2007-07-29, Lachlan Hunt wrote:

> Sander Tekelenburg wrote:
>> At 12:53 +1000 UTC, on 2007-07-28, Lachlan Hunt wrote:

[... different meanings of "accessibility"]

> <http://accessites.org/site/2006/10/the-great-accessibility-camp-out/>
>
> I fit into Camp 2, as described in that article.  The term web
> accessibility refers to issues relating to disabilities.  That doesn't
> mean I don't care about other types of issues, just that I don't group
> them under that term.

OK, so to you "accessibility" means making content accessible to specific
groups with specific physical/mental disabilities, and "universality" means
making content accessible to any user.

Is that correct?

If so, then for the sake of argument I'll go along, for the moment, with how
you use those terms. But that only settles our understanding of what we
mean. It doesn't change the fact that, IMO, it makes no sense to spec
accessibility features without first ensuring universality.

We can allow for universality by designing HTML such that [1] all non-text
content can be authored with a textual equivalent, [2] that textual
equivalent can contain markup, and [3] the techniques for providing that
marked-up textual equivalent, and its relation to other alternates, are as
unified as possible (because that's easiest for authors and thus makes it
likelier that they will bother).
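As a rough sketch of [1] and [2] using today's <object> element (the file
name and the chart's numbers are made-up examples, not from any real page):

```html
<!-- Non-text content whose textual equivalent is nested inside it.
     The equivalent can itself carry markup: emphasis, a link, etc. -->
<object data="chart.png" type="image/png">
  <p>Sales rose from <em>120</em> units in 2005 to <em>340</em> units
     in 2006; see the <a href="figures.html">full figures</a>.</p>
</object>
```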

Such universality would provide at least basic accessibility. The next step
would be to develop ways for authors to provide better accessibility by
targeting specific groups: your example of captioned text, but also things
like @scope or @headers.
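For reference, @scope and @headers are HTML 4's table attributes for tying
data cells to their header cells, so that non-visual UAs can announce the
right header for each cell. A minimal, made-up example:

```html
<table>
  <tr>
    <th id="file" scope="col">File</th>
    <th id="size" scope="col">Size</th>
  </tr>
  <tr>
    <!-- @headers points at the id(s) of the relevant header cell(s);
         @scope on the <th> above declares the same relation from the
         other direction. In practice an author would use one or the
         other; both are shown here only for illustration. -->
    <td headers="file">transcript.html</td>
    <td headers="size">12 KB</td>
  </tr>
</table>
```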

My point is that going at it the other way around makes no sense. If we
concentrate on accessibility features that let authors cater to specific
groups, we automatically ignore other groups. There is no way to achieve
universality that way. (Providing a captioned video doesn't help users who
for whatever reason cannot consume video at all; it only helps those who can
consume the video but not its audio.)

[...]

>> this would be similar to having @alt and an image's caption being the
>> same, as we discussed earlier. Alternatives are to be chosen from, not
>> to be presented as if they are complementary. You don't want your alt
>> text rendered next to the image. You want to provide/consume *either*
>> one (at a time).
>
> I disagree with that comparison.  In some cases, it is appropriate to
> show both as complementary.

When they are equivalents, they are equivalents. There will always be room
for disagreement in specific cases ("is x round or square?"), but once it is
defined as round, it cannot also be square.

I completely agree though that both should be available to users. In fact,
that's what I am arguing for when I say that UAs must make equivalents
discoverable and accessible to users. Seems to me we agree. However:

> Here's an example of a presentation I
> published earlier this year.
>
> http://lachy.id.au/dev/presentation/future-of-html/
>
> Both the audio and the transcript are available to everyone, and are
> complementary for several reasons.

(I haven't looked at the PowerPoint file.) The audio file is obviously an
equivalent of the text. The only reason to mark them up as if they were
instead complementary is that currently neither HTML nor UAs let you mark
them up for what they are, equivalents, and let users easily discover and
access both.

Those limitations result in a situation where the only way to determine that
the audio and text are equivalents is for a *human* to examine both. For me,
who can both see and hear pretty well, that means listening to the entire
audio file while reading along with the text. That's quite labour-intensive
just to find out whether I would miss any content by consuming only the text
or only the audio.

And that's still a luxury, because a user (human or otherwise) who for
whatever reason cannot consume the audio file has absolutely no way of
finding out that it is an equivalent of the text.

So your example just confirms the need for what I'm arguing for: that authors
need a (simple) mechanism to mark up equivalents as being equivalents, and
that UAs need to ensure that users can easily access equivalents.

(Btw, Anne, this is a perfect example of what I meant in
<http://www.w3.org/mid/p0624060ac2d15b770c35@%5B192.168.0.101%5D>. Looking at
authoring practices while ignoring the rationales behind them will generate a
flawed picture.)

> * A user may want to listen to the audio and follow along in the transcript.
> * A user may want to quote a portion of it and the transcript allows
> them to easily copy and paste.
> * A user may want to quickly reference some section of the speech, which
> is easier to do in the transcript than it is to seek to the right place
> in the audio.

Exactly!

> But the transcript is also an alternative to the audio for users who
> can't hear it.

Exactly! ;) But how is a user supposed to know that, though?

[...]

> I said *readily available*.  Perhaps I should have said easily instead.
>   Selecting properties from the context menu and then reading the alt
> text in the dialog box is not particularly easy or discoverable for many
> users.

Agreed. It seems likely that better UIs are possible.

> However, unlike video or audio, it's rare that users who can see
> the image would want to read the alt text.

Perhaps, maybe, I don't know. It doesn't seem rare enough to ignore that
need, though. The user might simply not understand what you mean to convey
with the image, or might rely on heavy screen magnification, seeing only a
few pixels at a time, or might be an author wanting to test whether his alt
text is appropriate.

> But with multimedia, there
> are a variety of reasons why a user would choose the alternative format
> even if they could access the media.

Well, even assuming that with multimedia that need is indeed more
widespread, we seem to agree that there is a need to let users discover and
access equivalents easily. And I suspect we also agree that ideally the way
to author that would be as unified as possible across elements.

> Consider how difficult it is for a user to access the alternative
> content nested within <object>.  AFAIK, the only way to do so in most
> graphical browsers is to view the source.

I think for <object> the main difficulty is that it can contain so many
different types. If an object contains an image, the user can reload the
page with auto-image loading switched off. If it contains Flash, he can
disable Flash to get the fallback. Etc. A better UA would perhaps allow the
user to disable all non-text with one click.
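To sketch that fallback cascade (the file names are hypothetical): HTML 4's
<object> already lets an author nest alternatives from richest to plainest,
with the innermost text as the universal floor:

```html
<!-- A UA renders the outermost resource it can handle; everything
     nested inside is the fallback for the level above it. -->
<object data="demo.swf" type="application/x-shockwave-flash">
  <object data="demo.png" type="image/png">
    <p>A screenshot of the demo, showing the three-step sign-up
       form described above.</p>
  </object>
</object>
```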

But yes, not all UAs make this easy, and even then it isn't comfortable. So
again, it seems to me that we agree that UAs need to make it possible for
users to access equivalents easily.

> So, if an audio file was
> embedded using <object> (or <audio>) and the transcript was nested
> within, that would make it difficult for users without assistive
> technology to access it.

What makes you think that that problem is caused by the markup structure? It
looks more like a UA functionality issue to me. By embedding alternates, all
the author does is [1] provide equivalents and [2] define the order of the
default fallback cascade. Nothing in that prohibits UAs from giving users
access to any of the equivalents.
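For illustration, the markup in question would look something like this (a
sketch using HTML 4's <object>; the file name and transcript text are made
up):

```html
<!-- The transcript is the nested equivalent a UA should offer
     whenever it cannot, or is told not to, render the audio. -->
<object data="speech.mp3" type="audio/mpeg">
  <p>Transcript: Good afternoon. Today I want to talk about the
     future of HTML ...</p>
</object>
```

Whether a graphical UA actually lets the user reach that transcript without
viewing source is, again, a question of UA behaviour, not of the markup.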


-- 
Sander Tekelenburg
The Web Repair Initiative: <http://webrepair.org/>
Received on Monday, 30 July 2007 00:40:12 UTC
