Re: longdesc spec text

On Mon, May 2, 2011 at 12:41 PM, Laura Carlson
<laura.lee.carlson@gmail.com> wrote:
>>> I'm also wondering if we should add the conformance checkers and
>>> authoring tool paragraph from my first attempt at writing the spec
>>> text [3]. It is:
>>>
>>> "Conformance checkers and authoring tools should inspect the URL and
>>> issue a warning if they suspect that the description resource is
>>> unlikely to contain a description of the image (i.e., if the URL is an
>>> empty string, or if it points to the same URL as the src attribute
>>> unless the document contains an id that matches a longdesc#anchor, or
>>> if it is indicative of something other than a URL.)"
>>
>> I do think it would help to provide guidance to conformance checkers,
>> but I'm not sure this is the right guidance. In particular, this
>> guidance implies that (1) they know the URL of the document being checked,
>> and (2) that they can request further URLs in order to check them for IDs.
>>
>> We know that HTML validators often have to deal with string input rather
>> than a URL, and we know from Henri that at least one implementor is
>> reluctant to download external resources.
>
> Did you get that impression from Henri? He said that it would be
> trivial to add some checks.

But he also said:

> I think making machine-checkable conformance a property of the HTML file
> (and the protocol headers it was supplied with) makes the concept more
> tractable than making machine-checkable conformance depend on the
> external resources the HTML file refers to. That's why if longdesc were
> reinstated, I wouldn't want to make its machine-checkable conformance
> depend on external resources. However, if we find that other features
> have extremely compelling reasons to have their machine-checkable
> conformance depend on external resources, then we might as well make the
> machine-checkable conformance of longdesc depend on external resources,
> too.

http://lists.w3.org/Archives/Public/public-html/2011Mar/0723.html

Laura continued:
> Someone proposed this text to me:
> "Conformance checkers and authoring tools should inspect the
> description resource URI and issue a warning if the URI cannot
> reference a text description of the image (i.e., if the URI is empty
> or otherwise invalid, or if the URI reference has a MIME type other
> than text/*)"

All these checks involve downloading external resources, which is what
Henri is trying to avoid above.
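
To make that concrete, here is roughly what such a check would involve
(a Python sketch of my own, with a hypothetical function name, and
urlparse as a crude stand-in for real URI validation; note the HEAD
request, which is exactly the kind of external fetch Henri wants to
avoid):

    import urllib.parse
    import urllib.request

    def check_longdesc_uri(uri):
        """Return warnings for a longdesc URI per the proposed text."""
        warnings = []
        if not uri or uri.isspace():
            warnings.append("longdesc URI is empty")
            return warnings
        parsed = urllib.parse.urlparse(uri)
        if not any([parsed.scheme, parsed.path, parsed.query,
                    parsed.fragment]):
            warnings.append("longdesc is not a valid URI reference")
            return warnings
        if parsed.scheme in ("http", "https"):
            # The text/* check cannot be done from the string alone;
            # it needs a network round trip to the external resource.
            request = urllib.request.Request(uri, method="HEAD")
            with urllib.request.urlopen(request) as response:
                content_type = response.headers.get("Content-Type", "")
                if not content_type.startswith("text/"):
                    warnings.append("longdesc resource is not text/*")
        return warnings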

> Leif has proposed requiring longdesc URLs to have a #fragment so that
> they could be more machine checkable. It would make it more complex
> for humans but tools could catch more errors. What do you think?

I don't think making things more complex for authors is a good idea.

> A conformance class for HTML5 link checkers is a possibility. What
> "HTML5 link checkers" currently exist?

"HTML5 link checkers" as opposed to link checkers? None. I suggest
the phrase purely as a parallel to the "HTML5 validator" phrase
mentioned in the spec.

> Do authors use link checkers more than they validate?

I doubt it.

> Maybe we could have some type of longdesc rules for both classes of
> tools so more authors get longdesc right?

That seems reasonable.

When a document does not have a <base> element, a conformance checker
could verify that fragment-only *relative* @longdesc URLs reference a
fragment of the document that exists but does not contain the image
element itself.

This is implicit in the text ("The link must point to either a
different document from the image or a fragment of the same document
that does not contain the image"). I don't have a objection to
making it explicit if we think that would be useful.
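
To illustrate, a very rough sketch of that check (Python with lxml
purely for illustration; the function name and approach are mine, not
anything in the spec, and a real checker would work off its own tree):

    from lxml import html  # assumed; any DOM-style parser would do

    def check_fragment_longdescs(document_text):
        """Check same-document @longdesc fragments: the target must
        exist and must not contain the image element itself."""
        doc = html.fromstring(document_text)
        if doc.find('.//base') is not None:
            return []  # can't resolve relative URLs without the base
        errors = []
        for img in doc.iter('img'):
            longdesc = img.get('longdesc')
            if not longdesc or not longdesc.startswith('#'):
                continue
            target = doc.get_element_by_id(longdesc[1:], None)
            if target is None:
                errors.append("fragment %r not found" % longdesc)
            elif img in target.iter():
                errors.append("fragment %r contains the image"
                              % longdesc)
        return errors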

I do worry a bit about the potential complexities here. There are a lot
of subtle things that could go wrong with a long description reference.
Long descriptions could fail to render because of @hidden, <noscript> in
a script-executing UA, JS modifications, or CSS skinning. A long
description could also contain no text at all.

If a long description fragment is subject to @hidden or <noscript> but
the image is not, or if the fragment is empty or whitespace-only, I
suppose a conformance checker could issue an error.
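
Sketching that too, with the same caveats as above (the hidden() walk
is only a crude static approximation of "subject to @hidden or
<noscript>"):

    def fragment_render_problems(img, target):
        """Given an image and its resolved description fragment,
        flag cases where the description plausibly won't render."""

        def hidden(el):
            # @hidden on the element or an ancestor, or any enclosing
            # <noscript>, assuming a script-executing UA.
            ancestry = [el] + list(el.iterancestors())
            return any(a.get('hidden') is not None or
                       a.tag == 'noscript' for a in ancestry)

        problems = []
        if hidden(target) and not hidden(img):
            problems.append("description is hidden but the image isn't")
        if not target.text_content().strip():
            problems.append("description contains no text")
        return problems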

Something network-aware like a link checker seems a better place to
start making requirements that depend on CSS or JS being applied, if
indeed
that's reasonable at all. As soon as you start requesting resources
over the network, things get a lot more complicated because of things
like cookies and authentication and so on.

I think Richard suggested we should try to prohibit long descriptions
referencing documents in a different format. I'm not convinced that's
required for accessibility, so I'm inclined not to do that.

--
Benjamin Hawkes-Lewis

Received on Monday, 2 May 2011 14:09:47 UTC