
RE: Editorial Survey #1 is up

From: Bailey Bruce <Bailey@Access-Board.gov>
Date: Tue, 15 May 2007 13:19:25 -0400
Message-ID: <23EB0B5A59FF804E9A219B2C4EF3AE3DA4883D@Access-Exch.Access-Board.gov>
To: <w3c-wai-gl@w3.org>

Maybe we should split the draft search technique from the draft URI-hacking
technique, so that we can discuss them separately?

> One problem is that the user may not be able to tell what site an
> "opaque" URI is part of, hence how to find the site-specific search

Anyone have examples of this?

The only examples I can recall were hosted at locations with numerical
(IP address) URIs, and those documents did not have alternative versions.

But okay, so maybe that is a conformance failure?  Or at least they
could not claim a success based on URI hacking.  Or we could build in a
requirement for transparent access to the site search feature.
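For concreteness, here is a rough sketch of the kind of guessing I mean by
"URI hacking" (the substitution patterns below are made-up illustrations,
not part of any draft technique):

```python
# Hypothetical illustration of the "URI hacking" idea: guessing the URL of
# an accessible alternative from the URL of an inaccessible document.
# The transformation patterns here are invented examples only.

def guess_accessible_uris(uri):
    """Return candidate URIs for an accessible alternative version."""
    candidates = []
    if uri.endswith(".pdf"):
        # Some sites publish an HTML rendition alongside a PDF.
        candidates.append(uri[:-len(".pdf")] + ".html")
        candidates.append(uri[:-len(".pdf")] + ".htm")
    if "/docs/" in uri:
        # Some sites use a "text" or "accessible" path segment.
        candidates.append(uri.replace("/docs/", "/docs/text/"))
    return candidates

print(guess_accessible_uris("http://example.org/docs/report.pdf"))
```

Of course, with a fully opaque URI (say, a bare IP address and a numeric
query string), there is nothing for such guessing to latch onto, which is
exactly the problem case raised above.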

> it may be necessary to fall back on a generic search engine. 

Is anyone supporting reliance on generic search engine behavior as a way
to pass WCAG 2.0 conformance claim checkpoint #4?  I certainly am not!

I have only been using a generic search engine to demonstrate how
accessible versions (or index pages) *can* be turned up from the file
name or document title.
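To illustrate what I mean, here is a sketch of the kind of site-restricted
query I have been demonstrating with (the search-engine URL is a placeholder,
and the `site:` / quoting syntax just follows common search-engine
conventions):

```python
from urllib.parse import quote_plus

# Hypothetical sketch: building a generic search-engine query that might
# turn up an accessible alternative from a document's title.
# "www.example-search.com" is a placeholder, not a real engine.

def search_query(site, title):
    terms = 'site:%s "%s" accessible' % (site, title)
    return "http://www.example-search.com/?q=" + quote_plus(terms)

print(search_query("example.org", "Annual Report 2007"))
```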

If a site were to base its WCAG 2.0 conformance on the behavior of a
third-party search engine, then yes, that is fragile, and they would be
holding themselves hostage to the changing behavior of that third-party
engine.  But since no one is advocating that, I still do not understand
how this is a problem.

I am still hoping for two or more examples of sites (that have
accessible versions) where the accessible versions cannot be turned up
from the corresponding inaccessible versions.  Should I throw this
request over the wall to WAI-IG?  That list is low-traffic nowadays too.
Received on Tuesday, 15 May 2007 17:17:26 UTC
