
Re: Proposed change to priority wording

From: Ian Jacobs <ij@w3.org>
Date: Tue, 05 Oct 1999 09:43:45 -0400
Message-ID: <37FA0091.105F7BD@w3.org>
To: Denis Anson <danson@miseri.edu>
CC: w3c-wai-ua@w3.org
Denis Anson wrote:
> 
> Ian,
> 
> I don't think we are saying the same thing here.  As I read it, currently
> you can meet a guideline by either providing the functionality natively, or
> by enabling assistive technology to provide that functionality.

I would agree with that to the extent that this is done by exporting
information, using conventions, etc. However, your conformance doesn't
rely on whether a dependent UA actually does the complementary work.

We can safely remove the dependent UA part from the Priority statements
because we are now only concerned with conformance of mainstream UAs.
Therefore, they must make information available to other software, which
satisfies the second half of your statement above.

>  "Browsers"
> that provide speech, for example, are using add-on technologies (such as
> Jaws) on top of an existing browser which enables them to provide speech.
> 
> Checkpoint 7.1 specifies that content be available in alternative
> representations.  This could be taken to mean that the browser must support
> speech, Braille, and other alternative representations. 

What version of the Guidelines are you quoting? Checkpoint 3.1 says
that you must have access to alternative representations of content,
meaning equivalents provided by the author. Perhaps we should clarify
this in the document, since it appears to be a source of confusion.

>  But the user agent
> itself probably won't be generating the speech.  It will be making the
> content available to an add-on screen reader.  So long as this can be met by
> enabling AT, the browser would be conformant if it makes content accessible,
> perhaps via DOM.  But if it must generate speech natively, that is a problem
> for almost everyone!
> 
> As I read your proposed change, you would require speech as a native
> capability.

No. If you support speech, you must satisfy the speech-related
checkpoints. However, there is no requirement that all UAs support
speech. We do require support for the keyboard API, but beyond that
there is no requirement to support any particular input or output
device or content type; the only requirement is to support the standard
APIs when a UA chooses to support a particular device.

 - Ian
Received on Tuesday, 5 October 1999 09:44:03 UTC