
RAW MINUTES: W3C WAI User Agent Telecon 28 September 2000

From: Hansen, Eric <ehansen@ets.org>
Date: Fri, 29 Sep 2000 14:54:34 -0400
To: "UA List (E-mail)" <w3c-wai-ua@w3.org>, "Jon Gunderson (E-mail)" <jongund@ux1.cso.uiuc.edu>, "Ian Jacobs (E-mail)" <ijacobs@w3.org>
Message-id: <B49B36B1086DD41187DC000077893CFB8B43D3@rosnt46.ets.org>
Version: 29 September 2000, 1:00 hrs
WAI UA Telecon for September 28th, 2000

Telecon Time: 2:00 pm to about 3:45 pm Eastern Standard Time, USA


Chair: Jon Gunderson

Scribe: Eric Hansen

Gregory Rosmaita
Tim Lacy
Rich Schwerdtfeger
Ian Jacobs (first 45 minutes)

Jim Allan
Harvey Bingham
Charles McCathieNevile
David Poehlman
Kitch Barnicle


Review Action Items (see details below)


	1. Confirm FTF meeting date and location at AOL headquarters near
Dulles Airport on November 16-17


	1. Issue 317: Proposal to reduce the number of applicability clauses
for conformance to a checkpoint

JG: The proposal establishes groups of checkpoints for conformance. The
proposal eliminates two of the applicability provisions.

GR: This relates to Issue 294, which arose from Ian's conversation with
Tantek [Mac Internet Explorer developer]. 

JG: GR, would you please post your suggestion to the list?

IJ: Eric has pointed out problems with comparing [ratings for different
types of user agents]. In order to compare conformance claims, one needs to
ensure comparable features. The proposal [helps accomplish this by
associating] a label with a group of checkpoints (i.e., a text set, audio
set, visual set, speech set, etc.). This makes sense.

One will not be able to claim conformance regarding the "core" set of

This proposal will reduce ambiguity. We have long had an implicit assumption
that if one does any audio, one has to do all the audio checkpoints. This
proposal makes that more explicit and makes it so that the claimant can see
all the relevant checkpoints for a given set.

You don't need to talk about applicability for user agents [regarding media
types]. Claims are shorter. There is less to say. There is no need to
provide a whole list of inapplicable checkpoints. It will be easy to compare
and construct claims. You can use whatever sets (labels) you want.

I'd like to keep the text module [text set] separate from others.

Text is taken to be visual text. Next higher level [in the visual thread],
which is "visual", includes the "text set" plus others, including

JG: I proposed fewer sets. I proposed a graphical group (see memo) that
included "text":  graphical, audio, video, and speech.

IJ: I would be OK with a larger "graphical" category. [etc.]

EH: If Ian is short on time, could we put in document and then resolve
details in next draft?

JG: (Let's discuss now.)

TL: I think the proposal makes a lot of sense. I like Jon's approach. It
seems to make sense from a developer's point of view: graphical, audio,
video, speech.

IJ: But what about Lynx?

JG: I think that we don't need to worry about Lynx.

IJ: (Discussion about "text" versus graphical) 

JG: I don't mind having separate sets for text, graphics, video, audio,
speech, and "all". Should color be a separate set?

GR: Should color be its own option?

JG: We could leave color out of "text" but leave in "graphics".

GR: I would rather use the term "visual" rather than "graphical".

IJ: But that does not work, since we have video, [which is also "visual"],
etc. We need to state what we mean by graphical.

GR: We don't want to have a "fuzzy" [ambiguous] label.

IJ: Our usage and definition of "graphical" should, hopefully, stay close to
the existing definition of "graphical". 

Resolved: Adopt Ian's proposal, include categories for text, 
graphical, audio, video and speech. Keep the definition of graphical close
to the existing definition.
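
For concreteness, a conformance claim under this scheme might be sketched
roughly as follows. This is purely illustrative; the claim wording, product
name, and set names below are placeholders, not language from any draft:

```text
Conformance claim (illustrative sketch only):

  User agent:  ExampleBrowser 1.0
  Sets claimed:
    - Text set:      all checkpoints in the set satisfied
    - Graphical set: all checkpoints in the set satisfied
    - Audio set:     all checkpoints in the set satisfied
  Sets not claimed: video, speech
```

Because each set carries its own label, claims can be compared set by set,
and a claimant need not enumerate inapplicable checkpoints one at a time.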


2. Is checkpoint 9.1 only about visual viewports (i.e., ones that are

JG: Regarding checkpoint 9.1, Ian points out that this seems relevant to
visual rendering. Charles made additional comments. Al said that this is
about orientation. 

GR: (Had missed this one due to missing list emails.)

IJ: Does "within the viewport" make sense in non-visual mode?

GR: Yes, it does, when [one wants] both visual and auditory [rendering (?)].
There is a problem with Web-based mail. There is no requisite focus.

IJ: The focus on the checkpoint.

TL: [** Inaudible ** (?)]

IJ: You would have to scroll back to it. The viewport for audio is zero

TL: If you have audio interface to tell [** Inaudible ** (?)] Go to previous
document [etc.].

IJ: Two-dimensional graphical focus has persistence. But not so with audio.

GR: (Brings up case of JavaScript email. Keyboard navigation from frame to
frame is unrecognized by assistive technologies.)

IJ: In the case we are talking about, the viewport should hold it [focus
(?)] persistently. That makes sense visually but not auditorily.

GR: If the viewport changes [** missed **]. Persistence is in the DOM.

IJ: I agree. (Summarizes.)

JG: GR, please propose alternative language.

GR: (Proposes clarification.)

IJ: (Clarifies by reading the checkpoint.)

JG: (Discusses implications). When there was a change of selection or change
in focus, the synthesizer should speak the last object [** Not sure if I
captured correctly **]. 

For content focus change it would start to speak whatever got the focus. For
focus, the audio should mark [indicate (?)] what is the end of the

TL: I will try to clarify. The purpose of this high-tech [** Inaudible **].
Audio presentation of link, the viewport [** Inaudible **]. If you follow a
link the whole page should come into view. [** Many comments inaudible **]
Graphical and audio mix.  [** Inaudible **].

IJ: Gregory's point is well taken. The checkpoint needs a note to say:
"Whether it stays in the viewport depends on the nature of the viewport." I
can add a clarifying comment. When shifting focus in audio, we want to hear
more.

GR: Persistence is not in the _modality_. 

IJ: Persistence of the _rendering_. 

Resolved: Add note to 9.1 explaining the differences in persistence in
visual, tactile, and audio viewports


3. Definition: Use of the term primary content


JG: Checkpoint 2.3 is the only checkpoint that uses the term "primary
content". There has been lots of discussion [on the list], mostly generated
by Eric. Basically, in some cases markup is easy to recognize as either
primary or alternative. For example, the image is primary content and the
alt-text is alternative. Sometimes markup is ambiguous. Or there may be
issues about the author's intent. For example, if an author intends a
signing video as primary content, then the text transcript is alternative.
Eric has raised this.

GR: In thinking about Eric's very good comments, I would like to [qualify]
the term 'primary content' by saying "author-designated primary content". 

The example of IMG and "alt-text" is not a good one; IMG can have many
other uses. It can promote interoperability. There is a bias. The IMG
object can be richer. One can allow "alt" to be used for usability as well
as for accessibility.

To me, the term primary content should designate the [meaning or concept]
that is behind the actual [symbols]. (Gives an example of "rose".)

EH: Do you mean something like "abstract meaning"?

GR: Yes. To my mind, primary content means something like "abstract
meaning". Eric's proposed terms seem appropriate. [etc.].

EH: What I found was that the term "primary content" was so loaded with
different meanings that my goal was to find a term that was clean from
unnecessary meanings. We have [two pieces of content, first] an 'equivalent'
and [second,] the 'equivalency target' -- the thing for which the
'equivalent' provides essentially the same function. They are associated by
the 'equivalency relationship'. These terms are neutral in important ways.
They say nothing about the methods of encoding or storing the information.
They say nothing about whether either is required by specification. They
steer clear of whether either is [specifically] _intended_ for people with
disabilities or for people without disabilities. One can add meanings as one
needs to [(by adding other terms) <meaning intended but not stated by Eric>].

GR: The approach seems elegant and extensible.

EH: I tried to make the terms as simple as possible.

GR: We have to be guarded (careful). I hear from the "grass roots" [actual
users]. They wonder "What have they [WAI] accomplished?" and "Why is the
message not getting through?" Semantics are important. You have done a good
job in making the terms neutral. You have guarded against [problematic
meanings or connotations]. Based on what you have said, I can live with
[it].
EH: (Comments)

GR: (Discussion about how Lynx handles images and text and how titles and
hyperlinks are treated in certain contexts.) 

EH: I am not sure that I understand exactly the situation you are referring
to, but it reminds me of something that I have not discussed [extensively]
on the [UA] list and which may not be important to the current document, but
which I think is important [more generally]. To my mind, the concept of text
equivalent can be more encompassing than what is commonly understood. For
example, to my mind, a text equivalent may include "title" information,
"summary" information, etc. Thus, a text equivalent may be in pieces.

GR: I think that a table summary serves an _orienting_ purpose. [Etc.]

GR: The concept of an 'equivalency target' is useful language. It is
sensible and seems to capture the necessary semantics adequately, [etc.]

JG: Then it is resolved to accept the terms proposed by Eric: equivalent,
equivalency target, etc.

GR: Very good.

JG: We need to communicate these new definitions to ATAG and WCAG.

Resolved: Adopt Eric's proposal
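
In markup terms, the adopted vocabulary might map onto a simple HTML
fragment like the following sketch (the file name and text are invented,
echoing Gregory's "rose" example):

```html
<!-- The rendered image is the "equivalency target"; the alt text
     "A red rose" is its "equivalent"; the alt attribute expresses
     the "equivalency relationship" between the two. -->
<img src="rose.jpg" alt="A red rose">
```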


4. New Issue About Scope

EH: Since we have finished the list of open issues, I would like to bring up
an issue that I will treat more fully on the list. Specifically, if there
are _essential parts_ of our WAI vision of what it takes to "retrieve and
render Web content" [definition of Web 'user agent'], but which are
_outside_ the scope of the UAAG document, then we need to signal that fact
and give some rationale as to why this is the case. For example, if some
capabilities, such as Braille output or perhaps screen magnification (in
contrast to font enlargement) or higher-level screen-reader-type navigation
and output capabilities, are believed to be essential parts of our
accessibility vision but are not within the scope of the document because we
rely on the existence of other software, such as _assistive technologies_,
to provide those capabilities, then we need to explain that. [Etc.]

5. New issue from EH on scope: Eric will propose text for scope of the
document in terms of built-in accessibility features and compatibility with
assistive technology



JG: WG members should consider this next draft their last call document for
review (prior to the last call for the public). 


Open Action Items

    1. IJ: Propose text for a note explaining the implementation issues
related to providing user agent generated content through the DOM

    2. GR: Proposed repair checkpoints

    3. KB: Submit technique on providing information on current item and
number of items in search

    4. RS: Send information (if you can) about tagging for information for
improving performance

    5. JG: Talk to Ian about adding a column to the impact matrix for
supporting authors in creating accessible content

New Action Items

    1. EH: Will propose text to be added to the guidelines document to
discuss the scope and the limitations of the current document

Completed Action Items

Eric G. Hansen, Ph.D.
Development Scientist
Educational Testing Service
ETS 12-R
Princeton, NJ 08541
609-734-5615 (Voice)
E-mail: ehansen@ets.org
FAX 609-734-1090
Received on Friday, 29 September 2000 15:10:23 UTC
