Raw minutes of 16 November UAWG face-to-face at AOL

Hello,

Minutes are available in HTML form [1] and are quoted below as text.

 -Ian
-- 
Ian Jacobs (jacobs@w3.org)   http://www.w3.org/People/Jacobs
Tel:                         +1 831 457-2842
Cell:                        +1 917 450-8783


               Minutes from 16 November UAWG face-to-face at AOL

   Nearby: [1]Agenda · [2]Meeting page

      [1] http://www.w3.org/WAI/UA/2000/11/ftf-agenda.html
      [2] http://www.w3.org/WAI/UA/2000/11/ua-meeting

Participants

   At AOL: Bijal Shah (Netscape), Debbie Fletter (Accessibility point
   person at AOL), Jon Gunderson (Chair), Denis Anson, Eric Hansen, Rich
   Schwerdtfeger, Al Gilman, Ian Jacobs (Scribe), Harvey Bingham, Scott
   Totman (AOL)

   Phone: David Poehlman, Mickey Quenzer, Jim Allan, Gregory Rosmaita
   (after lunch)

Introduction

   JG: Please note that if we don't have implementation experience, we
   will have to spend time at Candidate Recommendation status

   EH: I would not want the document to get stale in CR for 6 months.

   RS: This is a complex document. Few UAs will conform when the document
   becomes a Recommendation. If there are really sticky issues, we should
   push them off to the next version.

   AG: A guidelines document is somewhat different from a lower-level
   technical specification. It's not clear that W3C understands how to
   handle guidelines entirely. I think it's ok to have some checkpoints
   be future-looking.

   DA: We need to remember that this document is to promote
   accessibility, and we shouldn't sacrifice accessibility in order to
   get the document out faster.

   JG: If we know about a problem but don't have solutions, we may want
   (in another document) to spend time in CR to get developer input.

   MQ: It is possible to note in the document which issues are important
   but not entirely implemented yet.

Issues

  [3]Issue 321 Equivalency relationships and the wording of checkpoint 2.3

      [3] http://server.rehab.uiuc.edu/ua-issues/issues-linear-lc2.html#321

   Refer to [4]email from Al on bindings.

      [4] http://lists.w3.org/Archives/Public/w3c-wai-ua/2000OctDec/0312.html

   JG: I think there are two things going on in parallel
    1. WCAG says to create equivalents, ATAG says to help people create
       them, UAAG says to make them available
    2. Hierarchical definition track and use of precise language.

   DP: ASCII art as braille is not very helpful

   EH: WCAG requires text equivalents for non-text content. But generally
   speaking, an equivalency target doesn't have to be accessible.

   AG:
    1. It is critical that user agents implement the format. Talking
       about author's intent is problematic (and how to capture it in the
       format). We want the user agent to inspect the markup and to offer
       substitutes. But the user agent needs to give the user the
       ultimate choice.
    2. I think we are in agreement on requirements, but not language.
       There are some cases where polarity is clear (e.g., IMG/alt). But
       in the discussion of equivalence, we are also addressing the case
       (e.g., SMIL), where the "ruling" case is not clear.

   JG: I think in PF they are moving away from specific markup to more
   general solutions that also benefit accessibility.

   EH: I think that some requirements such as making all alternatives
   available (including those that don't have clear accessibility
   implications such as alternative languages) fall under checkpoint 2.1.
   I think that the definition of equivalency is an assertion about
   accessibility of pairs of content.

   RS: I think the bottom line is text equivalents.

   AG: Because of cases like SMIL, where the accessibility impact may not
   be in the markup, if you don't capture the general case in the UAAG,
   you miss the special case of accessibility. I think that alternative
   languages are an accessibility requirement.

   EH: Refer to my comments to the SYMM WG (Member-only?) about needing
   additional markup to identify accessibility content explicitly. If the
   markup is insufficient, you can't have rational support for
   accessibility.

   AG: I think that you should make the link to accessibility not in the
   definitions but in the checkpoints or at the guideline level.

   EH: If you unbind the terms from the accessibility implication, the
   WCAG definition of non-text element falls apart. It doesn't handle,
   e.g., ascii art and scripts. These consist of text characters but are
   "non-text elements". So if you define text element as only being
   composed of text characters, you break the WCAG definition. If you are
   willing to tinker with WCAG language, you can shift the accessibility
   criterion to other definitions or spell it out in the checkpoint.

   AG: A fuzzy definition is not a big problem for WCAG because it is
   speaking to the human author. If there were something that the UA had
   to do automatically in software, we would have a problem since the
   definition is not good enough. But we don't have that problem for
   alternatives.

   IJ: But we do for 1.5, for example.

   AG: But then you can talk to authors again (the UA doesn't have to do
   anything).

   RS: We need to be able to have access to the equivalent so that we can
   render it in other modes. In the UAAG, we have no control over what
   the author did. I think we should refer to WCAG and address the
   problems there.

   DA: I think our problem here is that we are talking about things with
   linguistic content.

   EH: Another way of talking about text elements is that they have a
   quality of "rendering independence". The trimodal approach is not
   totally open, not total independence.

   AG: I think the improvements that we're talking about are good and
   should be in WCAG 2.0. But for UAAG 1.0, you need to move forward.

   EH: I wouldn't use the word "equivalent" however - I would use another
   term.

   IJ: We already proposed "alternative"

   AG: Check out how ATAG uses "alternative". That should work.

   /* AG notes that people are reading glossary entries as definitions */

   IJ: So rationale for broader 2.3:
     * Some languages have inadequate markup
     * Some languages are designed to use more general markup
       (equivalents not just for accessibility)

   Resolved:
     * Broaden 2.3 to include all recognized alternatives. This is
       broader than WCAG requirements to ensure that the user has access.

   Action IJ, EH, AG: Propose new definitions for terms in question
   (equivalence, text element, etc.)

  [5]Issue 322

      [5] http://server.rehab.uiuc.edu/ua-issues/issues-linear-lc2.html#322

   Refer to issue 321.

  [6]Issue 323 Using accessibility APIs rather than standard APIs to make
  non-W3C based content accessible

      [6] http://server.rehab.uiuc.edu/ua-issues/issues-linear-lc2.html#323

   RS: I think you can differentiate standard APIs used for keyboard and
   mouse access to support physical disabilities from the actual
   rendering to the screen (e.g., in the case of Java or vector
   graphics). The accessibility API provides the same information in a
   better format than the output API.

   IJ: Note that 1.2 still helps ATs that use an offscreen model. Be
   careful about removing the requirement to use the standard output
   APIs.

   RS: MSAA and Java are about device-independent access (at the
   component level), access to pre-rendered content. They do not support
   the standard OS features for mobility access - that's the role of the
   application.

   JG: We can split 1.2 into input APIs and output APIs (support for
   higher-level first, otherwise standard output APIs).

   AG: So ATs that don't implement MSAA lose.

   RS: There are cases where MSAA doesn't support all text. They're
   fixing this.

   IJ: Note that in Princeton we explicitly decided to require all
   standard APIs, and to suggest higher-level APIs (not require them).

   RS: Only recently did MSAA add access to element content. The Java
   approach: we knew we had to run across several operating systems, so
   we created an API to access text independent of platform. In that
   particular case, you have a solution that is the "only solution" for
   that platform.

   Proposal: Change 1.2 to be "Use accessibility APIs, or if you don't,
   use standard device APIs".

   RS:
     * Use the accessibility APIs for the target platform (e.g., Java,
       Windows, etc.)
     * Where those APIs don't provide access to all content, or where
       they do not support the system APIs for mobility access, use the
       standard system APIs (e.g., for drawing text). For Java, you
       would have to do this for mobility access features, system
       high-contrast settings, and fonts.
       DA: MSAA doesn't handle mobility access features: sticky keys
       RS: For input: MouseKeys, SerialKeys, StickyKeys, RepeatKeys,
       BounceKeys, ... For output: high-contrast and font size, font
       family.

   AG: There's an issue about the integration of input functionalities
   with output functionalities.

   RS draws a diagram showing input that goes through device-independent
   layer, then system access feature layer. Output goes through system
   access features layer first, then device independent layer.

   RS: UAs must:
     * Not circumvent the standard device-independent layer.

   JG: So we are saying:
     * Don't circumvent standard input device APIs.
     * Use accessibility APIs, or std output device APIs where they don't
       do the job.

   RS: Support accessibility APIs plus the DOM. Where they don't support
   text, use system APIs. Note that lots of people implement the
   accessibility APIs.

   AG: If the UA supports two ways to get at the information, that's
   enough.

   RS: Support system access features (e.g., sticky keys, etc.) [Covered
   by 5.9].

   RS: JDK 1.4, when it comes out, will support all of DOM Level 2.

   AG: Distinguish three classes of API: MSAA/Java (access), DOM, device
   APIs.

   Resolved:
     * Change 1.2 to require implementation of available standard
       accessibility APIs, and where these APIs do not provide the
       functionality required by this document, support standard device
       APIs.

   AG: Make clear that information available as text must be available as
   text to the ATs (bits written to the screen don't count). Make the
   text case a clear example.

   IJ: Note that the Note needs to be edited in light of this. And the
   part about not bypassing the standard output API is deleted.

   AG: I think that the fact that the spec supports access to information
   through both the access level and at the DOM level is part of the
   reasoning why all of the info needn't be available at the device
   level.

   RS: Need to ensure that there's documentation about how to have access
   to these APIs.

  [7]Issue 324 How do developers interpret the phrase "appropriate for a
  task" in checkpoint 6.2

      [7] http://server.rehab.uiuc.edu/ua-issues/issues-linear-lc2.html#324

   /* Scott Totman arrives */

   IJ: Are we, in effect, requiring all conforming user agents to
   implement one or more W3C specs?

   JG: People can create XML interfaces to documents in formats that are
   not W3C formats.

   IJ: How do you reduce the instances where UAAG requirements need
   interpretation?

   DP: WCAG requires use of accessible formats. Adobe is trying hard to
   make a user agent that is responsible for rendering their content. I
   think they fall within our purview, but their format may not fall in
   the scope of WCAG.

   RS: Note that the latest release of MSAA doesn't support access to
   tables.

   JG: Recently I was playing with SPSS (a statistics package) and you
   could output the results as HTML. One idea is to require output in at
   least one W3C format.

   IJ: That makes it an authoring tool.

   AG: Is a piece of software that doesn't implement at least one W3C
   spec really a Web app?

   AG: One approach is to say that we don't have enough experience with
   the accessibility process for PDF (e.g., WCAG 1.0 doesn't cover it) and
   therefore that shouldn't be our focus today.

   HB: XSLT could be a valid way to get around this, but even after the
   transformation you might have an inaccessible result due to lack of
   information in PDF to begin with.

   RS: We are starting to see formatting objects on the Web...

   AG: I agree with Ian - to what extent can we write this document to be
   format-independent, and to what extent should it push people to use
   W3C formats? This is a balancing act we are stuck with. I think it's a
   practical problem that for W3C formats, we have access to the specs
   and it's easier to be clear about general principles. Yes, we'd like
   to write functional requirements to help Adobe promote accessible
   practices, but it's not so clear that we are in a position to write
   general, clear requirements.

   AG: The realities that we're looking at are like APIs: W3C formats,
   like APIs, deliver the best engineered solution today for
   accessibility. We should have something in here to promote those
   formats.

   HB: We have problems when W3C Recommendations are in conflict.

   IJ: Note that "available" has some wording around it in techniques
   about implementation schedules.

   AG: You shouldn't have to go to the Techniques document to get this
   information.

   JG:
     * If you use W3C specs, conform to them.
     * If you don't support a W3C format, support an accessible format.

   EH: We could say that we don't have specific criteria for identifying
   what is "appropriate for a task" and leave it at that.

   HB: These specs need to be open.

   IJ: I have a problem saying "accessible spec" since we don't have a
   spec that explains what that means.

   AG: You need the format plus the software. You can say in a Note that
   the developer needs to implement functionalities in the manner of W3C
   specs.

   EH: This reminds me of our discussion about the scope of our repair
   requirements. When you talk about outputting PDF as HTML, that's a
   repair functionality.

   AG: Support formats that can conform to WCAG.

   IJ: Issues at stake:
    1. P2: Implement formats that allow WCAG-conformant content
    2. P2: Conform to implemented W3C Recommendations
    3. Note: Support deprecated features (legacy requirements)
       AG: Developers will do this anyway because of user base.
    4. Note: Implement the latest version (improved accessibility, we
       hope) that includes accessibility features.
       RS: Support the version that has the latest accessibility features
       (which may not be the very latest version of the spec).
    5. Note: Define "available" a little better
       IJ: This seems to come down to time schedules.

   AG:
     * Conform
     * Use W3C formats
     * Use formats that enable WCAG-conformant authoring

   Action IJ: Draft new language for 6.2

   /* 12:30 Lunch */

  [8]Issue 325 Checkpoint 5.5: API notification of content change in one
  viewport that causes change in another

      [8] http://server.rehab.uiuc.edu/ua-issues/issues-linear-lc2.html#325

   Resolved: This is covered as part of 5.5. Include as a Note or
   technique.

   AG: The example is in the DOM Level 2 Mutation events module.

   JG: MSAA is another example: it sends events to ATs when content
   changes.

   RS: The association between viewports is not part of DOM Level 2.
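
   A minimal Java sketch of the DOM Level 2 Mutation events mechanism AG
   points to, assuming a DOM implementation that supports the
   org.w3c.dom.events module (the document instance and event type are
   illustrative):

     import org.w3c.dom.Document;
     import org.w3c.dom.events.Event;
     import org.w3c.dom.events.EventListener;
     import org.w3c.dom.events.EventTarget;

     public class ChangeNotifier {
         // Register for mutation events so an AT (or a dependent viewport)
         // is told when content changes, instead of polling rendered output.
         public static void watch(Document doc) {
             // Valid only if the DOM implementation supports the events module.
             EventTarget target = (EventTarget) doc;
             target.addEventListener("DOMSubtreeModified", new EventListener() {
                 public void handleEvent(Event evt) {
                     System.out.println("Content changed at: " + evt.getTarget());
                 }
             }, false);
         }
     }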

  [9]Issue 326 What if the standard APIs do the wrong thing?

      [9] http://server.rehab.uiuc.edu/ua-issues/issues-linear-lc2.html#326

   Resolved [10]per 323.

     [10] http://www.w3.org/WAI/UA/2000/11/minutes-20001116#issue-323

  [11]Issue 327 Add requirement for support of charset expected of each
  API?

     [11] http://server.rehab.uiuc.edu/ua-issues/issues-linear-lc2.html#327

   AG: Proper character encoding is required for proper text handling.

   AG: This could be a requirement that is included in a general "conform
   to specs" requirement. Otherwise, I think this needs to be a separate
   requirement for handling text properly, and that is very important for
   accessibility.

   Resolved: Include a P1 requirement for proper support of character
   encodings for each supported API. You can't break text.

   Action IJ: Get wording from Martin for this requirement (e.g.,
   "conform", "implement", etc.)

  [12]Issue 328 Checkpoint 4.12: "Words" per minute bounds do not scale
  internationally.

     [12] http://server.rehab.uiuc.edu/ua-issues/issues-linear-lc2.html#328

   IJ: One option is to make the bounds informative for English.

   GR: If we do, we should get input from non-English speech engines to
   suggest other bounds.

   GR: Some speech synthesizers allow rate control in terms other than
   "words per minute"

   /* Question of 5% increments */

   MQ: With JFW, you can use page down (granular navigation)

   MQ: Some of this depends on hardware (e.g., incremental changes)

   DP: I think it would be a clear requirement for software synthesizers
   to provide different granularities of rate control.

   DP: Now that my technology allows me to change rates, I do this quite
   often.

   Resolved:
     * Requirement: Allow the user to configure playback rate according
       to the range offered by the speech synthesizer.
     * Informative:
          + Rate depends on language
          + Provide some example ranges for English and other languages
          + Tell people that, for ease of use, they need to offer
            granularities of control (e.g., big jumps or small jumps,
            ability to change rate on the fly).
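
   A sketch of the resolved requirement, assuming a hypothetical
   synthesizer interface that reports its own supported range; the user
   agent clamps the configured rate to that range rather than hard-coding
   words-per-minute bounds:

     // Hypothetical synthesizer interface; real engines report their own
     // range, in engine-specific units (words per minute is only one).
     interface SpeechSynthesizer {
         float getMinimumRate();
         float getMaximumRate();
         void setRate(float rate);
     }

     public class RateControl {
         // Apply the user's configured rate, limited to what the engine
         // supports.
         public static void applyUserRate(SpeechSynthesizer synth, float requested) {
             float rate = Math.max(synth.getMinimumRate(),
                          Math.min(requested, synth.getMaximumRate()));
             synth.setRate(rate);
         }
     }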

  [13]Issue 329 Checkpoint 2.7: Clarification required about boundaries
  of "recognized but unsupported"

     [13] http://server.rehab.uiuc.edu/ua-issues/issues-linear-lc2.html#329

   Refer also to [14]issue 362

     [14] http://www.w3.org/WAI/UA/2000/11/minutes-20001116#issue-362

   IJ: Top down:
     * Is perfect support for a language that the user doesn't understand
       an accessibility problem?
       DA: This could be a problem for users with cognitive disabilities.
       One idea is to allow the user to say "don't give me content in
       these languages".
     * Support for language, but resources not available
     * Support for language, but language specified by author unknown
     * No support for language

   IJ: There are different issues for graphical rendering and speech
   rendering. For graphical, the encoding (should be) sufficient to tell
   the UA which character (though the UA may not have glyphs). For speech,
   you need more than encoding information; you need natural language
   information.

   Apparent requirements:
     * Alert that there is lack of support for content in some language
     * Indication in context of where lack of support occurs
     * Skip over content in a language that isn't supported

   IJ: (Phill Jenkins comment #3): Why is this an accessibility issue?

   AG: This is an issue for speech users more than cognitive users (due
   to serial access).

   IJ: (Phill Jenkins comment #4): What if UA/AT doesn't know what
   languages are supported?

   AG: They can allow pass-through if they want. This is user control -
   the user needs to be able to say "don't pass this off".

   Resolved: Delete "marked up in a recognized but".

   Notes:
     * This is one switch, not a per-language requirement
     * There may be cases where the conforming UA supports a language and
       a speech synthesizer does not, or vice versa.

  [15]Issue 330 Definition: Natural language / Writing system / Script

     [15] http://server.rehab.uiuc.edu/ua-issues/issues-linear-lc2.html#330

   Resolved:
     * Add "script" to the glossary. Point to Unicode definition.
     * In checkpoint 7.5, talk about script, not natural language.
     * In checkpoints 2.7 and 8.5, leave natural language

  [16]Issue 331 Add a requirement for configurability based on natural
  language preferences?

     [16] http://server.rehab.uiuc.edu/ua-issues/issues-linear-lc2.html#331

   AG: This would be nice to have, but may be too big a new requirement.
   This is all available in CSS2, by the way.

   Resolved:
     * Add notes to checkpoints for speech rate and text rendering (4.1)
       indicating that the developer should consider per-language
       configurability.
     * In definition of profile, mention per-language profiles.
     * The WG recognizes that per-language configurability is a usability
       gain.

   GR: We should talk to the I18N WG at the plenary in 2001.
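
   As an illustration of per-language configurability, a sketch using a
   hypothetical settings type keyed by language tag, with a fallback when
   no per-language profile has been configured:

     import java.util.HashMap;
     import java.util.Map;

     public class LanguageProfiles {
         // Hypothetical per-language rendering settings.
         public static class Settings {
             final String voice;
             final float speechRate;
             public Settings(String voice, float speechRate) {
                 this.voice = voice;
                 this.speechRate = speechRate;
             }
         }

         private final Map<String, Settings> byLanguage = new HashMap<String, Settings>();
         private final Settings fallback = new Settings("default-voice", 1.0f);

         public void configure(String languageTag, Settings settings) {
             byLanguage.put(languageTag, settings);
         }

         // Choose settings for the natural language of the content being
         // rendered; fall back to the general profile otherwise.
         public Settings settingsFor(String languageTag) {
             Settings s = byLanguage.get(languageTag);
             return (s != null) ? s : fallback;
         }
     }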

  [17]Issue 332

     [17] http://server.rehab.uiuc.edu/ua-issues/issues-linear-lc2.html#332

   Proposed: A P1 requirement that the user must be able to choose from
   all available dictionaries and to override author-specified and user
   agent repair of unmarked up natural language.

   JG: HPR 2.5 already does this. I suspect that JFW does this as well.

   IJ: For graphical rendering, NN lets me choose encodings.

   AG: Lynx allows you to force encodings as well.

   EH: Is this a requirement just for users with disabilities? Or does it
   affect everyone equally? Not sure that this should be a P1
   requirement.

   EH: I don't think I support the P1 level because, in part, we should
   be expecting WCAG-conformant content.

   MQ: I don't think this is a P1 requirement. It's not a disability
   issue.

   RS: I don't think that this is a P1 requirement.

   IJ: Who feels that this is an accessibility issue specifically: AG,
   DA, GR, DP.

   Is not an accessibility issue: BS, ST, RS, MQ, HB

   EH: I could support a P3 requirement, but not if several types of
   support are required (charsets, etc.)

   GR: We do have a requirement for access to all content.

   IJ: Strong support for speech P2 requirement for dictionary selection:
   DP, GR, DA, AL

   Mild support: EH, IJ

   No/Low support: Everyone else...

   IJ: Note that 4.14 is about configuration, not control.

   Resolved: P2 requirement for configuration of preferred dictionary.

   JG: I think we should spend more time on this in the next version of
   the guidelines. Also, current technology is already doing this.

  [18]Issue 333 Checkpoint 4.2: Clarification required about what "all
  text" means

     [18] http://server.rehab.uiuc.edu/ua-issues/issues-linear-lc2.html#333

   Resolved: This is a clarification and the WG supports the proposal:
     * The UA may use another font family for text content that can't be
       rendered in the user's preferred font family.

  [19]Issue 334 Checkpoint 7.5: Input to search capability not always
  "plain text" (may be speech, braille)

     [19] http://server.rehab.uiuc.edu/ua-issues/issues-linear-lc2.html#334

   IJ: I think that this is not a search functionality (string matching),
   but an input method issue (that will vary greatly).

   JG: We require standard I/O, so I think this is covered.

   AG: There are technologies like SoundEx that do phonetic matching, but
   we haven't included such a requirement in this document.

   Resolved: This requires a clarification - matching within the
   character set of the document. For information about input, refer to
   the API checkpoints.

  [20]Issue 335 Checkpoint 9.5: Need to consider international input
  methods in single-key requirement

     [20] http://server.rehab.uiuc.edu/ua-issues/issues-linear-lc2.html#335

   JG: We don't have a requirement for single-key character input.

   Resolved: No change to requirement. Perhaps add clarification that 9.5
   is not about character input.

  [21]Issue 336 Checkpoint 9.2: Delete "accessibility" from "OS
  accessibility conventions"?

     [21] http://server.rehab.uiuc.edu/ua-issues/issues-linear-lc2.html#336

   EH: Change this to P2 and say more strongly "Do not" instead of
   "Avoid".

   JG: Note that 5.8 is a P2 to use OS conventions.

   DA: You should be able to provide other input configs if they are
   better.

   GR: There are two things going on:
    1. Avoid conflicts with system conventions
    2. Don't mess with bindings explicitly for the purpose of
       accessibility.

   IJ: We might add at the end of the checkpoint "for input".

   EH: You can better justify the P1 by narrowing the scope this way.

   AG: My problem with saying "for input" is that you are creating a
   total input/output division and that's not how GUIs work. You
   shouldn't interfere with some output features either (that may work
   with input configs, e.g., sound sentry).

   AG: It's important to support conventions of the OS even if they are
   not specifically for accessibility (e.g., F1 bound to help) - that
   standardization promotes accessibility.

   RS: The default input config should respond to OS accessibility
   conventions.

   JG: Might add a note to 5.8 that using the std keyboard config
   promotes accessibility.

   EH: Is there a danger that access features might be so broad as to
   impose burdens on AT developers (who would have less space to work
   in)?

   Resolved:
     * In 9.2, change "avoid" to "do not provide" (or some similar
       wording that conveys a "must" requirement).

  [22]Issue 337 Conformance: Implementing the standard API for the
  keyboard "after IME"

     [22] http://server.rehab.uiuc.edu/ua-issues/issues-linear-lc2.html#337

   AG: I think that on some platforms, there may be several APIs for the
   keyboard.

   IJ: Then use the plural in 1.3.

   Proposed:
     * Change from singular to plural in 1.3 (standard APIs for the
       keyboard).
     * Add note to highlight the international scenario.

  [23]Issue 338 Editorial: Edits to Guideline 1 prose re: easy access

     [23] http://server.rehab.uiuc.edu/ua-issues/issues-linear-lc2.html#338

   Resolved: Incorporate editorial change.

  [24]Issue 339 DOM Level 2 requirement for HTML since returned to
  Working Draft

     [24] http://server.rehab.uiuc.edu/ua-issues/issues-linear-lc2.html#339

   RS: Proposed:
     * Use DOM Level 2 Core for access to HTML content. No HTML module
       support in UAAG 1.0.
     * When DOM Level 2 HTML module becomes a Recommendation,
       publish a new UAAG that includes an HTML module requirement. This
       avoids forcing user agents to implement a DOM Level 1 HTML spec
       with known bugs.
     * If DOM Level 2 HTML module goes to Rec before we go to PR, then we
       will include that requirement at PR.

   Action IJ: Talk to the Director about this proposal
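
   A sketch of the access RS's proposal relies on: reading HTML element
   content and attributes through the generic DOM Core interfaces alone,
   without the DOM HTML module's specialized interfaces (the element and
   attribute names are just examples):

     import org.w3c.dom.Document;
     import org.w3c.dom.Element;
     import org.w3c.dom.NodeList;

     public class CoreAccess {
         // Walk IMG elements using only DOM Core; no HTMLImageElement
         // (DOM HTML module) is needed to reach the alt text.
         public static void printAltText(Document doc) {
             NodeList images = doc.getElementsByTagName("IMG");
             for (int i = 0; i < images.getLength(); i++) {
                 Element img = (Element) images.item(i);
                 System.out.println("alt: " + img.getAttribute("alt"));
             }
         }
     }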

  [25]Issue 340 Editorial: Use "refer to" for references, otherwise
  "see" for informative cross-refs.

     [25] http://server.rehab.uiuc.edu/ua-issues/issues-linear-lc2.html#340

   Resolved: Adopt the proposal

  [26]Issue 341

     [26] http://server.rehab.uiuc.edu/ua-issues/issues-linear-lc2.html#341

   Refer to [27]issue 329

     [27] http://www.w3.org/WAI/UA/2000/11/minutes-20001116#issue-329

  [28]Issue 342 Editorial Checkpoint 3.7: Clarification to checkpoint
  wording

     [28] http://server.rehab.uiuc.edu/ua-issues/issues-linear-lc2.html#342

   Action IJ: Deal with it.

  [29]Issue 343 Editorial: Checkpoint group header for multimedia
  checkpoints v. continuous-time

     [29] http://server.rehab.uiuc.edu/ua-issues/issues-linear-lc2.html#343

   Action IJ: Deal with it.

  [30]Issue 344 Conformance: Delete reference to Internet Media Type

     [30] http://server.rehab.uiuc.edu/ua-issues/issues-linear-lc2.html#344

   AG: I think that in the content type section, it's useful to talk
   about trimodal rendering

   Resolved: Delete the reference to RFC 2046. These are about data types,
   not the user experience.

  [31]Issue 345 Checkpoint 1.1: Is requirement concrete and observable?

     [31] http://server.rehab.uiuc.edu/ua-issues/issues-linear-lc2.html#345

   JG: We've had three comments on this one:
     * Greg Lowney: Problem with all-or-nothing approach
     * How do you observe that requirement has been met?
     * Part about re-implementing input methods is still unclear.

   JG: Some options:
     * Leave as is (with clarifications)
     * Narrow the scope to core functionalities (as we have done for
       keyboard bindings)
     * Delete

   DA: Consider voice input: Naturally Speaking is close to hands-free.
   ViaVoice requires some mouse/keyboard input.

   JG: Full voice input may not be in scope for this document.

   JG: We wanted keyboard access for all functionalities. If we reduce
   the requirements for other input devices (mouse, voice), we will
   address some requirements from users.

   IJ: Is full functionality through the mouse an accessibility
   requirement?

   DA: Yes. Some people only have access through head pointers (and
   cannot use voice input).

   JG: You can exclude voice from your conformance claims.

   IJ: It might be useful to have a "voice" content label (but that would
   be the first input label in this section, so I'm not so sure...)

   GR: I would prefer to see 1.1 stay as is. In the conformance claim,
   the developer would have to specify which input APIs are supported.

   AG: It's P1 to have all functions available through an API. If you've
   got the keyboard API there, the fact that you add another interface
   that does some of the functions should not degrade you from having
   Single-A conformance.

   RS: If the OS is controlled by voice, your application should also be
   controlled by voice, period. If it's controlled by keyboard and mouse,
   then those should be the input APIs.

   IJ: That's covered by checkpoint 1.2.

   Some ideas:
     * Require everything through the keyboard API (including cursor
       motion, double-clicks, etc.)
     * Demote 1.1 to P2 for mouse, voice, other input APIs. This assumes
       that you can emulate everything through the keyboard API.
     * Narrow scope of 1.1 to certain functionalities.
     * Allow conformance for input methods other than the keyboard
       (namely pointing device and voice).
     * Note that people can make conformance claims with other software
       such as an onscreen keyboard.
       JG: HoTMetaL integrates an onscreen keyboard.

   AG: A voice browser running at the end of a telephone is not really
   the focus of these guidelines.

   DA: Keyboard APIs already let you do mouse things and vice versa.
   There are examples of full mouse accessibility through the keyboard
   API.

   JG: Our goal with 1.1 is to access functionalities of the UA. The APIs
   used to do that are not as big a deal.

   EH: What if we limit the scope of 1.1 to the keyboard and pointer APIs?

   EH: What about this: All functionality has to be available through the
   keyboard or mouse API.

   GR: Proposed: Leave 1.1 as is, but add that to conform you may require
   emulation via a standard API.

   RS: Not required; here's an example: you may do voice recognition, but
   be able to do some things in a device-independent manner. If you claim
   support for a specific input modality, then the user with a disability
   should expect that all functions are available through that modality.

   JG: Introduce modality in the section on conformance. Talk about the
   checkpoints you must satisfy to make a claim for that modality.

   EH: I hear:
     * Rich wants to talk about devices/modalities in 1.1 and APIs in
       1.2: if you allow voice input, you must allow control of all
       functionalities through voice.
       AG: This is about the user, not the API.
       AG: You could satisfy 1.1 for three modalities by implementing
       just one standard API (for the keyboard).
       IJ: It sounds like 1.1 is about native user interface and 1.2 is
       about APIs.
       DA: You should not be able to conform for voice modality if you
       don't allow access to all functionalities through voice.

   Resolved:
     * Edit 1.1 to talk about input modalities (not APIs) in the
       conforming user agent's user interface. Leave as a P1
       all-or-nothing requirement.
     * For increased conformance granularity, include in the conformance
       section the ability to claim conformance for individual
       modalities. Keyboard is always a required modality. The other two
       that UAAG 1.0 will deal with are pointing device and voice.
     * There is no longer a need for clarifying language about
       keyboard-support-through-the-mouse-and-vice-versa.
     * Remind people that they can conform with several components.

Received on Thursday, 16 November 2000 17:57:42 UTC