Minutes of UAWG Writers Meeting 3 August 2010

Minutes:
http://www.w3.org/2010/08/03-ua-minutes.html

IRC Log
http://www.w3.org/2010/08/03-ua-irc

Text of minutes:

    [1]W3C

       [1] http://www.w3.org/

                                - DRAFT -

    User Agent Accessibility Guidelines Working Group Teleconference

03 Aug 2010

    See also: [2]IRC log

       [2] http://www.w3.org/2010/08/03-ua-irc

Attendees

    Present
           Jim, Greg, Jeanne, Kim, Kelly

    Regrets
    Chair

    Scribe
           greg

Contents

      * [3]Topics
          1. [4]Principle 2
          2. [5]2.1.3 Accessible Alternative
          3. [6]2.1.1
          4. [7]2.1.2
          5. [8]Accessibility API reference
          6. [9]2.1.5
          7. [10]2.1.6 Properties
          8. [11]2.1.4
          9. [12]2.1.7 Timely Communication
         10. [13]4.1.12 Specify preferred keystrokes:
         11. [14]3.13.1 & 2
         12. [15]3.11
      * [16]Summary of Action Items
      _________________________________________________________

    <trackbot> Date: 03 August 2010

    <kford> Jeanne, can you paste a link to the doc again?

    <jeanne>
    [17]http://www.w3.org/WAI/UA/2010/ED-UAAG20-20100802/MasterUAAG20100
    802.html

      [17] 
http://www.w3.org/WAI/UA/2010/ED-UAAG20-20100802/MasterUAAG20100802.html

Principle 2

    <jeanne> [writing assignments]

    <AllanJ> 2.1.3 reference
    [18]http://lists.w3.org/Archives/Public/w3c-wai-ua/2008OctDec/0050.h
    tml

      [18] 
http://lists.w3.org/Archives/Public/w3c-wai-ua/2008OctDec/0050.html

    <jeanne> Jeanne and Kim were discussing the need for a stable point
    of regard; for example, when increasing text size in a long
    document, the point of regard should be preserved.

2.1.3 Accessible Alternative

    Existing wording: 2.1.3 Accessible Alternative: If a feature is not
    supported by the accessibility architecture(s), provide an
    equivalent feature that does support the accessibility
    architecture(s). Document the equivalent feature in the conformance
    claim. (Level A)

    <scribe> New proposed rewording, intent and example:

    1. If an item of the user agent user interface cannot be exposed
    through the platform accessibility architecture, then provide a
    ("separate but equal") equivalent alternative that is exposed
    through the platform accessibility architecture.

    a. Users need to be able to carry out all tasks provided by the user
    agent. The purpose of this SC is to ensure that when circumstances
    do not allow direct accessibility to some items in the user agent,
    there is an accessible option that will let them complete their
    task.

    b. For example, the user agent provides a single, complex control
    for 3-dimensional manipulation of a virtual object. This custom
    control cannot be represented in the platform accessibility
    architecture, so the user agent provides the user the option to
    achieve the same functionality through an alternate user interface,
    such as a panel with several basic controls that adjust the yaw,
    spin, and roll independently.


    Jan notes that they originally chose "feature" over "item" because
    all the functionality that is grouped together in the default user
    interface should also be grouped together in the alternative UI,
    rather than having to go elsewhere for one portion of it. Thinking
    of features as a whole as working or not working, rather than
    breaking it down to the level of individual controls.

    Kelly notes that in his workplace, the term "feature" is used for
    very large chunks of functionality that include controls, behaviors,
    interactions, etc.

    Kelly refers to things like "spell checking" as a feature that
    includes numerous menu items, dialog boxes, controls, hotkeys, etc.

    Jan: ATAG has high-level guidance as well, equivalent to Greg's
    point that this general idea can apply to almost anything in the
    document.

    <Jan> JR: Notes "applicability notes" construct in ATAG2.0:
    [19]http://www.w3.org/TR/2010/WD-ATAG20-20100708/#part_a

      [19] http://www.w3.org/TR/2010/WD-ATAG20-20100708/#part_a

    <Jan> JR: and
    [20]http://www.w3.org/TR/2010/WD-ATAG20-20100708/#part_b

      [20] http://www.w3.org/TR/2010/WD-ATAG20-20100708/#part_b

    This discussion started because Greg argued that 2.1.3 is just a
    single instance of the general rule that would apply to every SC in
    this document.

    That is, all *functionality* needs to be available through an
    accessible user interface, but not every user interface element. For
    example, I believe that it's acceptable to have a toolbar button
    that lacks keyboard access as long as the command is available
    through an accessible menu system.

    I think I sent email to the list on April 8 on this topic,
    discussing scoping and exceptions.

    I think eventually we'll be able to get rid of 2.1.3, as it will be
    redundant to some more general SC or conformance rule.

    2.1.3 Accessible Alternative: If a component of the user agent user
    interface cannot be exposed through the platform accessibility
    architecture, then provide an equivalent alternative that is exposed
    through the platform accessibility architecture.

    Intent: Users need to be able to carry out all tasks provided by the
    user agent. The purpose of this SC is to ensure that when
    circumstances do not allow direct accessibility to some items in the
    user agent, there is an accessible option that will let them
    complete their task.

    Example: The user agent provides a single, complex control for
    3-dimensional manipulation of a virtual object. This custom control
    cannot be represented in the platform accessibility architecture, so
    the user agent provides the user the option to achieve the same
    functionality through an alternate user interface, such as a panel
    with several basic controls that adjust the yaw, spin, and roll
    independently.

    Jeanne suggested "component". Jan suggests "functionality".

    <kford> Think about replacing component with functionality and
    appropriate wording.


2.1.1

    <jeanne> Intent for 2.1.1

    <jeanne> Computers, including many smart phones, have accessibility
    features built into the operating system. Some well-known APIs for
    the Windows operating system are: MSAA, IAccessible2, [more].
    Wherever technically possible, support the existing accessibility
    APIs.

    <jeanne> Examples:

    <jeanne> Browser A is developing a new user interface button bar for
    their Microsoft Windows product. The developer codes a call to the
    MSAA API for the functionality.

    <jeanne> We didn't try to put together the list of resources, but
    that will be needed.

    <jeanne> SC: 2.1.1 Platform Accessibility Architecture: Support a
    platform accessibility architecture relevant to the operating
    environment. (Level A)

    <kford> Issue: Ensure UAAG docs have fully updated references to
    the various accessibility APIs.

    <trackbot> Created ISSUE-72 - Ensure UAAG docs have fully updated
    references to the various accessibility APIs. ; please complete
    additional details at
    [21]http://www.w3.org/WAI/UA/tracker/issues/72/edit .

      [21] http://www.w3.org/WAI/UA/tracker/issues/72/edit

2.1.2

    My only suggestion would be to clarify in the Intent paragraph that
    this is about API for programmatic access (with assistive
    technology), rather than about all the "accessibility features built
    into the operating system".

    <jeanne> 2.1.2 Name, Role, State, Value, Description: For all user
    interface components including the user interface, rendered content,
    and alternative content, make available the name, role, state,
    value, and description via a platform accessibility architecture.
    (Level A)

    <jeanne> The information that assistive technology requires is the
    Name (component name), the Role (purpose, such as alert, button,
    checkbox, etc), State (current status, such as busy, disabled,
    hidden, etc), Value (information associated with the component, such
    as the data in a text box, the position number of a slider, the
    date in a calendar widget), and Description (user instructions about
    the component).

    <jeanne> For every component developed for the user agent, pass this
    information to the appropriate platform accessibility architecture
    or application programming interface (API). Embedded user agents,
    like media players, can pass Name, Role, State, Value and
    Description via the WAI-ARIA techniques.

    <jeanne> Example for browser (not complete)

    <jeanne> A browser is developing a component to search a listing of
    files stored in folders. The text box to enter the search terms is
    coded to pass the following information:Name=

    <jeanne> State

    <jeanne> STATE_FOCUSABLE

    <jeanne> STATE_SELECTABLE

    <jeanne> example for embedded media player using WAI-ARIA

    <jeanne> A media player implements a slider to control the sound
    volume. The developer codes the component to pass the following
    information to the accessibility API:

    <jeanne> Name = Volume control

    <jeanne> Role = Slider

    <jeanne> States & Values

    <jeanne> aria-valuenow

    <jeanne> The slider’s current value.

    <jeanne> aria-valuemin

    <jeanne> The minimum of the value range

    <jeanne> aria-valuemax

    <jeanne> The maximum of the value range

    <jeanne> Description

    <jeanne> aria-describedby = 'Use the right or left arrow key to
    change the sound volume.'
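
    A minimal sketch (TypeScript/DOM, not from the minutes) of how the
    media player slider above could expose Name, Role, Value and
    Description through WAI-ARIA; the ids and the initial value of 35
    are illustrative assumptions.

      // Sketch: a custom volume slider exposing Name, Role, Value and
      // Description via WAI-ARIA attributes. Ids and values are illustrative.
      const hint = document.createElement('span');
      hint.id = 'volume-hint';
      hint.textContent =
        'Use the right or left arrow key to change the sound volume.';

      const slider = document.createElement('div');
      slider.tabIndex = 0;                                  // keyboard focusable
      slider.setAttribute('role', 'slider');                // Role
      slider.setAttribute('aria-label', 'Volume control');  // Name
      slider.setAttribute('aria-valuemin', '0');            // value range
      slider.setAttribute('aria-valuemax', '100');
      slider.setAttribute('aria-valuenow', '35');           // current Value
      slider.setAttribute('aria-describedby', hint.id);     // Description, by reference

      document.body.append(hint, slider);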

    Does the phrase "rendered content, and alternative content" in the
    SC include generated content, or do we need to add that explicitly?

    <AllanJ> to me generated content comes from CSS

    <AllanJ> it probably should be explicitly included.

    <AllanJ> we had an item in UAAG10 concerning this.

    We can address it either in the SC explicitly or in the glossary
    entry for rendered content.

    <jeanne> ACTION: jeanne to Add "generated content" to the SC 2.1.2
    [recorded in
    [22]http://www.w3.org/2010/08/03-ua-minutes.html#action01]

    <trackbot> Created ACTION-419 - Add "generated content" to the SC
    2.1.2 [on Jeanne Spellman - due 2010-08-10].

    Kelly questions why 2.1.2 and 2.1.6 are separate.

    <AllanJ> UAAG10 CSS generated content is 6.9; it was a P2

Accessibility API reference

    <AllanJ> Microsoft's Active Accessibility (MSAA)
    msdn.microsoft.com/en-us/library/ms971310.aspx

    <AllanJ> User Interface (UI) Automation
    msdn.microsoft.com/en-us/library/ms747327.aspx

    <AllanJ> Gnome Accessibility Toolkit (ATK)
    library.gnome.org/devel/atk/

    <AllanJ> KDE Assistive Technology Service Provider Interface
    (AT-SPI) accessibility.kde.org/developer/atk.php

    <AllanJ> Mac Accessibility API
    [23]http://developer.apple.com/ue/accessibility/

      [23] http://developer.apple.com/ue/accessibility/

    <AllanJ> IAccessible2
    accessibility.linuxfoundation.org/a11yspecs/ia2/docs/html/ ,
    www-03.ibm.com/able/open.../open_source_windows.html ,

    <AllanJ> Accessibility API Cross reference
    www.mozilla.org/access/platform-apis

    <AllanJ> PDF Accessibility API Reference
    www.adobe.com/devnet/acrobat/pdfs/access.pdf

2.1.5

    <jeanne> SC: 2.1.5 Write Access: If the user can modify the state or
    value of a piece of content through the user interface (e.g., by
    checking a box or editing a text area), the same degree of write
    access is available programmatically. (Level A)

    <jeanne> Intent for 2.1.1

    <jeanne> Computers, including many smart phones, have accessibility
    features built into the operating system. Some well-known APIs for
    the Windows operating system are: MSAA, IAccessible2, UIAutomation,
    [more]. Wherever technically possible, support the existing
    accessibility APIs.

    <jeanne> MT that was 2.1.1 sorry.

    <jeanne> Intent for 2.1.5

    <jeanne> If the user can affect the user interface using any form of
    input, the same effect can be achieved with assistive technologies.
    It is more reliable for the assistive technology to directly control
    the state, rather than having to simulate controls.

    <jeanne> Examples for 2.1.5:

    <jeanne> A volume control slider in a media player can be set
    directly to the desired value, e.g. the user can speak "Volume 35%".

    <jeanne> A check box with a tri-state value, e.g. "checked,
    unchecked and mixed", where the keystrokes needed to achieve the
    desired setting differ depending on the current state. The user can
    directly set the value when the control is exposed programmatically.

    <kford> Possible revision to Intent:

    <kford> If the user can affect the user interface using any form of
    input, the same effect can be achieved through programmatic access.

    <kford> It is often more reliable for the assistive technology to
    use the programmatic method of access versus attempting to simulate
    mouse or keyboard input.
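
    A minimal sketch (TypeScript/DOM) of the distinction kford
    describes: writing the state directly and notifying listeners,
    versus simulating the keystrokes the control expects. The #volume
    control and the target value are hypothetical.

      // Sketch: programmatic write access sets the state directly and
      // notifies listeners; simulating keystrokes is the fragile alternative.
      const volume = document.querySelector<HTMLInputElement>('#volume')!;

      // Direct, programmatic write access (preferred):
      volume.value = '35';
      volume.dispatchEvent(new Event('input', { bubbles: true }));

      // Simulated input (what an AT must otherwise attempt):
      for (let i = 0; i < 35; i++) {
        volume.dispatchEvent(
          new KeyboardEvent('keydown', { key: 'ArrowRight', bubbles: true })
        );
      }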

2.1.6 Properties

    2.1.6 Properties: If any of the following properties are supported
    by the accessibility platform architecture, make the properties
    available to the accessibility platform architecture: (Level A)

    * (a) the bounding dimensions and coordinates of rendered graphical
    objects

    * (b) font family of text

    * (c) font size of text

    * (d) foreground color of text

    * (e) background color of text.

    * (f) change state/value notifications

    * (g) selection

    * (h) highlighting

    * (i) input device focus

    * Intent of Success Criterion 2.1.6:

    These properties are all used by assistive technology to provide
    alternative means for the user to view or navigate the content, or
    to accurately create a view of the user interface and rendered
    content.

    * Examples of Success Criterion 2.1.6:

    • Kiara loads a new version of a popular web browser for the first
    time. She puts her screen reader into an "explore mode" that lets
    her review what is appearing on the screen. Her screen reader uses
    the bounding rectangle of each element to tell her that items from
    the menu bar all appear on the same horizontal line, which is below
    the window's title bar.

    • Kiara is using a screen reader at a telephone call center. The Web
    application displays caller names in different colors depending on
    their banking status. Kiara needs to know this information to
    appropriately respond to each customer immediately, without taking
    the time to look up their status through other means.

    • Max uses a screen magnifier that only shows him a small amount of
    the screen at one time. He gives it commands to pan through
    different portions of a Web page, but then can give it additional
    commands to quickly pan back to positions of interest, such as the
    text matched by the recent Search operation, text that he previously
    selected by dragging the mouse, or the text caret, rather than
    having to manually pan through the document searching for them.
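
    A minimal sketch (TypeScript/DOM) showing where the 2.1.6 properties
    can be read from rendered content before being passed to the
    platform accessibility architecture; the #menu-bar selector is an
    illustrative assumption.

      // Sketch: reading the 2.1.6 properties for one rendered element.
      const el = document.querySelector('#menu-bar')!;
      const box = el.getBoundingClientRect();   // (a) bounding dimensions/coordinates
      const style = getComputedStyle(el);

      const properties = {
        left: box.left, top: box.top, width: box.width, height: box.height,
        fontFamily: style.fontFamily,            // (b) font family of text
        fontSize: style.fontSize,                // (c) font size of text
        color: style.color,                      // (d) foreground color of text
        backgroundColor: style.backgroundColor,  // (e) background color of text
        focused: el === document.activeElement,  // (i) input device focus
      };
      // (f) change notifications could come from a MutationObserver,
      // and (g) selection from document.getSelection().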

2.1.4

    <AllanJ> 2.1.4 Programmatic Availability of DOMs: If the user agent
    implements one or more DOMs, they must be made programmatically
    available to assistive technologies. (Level A)

    <AllanJ> • Intent of Success Criterion 2.1.4:

    <AllanJ> User agents (and other applications) and assistive
    technologies use a combination of DOMs, accessibility APIs, native
    platform APIs, and hard-coded heuristics to provide an accessible
    user interface and accessible content
    ([24]http://accessibility.linuxfoundation.org/a11yspecs/atspi/adoc/a
    11y-dom-apis.html). It is the user agent's responsibility to expose
    all relevant content to the platform accessibility API.
    Alternatively, the user agent must respond to requests for
    information from APIs.

      [24] 
http://accessibility.linuxfoundation.org/a11yspecs/atspi/adoc/a11y-dom-apis.html

    <AllanJ> • Examples of Success Criterion 2.1.4 :

    <AllanJ> In user agents today, an author may inject content into a
    web page using CSS (generated content). This content is written to
    the screen and the CSS DOM. The user agent does not expose this
    generated content from the CSS-DOM (as per CSS recommendation) to
    the platform accessibility API or to the HTML-DOM. This generated
    content is non-existent to an assistive technology user. The user
    agent should expose all information from all DOMs to the platform
    accessibility API.
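
    A minimal sketch (TypeScript/DOM) of the example above: content
    generated by a hypothetical rule .price::before { content: "$"; }
    is visible on screen and in computed style, but absent from the HTML
    DOM, so it is lost unless the user agent exposes it through the
    platform accessibility API.

      // Sketch: a "$" rendered by .price::before exists only in computed
      // style, not in the HTML DOM.
      const price = document.querySelector('.price')!;

      console.log(price.textContent);   // e.g. "42" -- no "$" in the DOM
      console.log(getComputedStyle(price, '::before').content);   // '"$"'

      // Unless the user agent maps this generated text into the platform
      // accessibility API, an assistive technology user never encounters it.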

    <AllanJ> A web page is a compound document containing HTML, MathML,
    and SVG. Each has a separate DOM. As the user moves through the
    document, they are moving through multiple DOMs. The transition
    between DOMs is seamless and transparent to the user and their
    assistive technology. All of the content is read and all of the
    interaction is available from the keyboard regardless of the
    underlying source code or the respective DOM.

    <AllanJ> • Related Resources for Success Criterion 2.1.4:

    <AllanJ> o www.w3.org/TR/SVG/svgdom.html

    <AllanJ> o www.w3.org/TR/MathML/chapter8.html

    <AllanJ> o www.w3.org/TR/DOM-Level-2-HTML/

    <AllanJ> o www.w3.org/TR/DOM-Level-2-Style/

    <AllanJ> o [25]https://developer.mozilla.org/en/gecko_dom_reference

      [25] https://developer.mozilla.org/en/gecko_dom_reference

    <AllanJ> o
    [26]http://developer.apple.com/mac/library/documentation/AppleApplic
    ations/Conceptual/SafariJSProgTopics/Tasks/DOM.html

      [26] 
http://developer.apple.com/mac/library/documentation/AppleApplications/Conceptual/SafariJSProgTopics/Tasks/DOM.html

    <AllanJ> o
    [27]http://msdn.microsoft.com/en-us/library/ms533050%28VS.85%29.aspx

      [27] http://msdn.microsoft.com/en-us/library/ms533050%28VS.85%29.aspx

    <AllanJ> o www.adobe.com/devnet/acrobat/pdfs/access.pdf

    <AllanJ> o www.w3.org/2004/CDF/

    <AllanJ> o dev.w3.org/2006/cdf/cdi-framework/

    <AllanJ> o www.w3.org/TR/CDR/

    Suggest changing "expose all relevant content to the platform
    accessibility API." to "expose all of its user interface and
    relevant content through the platform accessibility API, as this is
    the only approach which lets assistive technology interact with all
    software on the platform without having to implement separate
    solutions for each."

    Change "This content is written to the screen and the CSS DOM. The
    user agent does not" to merely "This content is written to the
    screen, but the user agent does not." as the generated content is
    not "written to" the CSS DOM, but rather it is written to the screen
    based on formatting instructions in the CSS, after it is parsed into
    the CSS DOM.

    Kelly and I both think the last sentence of the Intent can be
    removed.

    A key factor for 2.1.4 is that in many cases the DOM exposes richer
    content than can be exposed through the platform API. For example,
    the HTML DOM would expose attributes such as a link destination and
    whether or not it should be opened in a new window, which are not
    part of the generic set of properties that can be exposed through
    MSAA and equivalents.

    That is the real reason why the DOM needs to be exposed *in addition
    to* exposing content through the platform accessibility API.
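
    A minimal sketch (TypeScript/DOM) of the kind of detail that lives
    in the HTML DOM but not in the generic property set of MSAA-era
    APIs: the link destination and whether it opens a new window.

      // Sketch: per-link details available from the DOM that a generic
      // accessibility API may not carry.
      for (const link of document.querySelectorAll<HTMLAnchorElement>('a[href]')) {
        const opensNewWindow = link.target === '_blank';
        console.log(link.textContent?.trim(), link.href, { opensNewWindow });
      }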

    <kford> Accessibility APIs at some level are abstracting data from a
    more robust source.

    <AllanJ> i agree with the removal of the last sentence of the
    intent.

    <kford> A DOM will usually have more details than an API specific
    to accessibility can provide.

    <AllanJ> example: page with 5 links and text. UA loads all info into
    the DOM, then it exposes it (automatically or on request) to
    relevant sources (rendering engine, accessibility API)

    <AllanJ> the a11y API can ask what is at location x,y, or what the
    children of element z are

    My comment above, that 2.1.4 is about exposing the DOM in addition
    to the platform API, would change the ending of the first example,
    etc.

    <jeanne> close action-418

    <trackbot> ACTION-418 Copy proposals 3.1.4, 3.11 general intent,
    3.11.1 specific intent, 3.11.1,4 & 5 Examples, and 3.13.1 from
    minutes of 02-08-2010. Put in the Guidelines Master and the Survey
    for 5 August. closed

    <Kim> 4.6.3 Match Found: When there is a match, the user is alerted
    and the viewport moves so that the matched text content is at least
    partially within it. The user can search for the next instance of
    the text from the location of the match.

    <Kim> 4.6.3

    <Kim> Intent

    <Kim> It is important for the user to easily recognize where a
    search will start from.

    <Kim> Example: Jules has low vision and uses a magnified screen. She
    frequently searches for terms that appear multiple times in a
    document that contains a lot of repetition. It is important that the
    viewport moves after each search so she can easily track where she
    is in the document.

    <Kim> 4.6.4 Alert on No Match: The user is notified when there is no
    match or after the last match in content (i.e., prior to starting
    the search over from the beginning of content).

    <Kim> 4.6.4

    <Kim> Intent

    <Kim> It is important for users to get clear, timely feedback so
    they don't waste time waiting or, worse, issue a command based on a
    wrong assumption. It is important during a search that users are
    informed when there is no match or that the search has reached the
    beginning of the document.

    <Kim> Example:

    <Kim> Dennis uses a screen reader. As soon as he gets a message that
    there is no match he goes on to search for something else. If he
    does not get a message he wastes time retrying the search to make
    sure there is not a match.

    <Kim> 4.6.5 Advanced Find: The user agent provides an accessible
    advanced search facility, with a case-sensitive and case-insensitive
    search option, and the ability for the user to perform a search
    within all content (including hidden content and captioning) for
    text and text alternatives, for any sequence of characters from the
    document character set.

    <Kim> 4.6.5

    <Kim> Intent:

    <Kim> Searching is much more useful when the user can specify
    whether case matters in a search and when the user can search
    alternative text.

    <Kim> Examples:

    <Kim> Dennis uses a screen reader. He wants to find all the
    instances of his friend Bill in a blog post about finances. He needs
    to specify case in order to avoid stopping at instances of "bill".
    Later, he searches for his friend's name in a blog post about poetry
    where the author never uses capital letters. In this instance he
    specifies that case does not matter.

    <Kim> Dennis remembers a portion of a caption for something he had
    seen before that he wants to find. He needs to be able to search
    on the caption.

    Re 4.6.3 Intent, it should address the portion of the SC about
    scrolling the window to show the match. That's important so that
    users don't have to hunt through the document for the match. The SC
    doesn't really address "recognize where the search will start from",
    as it only provides that on successive searches, not the initial
    search.

    Re 4.6.3, it seems like this is one of those SCs that are almost
    pointless, as it's hard to imagine a user agent not doing it
    already.

    <Kim> 4.6.3 rewritten Example: Jules has low vision and uses a
    magnified screen. She frequently searches for terms that appear
    multiple times in a document that contains a lot of repetition. It
    is important that the viewport moves and, if necessary, her screen
    scrolls after each search so she can easily track where she is in
    the document.
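
    A minimal sketch (TypeScript/DOM) of the 4.6.3/4.6.4 behaviour: a
    found match is scrolled at least partially into the viewport, the
    position is kept for the next search, and the user is told when
    there is no match. The find logic itself is assumed to exist
    elsewhere and is not shown.

      // Sketch: show a found match and remember it; report when nothing is found.
      let lastMatch: Element | null = null;

      function showMatch(match: Element | null): void {
        if (!match) {
          alert('No more matches.');                 // 4.6.4: notify on no match
          return;
        }
        match.scrollIntoView({ block: 'nearest' });  // 4.6.3: bring match into viewport
        lastMatch = match;                           // next search continues from here
      }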

    Re 4.6.4 I think you should include the terrible results of real
    world cases where user agents don't do this: the user keeps
    searching through the document again and again, without realizing
    they're just seeing the same matches over and over again.

2.1.7 Timely Communication

    <kford> 2.1.7 Timely Communication: For APIs implemented to satisfy
    the requirements of this document, ensure that programmatic
    exchanges proceed at a rate such that users do not perceive a delay.
    (Level A).

    <kford> Intent: Conveying information for accessibility can often
    involve extensive communication between a user agent, an
    accessibility API, document object model and end user interaction.
    The objective is to ensure that the end user does not perceive a
    delay when interacting with the user agent.

    <kford> Example:

    <kford> Bonita accesses her web browser with a speech input program.
    She navigates to a web page and speaks the name of a link she wants
    to click. The link is activated with the same speed as it would be
    if a mouse had been used to click the link.

    <kford> Resources:

    <kford> Insert something about performance and classifications.

    <kford> Note: This changes wording of the SC slightly.

    <AllanJ> it drops the parenthetical (for non-web-based user agents)

    Re 2.1.7 Intent, the interaction also includes the assistive
    technology program.

    Re 2.1.7 Intent, you might end with something akin to: "Users would
    find a noticeable delay between their key press and the response
    unacceptable, whether or not they are using assistive technology."

    <kford> Updated 2.1.7:

    <kford> 2.1.7 Timely Communication: For APIs implemented to satisfy
    the requirements of this document, ensure that programmatic
    exchanges proceed at a rate such that users do not perceive a delay.
    (Level A).

    <kford> Intent: Conveying information for accessibility can often
    involve extensive communication between a user agent, an
    accessibility API, document object model, assistive technology and
    end user interaction. The objective is to ensure that the end user
    does not perceive a delay when interacting with the user agent.

    <kford> Example:

    <kford> Bonita accesses her web browser with a speech input program.
    She navigates to a web page and speaks the name of a link she wants
    to click. The link is activated with the same speed as it would be
    if a mouse had been used to click the link.

    <kford> Resources:

    <kford> Insert something about performance and classifications.

    Another good example would be that when a user presses the tab key
    to move the focus to another button, the screen reader immediately
    says the name of that button, rather than making the user wait for a
    second or two.

    <kford> Sounds good.

    <kford> Update again to 2.1.7

    <kford> 2.1.7 Timely Communication: For APIs implemented to satisfy
    the requirements of this document, ensure that programmatic
    exchanges proceed at a rate such that users do not perceive a delay.
    (Level A).

    <kford> Intent: Conveying information for accessibility can often
    involve extensive communication between a user agent, an
    accessibility API, document object model, assistive technology and
    end user interaction. The objective is to ensure that the end user
    does not perceive a delay when interacting with the user agent.

    <kford> Example:

    <kford> Bonita accesses her web browser with a speech input program.
    She navigates to a web page and speaks the name of a link she wants
    to click. The link is activated with the same speed as it would be
    if a mouse had been used to click the link.

    <kford> Arthur is browsing a web page with a screen reader. As he
    tabs from link to link, the text of each link instantly appears on
    his braille display.

    <kford> Resources:

    <kford> Insert something about performance and classifications.

    <AllanJ> kelly +1

4.1.12 Specify preferred keystrokes:

    <kford> Adding my text for this SC and such but don't want to ruin
    the dialog that's going on.

    <kford> 4.1.12 Specify preferred keystrokes:

    <kford> 4.1.12 Specify preferred keystrokes: The user can override
    any keyboard shortcut including recognized author supplied shortcuts
    (e.g accesskeys) and user interface controls, except for
    conventional bindings for the operating environment (e.g., for
    access to help). (Level AA)

    <kford> Intent:

    <kford> Some users may be able to hit certain keys on the keyboard
    with greater ease than others. Assistive technology software
    typically has extensive keyboard commands as well. The goal of this
    SC is to enable the user to be in control of what happens when a
    given key is pressed and use the keyboard commands that meet his or
    her needs.

    <kford> Example:

    <kford> Laura types with one hand and finds keys on the left side of
    the keyboard easier to press. She browses to a web page and notices
    that the author has assigned access keys using keys from the right
    side of the keyboard. She opens a dialog in the user agent and
    reassigns the access keys from the web page to the left side of the
    keyboard home row.

    <kford> Elaine's screen magnification program uses alt+m to increase
    the size of the magnified area of the screen. She notices that in
    her web browser, alt+m is a hotkey for activating a home button that
    stops her from being able to control her magnification software. She
    opens a hotkey reassignment feature in the user agent, and sets
    alt+o to be the new hotkey for the home button. Her screen
    magnification software now works correctly.
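
    A minimal sketch (TypeScript/DOM) of one way recognized
    author-supplied accesskeys could be remapped to keys the user
    prefers, as in Laura's example; the mapping table is hypothetical
    user configuration, not a specified mechanism.

      // Sketch: remap author-supplied accesskeys to keys the user prefers.
      const userPreferred: Record<string, string> = { p: 'a', m: 's' };

      for (const el of document.querySelectorAll<HTMLElement>('[accesskey]')) {
        const preferred = userPreferred[el.accessKey.toLowerCase()];
        if (preferred) {
          el.accessKey = preferred;   // the user's choice overrides the author's
        }
      }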

    <AllanJ> Topic 3.13.1 again, new updates

    <AllanJ> • Intent of Success Criterion 3.13.1:

    <AllanJ> Users who use only the keyboard or screen readers need to
    be able to easily discover information about a link, including the
    title of the link, whether that link is a webpage, PDF, etc., and
    whether the link goes to a new page, opens a new user agent with a
    new page, or goes to a different location in the current page. This
    information makes navigating Web content quicker and easier, and
    gives users an expectation of what will happen upon link activation.

    <AllanJ> • Examples of Success Criterion 3.13.1:

    <AllanJ> • Robert, who uses a screen reader, needs to know whether a
    given link will automatically open in a new page or a new window.
    The browser indicates this information so he can discover it before
    he makes a decision to click on a link.

    <AllanJ> • Maria has an attention disorder; new windows opening are
    a large distraction for her. She needs to know whether a given link
    will automatically open in a new page or a new window. The browser
    indicates this information so she can decide not to follow a link
    that opens a new window.

    <jeanne> jeanne has put the 3.13.1 text into the document. I haven't
    done the earlier ones yet.

    <jeanne> close action-419

    <trackbot> ACTION-419 Add "generated content" to the SC 2.1.2 closed

    <kford> Anyone have an opinion on my text?

    <AllanJ> good stuff kelly

    <jeanne> ACTION: jeanne to update document and survey with Kim's
    draft of 4.6.3, 4.6.4, 4.6.5 (see rewrites) [recorded in
    [28]http://www.w3.org/2010/08/03-ua-minutes.html#action02]

    <trackbot> Created ACTION-420 - Update document and survey with
    Kim's draft of 4.6.3, 4.6.4, 4.6.5 (see rewrites) [on Jeanne
    Spellman - due 2010-08-10].

    <jeanne> ACTION: jeanne to update document with Kford's draft of
    4.1.12 from minutes
    [29]http://www.w3.org/2010/08/03-ua-minutes.html#item10 [recorded in
    [30]http://www.w3.org/2010/08/03-ua-minutes.html#action03]

    <trackbot> Created ACTION-421 - Update document with Kford's draft
    of 4.1.12 from minutes
    [31]http://www.w3.org/2010/08/03-ua-minutes.html#item10 [on Jeanne
    Spellman - due 2010-08-10].

3.13.1 & 2

    <AllanJ> Problems with 3.13.1.

    <AllanJ> SC wording

    <AllanJ> 3.13.1 Basic Link Information: The following information is
    provided for each link (Level A):

    <AllanJ> • (a) link element content,

    <AllanJ> • (e) new viewport: whether the author has specified that
    the resource will open in a new viewport.

    <AllanJ> Should ‘link’ be ‘anchor’, to differentiate from the
    ‘link’ in the HTML <head>?

    <AllanJ> Anchor (Link) element content includes ‘href’, ‘title’,
    ‘target’ (opening in a new window), ‘hreflang’ (language of the
    destination page), protocol (from the href), destination file type
    (from the href), character set of the destination page. If we
    include all of these as part of ‘link element content’ the SC will
    overlap all of 3.13.2. Since all of the information is available to
    the UA, suggest removing 3.13.2. If the developers will go to the
    effort of exposing the target (new window) they can do all of them.
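
    A minimal sketch (TypeScript/DOM) of how the per-anchor information
    listed above (href, title, target, hreflang, protocol, destination
    file type) can be read from a single link; the a#example selector is
    an illustrative assumption.

      // Sketch: per-anchor information discussed for 3.13.1, read from one link.
      const a = document.querySelector<HTMLAnchorElement>('a#example')!;
      const url = new URL(a.href);

      const linkInfo = {
        content: a.textContent?.trim(),           // (a) link element content
        title: a.title,
        opensNewViewport: a.target === '_blank',  // (e) new viewport
        destinationLanguage: a.hreflang,
        protocol: url.protocol,                   // e.g. "https:"
        fileType: url.pathname.split('.').pop(),  // rough destination file type
      };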

3.11

    Kim and I were normalizing terminology related to "focus" in 3.11,
    and are ready to begin writing Intent and Examples (a few are done).
    Our work in progress is in
    [33]https://docs.google.com/Doc?docid=0ASiGLIaAlHSKZGR3d3FrbWJfMjMzZ
    HJtemhuY3o&hl=en

      [33] 
https://docs.google.com/Doc?docid=0ASiGLIaAlHSKZGR3d3FrbWJfMjMzZHJtemhuY3o&hl=en

Summary of Action Items

    [NEW] ACTION: jeanne to Add "generated content" to the SC 2.1.2
    [recorded in
    [34]http://www.w3.org/2010/08/03-ua-minutes.html#action01]
    [NEW] ACTION: jeanne to update document and survey with Kim's draft
    of 4.6.3, 4.6.4, 4.6.5 (see rewrites) [recorded in
    [35]http://www.w3.org/2010/08/03-ua-minutes.html#action02]
    [NEW] ACTION: jeanne to update document with Kford's draft of 4.1.12
    from minutes [36]http://www.w3.org/2010/08/03-ua-minutes.html#item10
    [recorded in
    [37]http://www.w3.org/2010/08/03-ua-minutes.html#action03]

    [End of minutes]

Received on Tuesday, 3 August 2010 22:17:36 UTC