Raw minutes from 8 June UA teleconference

Hello,

After 6 phone calls following the meeting today, I forgot
to send out the minutes...

Here are the raw minutes from the meeting. Sorry for the
delays.

 - Ian

WAI UA Teleconf
8 June 2000

 Jon Gunderson (Chair)
 Ian Jacobs (Scribe)
 Denis Anson
 Gregory Rosmaita
 Harvey Bingham
 Dick Brown
 Jim Allan
 Kitch Barnicle
 Rich Schwerdtfeger

Regrets:

 Eric Hansen
 Tim Lacy

Next teleconference: June 15

Agenda [1]
[1] http://lists.w3.org/Archives/Public/w3c-wai-ua/2000AprJun/0406.html

1) Review of Action Items

Completed Action Items

    1.All: Review config and control definition proposal by EH

    2.All: Send situations/comments on why Checkpoint 4.8 should move
           to P1

    3.Editors: Update document based on MR proposal for control and 
      configure and the resolutions made during this telecon

    4.Editors: Cross-reference 4.8 and 4.10 and make clear that
               checkpoint 4.8 is for audio other than synthesized speech

    6.JG: Contact Denis Anson regarding the importance of control of
          objects within control objects

Continued Action Items

    5.IJ: Draft a preliminary executive summary/mini-FAQ for developers. 
          (No deadline.)

    7.CMN: Propose a technique that explains how serialization plus 
           navigation would suffice for Checkpoint 8.1.

    8.GR: Look into which checkpoints would benefit from audio examples
          in the techniques document.
     GR: I finally got info from David.

    9.KB: Propose a minimum or preferred set of functions that should be
          available for single-command configuration.
     KB: Later today.

2) Process status report
 
   IJ: Do people agree that we've made enough changes to
       return to last call?

   GR: If we've done our job, we shouldn't get many new comments.
       Also, we owe reviewers a chance to confirm our implementation.

   IJ: "Going back" means to get signoff on changes, but not to
       add new stuff. We need to be very conservative about changes;
       no new stuff if possible - only trying to clarify our existing
       requirements.

   JG: We already have a lot of good stuff in the document to get
       people started.

   Suggested path:

   1) 3-week last call.
   2) Skipping CR.
   3) Proposed Rec.

   Availability of WG during next two months:
   Good: DB, JA, IJ, KB (July better than June), GR, HB
   Bad : JG


   On resolving issues in a new review:

   JG: I propose telecons instead of a ftf meeting.

   IJ: I will be working on 
     a) A new draft of the WD for the WG
     b) Proposals to finish the min requirements.
     c) Integrating the min requirements into the document.

3) Next ftf?

   JG: Discussion in WAI CG the other day about meetings.
       Our charter says we have to meet 4 times/year.
   IJ: What about September?
   JG: Yes, end of September, early October?
   DB: Sounds good.
   DA: Mid-semester for me, but not a busy semester.   
   JG: We need to find a host...
   KB: Let's block out dates among ourselves as early as possible.
   HB: Note - WAP and other mobile groups haven't been involved
       in our activities. Perhaps we should invite them to participate.

   Action JG: Send proposal to Judy for a WG ftf meeting at the end of
   September. Look for a host involved with multimedia or mobile.
   
4) PR#285: Raise checkpoint 4.8 "Allow the user to configure the audio 
           volume" to priority 1
    http://cmos-eng.rehab.uiuc.edu/ua-issues/issues-linear.html#283

   DA: When there are background sounds (information) 
       drowning out voices, definitely a P1 issue. You need to
       be able to control the speech volume independently to improve
       the signal to noise ratio.

   GR: There are a lot of cases where sounds are used to provide 
       information and turning off background sounds prevents access
       to information.

   JG: You could turn off sounds and then turn them on again to
       get information. Sighted users have a "gestalt" view of the
       content - they get info from images and background sounds and
       other content at the same time, even if peripherally.

   HB: Implementation sounds like a mixer panel.

   IJ: Is there only ever one source of synthesized speech?
       If there are several sources, are we requiring the UA
       to support volume level for each one?

   JG: Recall that P2 for this checkpoint was in part based on
       the WCAG assumption that there's a P1 requirement for
       text equivalents for sampled sounds. Therefore, the 
       information would still be available, even if you
       turned off background sounds.

   GR: What if the source is streamed, so you can't recover it?

   IJ: That's an implementation issue...

   GR: Just because WCAG 1.0 requires a text equivalent doesn't
       mean that it will be provided. This requirement is
       P1 for cases (at least) where the text equivalent is
       not available.
   
   IJ: Are there any implementations of individual control of
       synthesized speech volume?

   JG: Yes. Current screen reader technology only allows configuration,
       not dynamic control. Real Audio has a control on their media
       player. In MS technology, you can control each voice volume
       independently. Part of the SDK (speech development kit).

   Straw poll:
     Make P1: DA, GR, JA, HB
     Don't know: JA, KB (but leaning to P1), IJ
     Leave P2: DB

   JG: There's no guarantee that if you implement this as a UA
       you will actually control the hardware. People could
       turn the sound way up and still not be able to hear it.

   IJ: Most speakers have physical controls.

   KB: What if you can't make the physical changes?

   IJ: Three issues:
     a) Control of volume by software (by the user).
     b) Relative volumes between audio and speech (bring
        up the voice, for example).
        HB: So we're talking about pre-mixing.
     c) Just volume control in general (for people
        who are hard of hearing).

   Resolved: Change 4.10 to be "Allow the user to 
     control synthesized speech volume relative to
     other audio sources. P1"

   DB: Are we ignoring something other than synth speech that
       should also be independently controlled? What about control
       of contrast, for example?

   IJ: Synth speech is a known important item. We will miss
      some. Contrast has not come up with reviewers...

   Proposed: Change 4.8 to P1 and say "If the operating
     system does not provide a way to change audio
     volume, allow the user to configure the audio 
     volume."

   KB: But like other requirements, this is implicit:
       you can meet a bunch of other requirements by just
       using the OS features.

   DB: Yes, we have assumed in some other cases that people
       will go outside the UA to accomplish certain tasks.

   Resolved: Leave 4.8 text as is; raise to P1.

   RS, DB: I can live with this.

   Action editors:
     1) Change priority; Reasoning - some users need this
     2) Emphasize use of system-level (global) volume control.
     3) Add to the definition of "native support" a note that
        the OS controls don't have to be in the UA's user interface
        itself. The UA uses the features, but doesn't have to
        provide direct access to those controls.
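
   Illustration (not discussed on the call; all names below are
   invented for this sketch): one way to read the pre-mixing idea HB
   raises above is that the UA keeps a separate user-set gain for
   synthesized speech and for other audio, applies them before mixing,
   and applies a global (system-level) volume on top. A minimal
   Python sketch under those assumptions:

      from dataclasses import dataclass

      @dataclass
      class VolumeSettings:
          speech_gain: float = 1.0  # user-set gain for synthesized speech
          audio_gain: float = 1.0   # user-set gain for all other audio
          master_gain: float = 1.0  # overall (system-level) volume

      def mix(speech, audio, settings):
          """Mix two equal-length sample buffers with independent gains."""
          out = []
          for s, a in zip(speech, audio):
              x = s * settings.speech_gain + a * settings.audio_gain
              x *= settings.master_gain
              out.append(max(-1.0, min(1.0, x)))  # clip to the valid range
          return out

      # Example: raise speech relative to background sound (checkpoint 4.10).
      settings = VolumeSettings(speech_gain=1.0, audio_gain=0.25)
      print(mix([0.5, -0.5], [0.8, 0.8], settings))

   The point of the sketch is only the separation of gains; how a real
   UA exposes or implements such controls (e.g. via OS mixer features)
   is left to the editors' action items above.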

5) Review of EH's proposed definition of "configure and control"
   http://lists.w3.org/Archives/Public/w3c-wai-ua/2000AprJun/0400.html

   Resolved: Adopt this definition.
