Raw minutes from 12 July teleconf

Agenda announcement: 

Participants: Jon Gunderson (Chair), Ian Jacobs (Scribe), Al Gilman,
Harvey Bingham, David Poehlman, Tim Lacy, Mickey Quenzer, Gregory
Rosmaita, Jim Allan, Rich Schwerdtfeger

Absent: Denis

Regrets: Eric Hansen, Dean Jackson, Jon Ferraiolo, Tantek Çelik

Previous meeting: 28 June 2001

Next meeting: 19 July

Reference document 22 June 2001 Draft


1. User Agent FTF Meeting, 13-14 September 2001, hosted by
   Microsoft (Redmond, WA).

   Please register!

   JG: We'll use this meeting to get implementations and commitments
   for implementations from developers.

   Already registered: IJ, JG, GR
   Intend to register: DP, HB, TL
   Still scheduling: RS, MQ (transportation issue)
   Regrets: JA, AG  
   Don't know: EH

   Also possible: Aaron Leventhal, Jon Ferraiolo

   Action TL: Write to WMP people at Microsoft to get someone to
   attend the face-to-face meeting.
   DP: Also, Jill Thomas is interested in attending. Java interface
   for Ebooks, serving content from the Web. Will be incorporating
   Java Swing classes.

   IJ: Invite people from assistive technologies.


0. Finalizing UAWG response to SVG WG.
   IJ: I want to send replies to SVG WG today or tomorrow.

   IJ: I will incorporate editorial comments from JG and RS:

   RS: I could read frustration in the comments of the SVG WG. I
   understand this: the infrastructure of their tools did not take
   accessibility into account early on and so it's more of a burden to
   change designs. We need early interaction with WGs to make 
   people aware beforehand.

   RS: We should make it clearer that on some devices, assistive
   technologies are not available.

   Resolved: Incorporate JG and RS editorial comments.

1. [Proposal Issue 517] Proposal to address nested time containers.

   AG: I think that this proposal might be too broad. I agree that for
   animated SVG, it may be that the only independently playable media
   object in the SVG is the root. But there are SMIL cases where
   components are independently playable, and you might want to play
   them alone. These are components that are synchronized in the
   author's play plan. Some users (e.g., users with hearing
   disabilities) might want independent access.

   AG: I would break up the issue:

     a) We may have some confusion still about what the units are for
     2.4, 4.4, and 4.5. People may read "element" as XML elements
     (as opposed to user interface units). I'm not sure we got past
     this miscommunication in the teleconference. 

   IJ: The definition of "animation" is effect-oriented.

   Refer to comments from AG:

   AG: I think that IJ's proposal is not necessary; you don't
   need to go that far. I think that in the SVG case, the
   subcomponents are typically not independently playable.

   AG: Suggested approach - "independently playable media object" is a
   heuristic concept. The set of media objects for which we make
   requirements always includes root time containers. But if you can
   turn subcomponents on and off, this is subject to 2.3. And then you
   get what you want. We want to make the distinction in the SMIL case,
   since SMIL is a loose bundling.
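   [AG's "loose bundling" point might be illustrated with a hedged
   SMIL sketch. This fragment is hypothetical and not from the
   meeting; the file names are invented. A SMIL par time container
   synchronizes media objects that could each be played alone:]

   ```xml
   <!-- Hypothetical SMIL 2.0 fragment: a <par> time container
        bundling two media objects. Each child is an independently
        playable media object; the container only synchronizes them
        per the author's play plan. File names are placeholders. -->
   <par>
     <video src="lecture.mpg"/>
     <audio src="narration.wav"/>
   </par>
   ```

   [A user with a hearing disability might, on this reading, want to
   play only the video track, which is why subcomponent on/off
   control falls under checkpoint 2.3.]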

   GR: It might be helpful to talk to Charles and Wendy about how one
   would author this type of SMIL content.

   AG: For SVG, it may be that an animation is always atomic. We will
   have to trust them.

   IJ: How do we talk about the root of an animation?

   AG: If the SVG element as a whole is the only independently
   playable media object that the author creates, then all of
   these requirements apply only to the root.

   IJ: I can't tell if the answer is "independently playable according
   to specification."

   GR: You might want to include an example of timing in the
   definition of conditional content.

   IJ: We already say "distinct audio sources" in 4.10. 

   JG: What markup is available in SVG to indicate that two
   things are distinct animations?

   JA: My understanding of SVG is that you can define separate
   time tracks. I don't know whether you can select different time
   tracks. I would assume that, provided the author created separate
   tracks, you should be able to turn them on and off.

   AG: I think that you're describing SMIL (and not the way that the
   SMIL timing module is used in SVG).
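   [AG's distinction might be illustrated with a hedged SVG sketch.
   This fragment is hypothetical and not from the meeting. In SVG, the
   SMIL timing module supplies declarative animation elements attached
   to shapes; the animations run on the document's single root
   timeline rather than as separately selectable tracks:]

   ```xml
   <!-- Hypothetical SVG fragment: two animate elements using SMIL
        timing attributes. Both belong to the root time container;
        there is no markup here for selecting one "track" alone. -->
   <svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
     <rect width="20" height="20">
       <animate attributeName="x" from="0" to="80" dur="5s"
                repeatCount="indefinite"/>
     </rect>
     <circle cx="50" cy="50" r="10">
       <animate attributeName="r" from="10" to="40" begin="2s" dur="3s"/>
     </circle>
   </svg>
   ```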

   IJ Proposal: We define "an animation" to be an independently
   playable animation. I will ask Chris Lilley if this makes sense in
   terms of the SVG review comments.


   - Define "an animation" to be content that produces a visual effect
   and is independently playable. (And is recognized as such.)

2. [Clarification] How operating environment requirements apply for
   embedded operating environments (e.g., Java in Windows)

   AG: I thought you should decline to clarify this. Where you have
   overlapping capabilities and nested environments, it should be on
   the UA developer to make interfaces as clear as they can.

   AG: When you have to pick your consistency, you have to
   think about it.

   AG: Java is a well-established environment with accessibility
   conventions and can be sufficient. [Scribe didn't minute AG's
   comment well.]


    - Just make suggestions:

    * You should satisfy the requirements by choosing the more
    accessible operating environment conventions. 

    * Consider consistency: if your UA is cross-platform (e.g.,
    Java), consider consistency cross-platform. If single-platform,
    consider consistency with the platform. If you've got a hybrid,
    pick one and inform the user.

    /* RS leaves */

3. [Proposal] Edits to text about speech output limitations.

   DP: Braille rendering is the result of content being delivered and
   then transformed into Braille. Speech rendering is a special
   application in these circumstances, typically an assistive
   technology. We have some *speech synthesis* stuff, but don't have
   speech output for, e.g., graphically rendered information. We
   don't have any requirements for screen reading.

   GR: You get different layers of processing with Braille than
   with speech.

   GR: Maybe we should call it "speech generation". Speech synthesis
   implies intelligence, speech generation does not.

   AG: I don't think we should strain to encapsulate what we do
   provide. We don't really cover the case of a voice browser.
   The document doesn't include comprehensive requirements for, e.g.,
   voice browsers.

   MQ: We're talking about the rendering of text with speech.

   DP: We don't talk about the rendering of text through speech. We
   talk about the rendering of speech markup. 

   DP: Please make it clear what speech means.

   IJ: The document distinguishes "voice" (input) from "speech"
   (output). I note that in Guideline 1 prose, "speech input"
   needs to be fixed.

   MQ: People may think speech means input.

    - We have no *requirements* specific to Braille rendering.
    - Change "speech" to "synthesized speech output" in the
    - Include a reference to the synthesized speech outputs
      in the limitations section. 

4. For rendered content/content.

  /* Long debate about the meaning of rendered content */

      - Split "content" label into "all content" and "rendered content"

5. [Proposal] Minimum size for fonts for checkpoint 4.1

   Adopt Al's proposal with editorial changes:

6. [Proposal] Checkpoint 10.4 highlight requirement and image maps

Issues not covered

- Issue 516.

- [Proposal] Since content focus and user interface focus are
   required, make characteristics normative

- [Clarification] Checkpoint 6.5 (alert of changes to content) does
   not apply to style changes

- AT survey/telecon compilation and summary

- CR and PR requirements

Completed actions

1. RS: Send pointer to information about universal access gateway to
the WG. Source:

2. GR: Review event checkpoints for techniques

3. GR: Rewrite different markup (list of elements) that 2.9 applies to,
for clarification.

Ian Jacobs (ij@w3.org)   http://www.w3.org/People/Jacobs
Cell:                    +1 917 450-8783

Received on Thursday, 12 July 2001 16:02:49 UTC