- From: Robin Berjon <robin.berjon@expway.fr>
- Date: Thu, 10 Nov 2005 17:59:08 +0100
- To: Al Gilman <Alfred.S.Gilman@IEEE.org>, wai-liaison@w3.org
- Cc: www-svg@w3.org
Dear PFWG & WAI,
Thank you very much for your comments. Here are the SVG WG's responses.
> This statement seems to indicate that UAAG guidelines are
> still under development. It is not clear whether "Once the
> guidelines are completed, a future version of this
> specification" refers to UAAG or to SVG.
The statement has been corrected, and our conformance section now
requires conformance to the UAAG Priority 1 guidelines.
> While the blanket references in Appendix D are necessary
> for completeness, the readers of this document will understand
> what you mean in various inline remarks about what processors
> should do if the pertinent sections of the UAAG10 are
> directly referenced at key points in the text.
We have added multiple references to UAAG throughout the text,
wherever we found a section to which they are applicable.
> We are not satisfied that this document yet meets
> the requirement to demonstrate accessible usage
> in its examples.
Since examples are not normative, the SVG WG believes that they can
be added during the CR phase as editorial improvements. We would very
much like to work with you to define a set of examples that
demonstrates accessible usage better than the current draft does. We
have taken some of your example suggestions into account, but would
like to push this further with you.
> Could not find where in the document it discusses the user
> disabling play of audio and video. No matter; wherever it is
> mentioned, this provision should apply to the inhibition of
> animation as well and should cite UAAG10 Checkpoint 3.2 at:
> http://www.w3.org/TR/UAAG10/guidelines.html#tech-configure-multimedia
The accessibility chapter specifies that the UA should provide means
of disabling animations, audio, and video through multiple
modalities. We have modified our draft to make this point clearer.
> Examples in general use mouse-specific events. This should be
> upgraded in line with the intent-based event philosophy developed
> by the HTML WG.
Again, we would like to identify during CR which examples would
constitute good targets for improvement, and work with you to make
them less pointer-specific while retaining the same functionality.
Please note that where such events are discussed, they are pointer
events and not mouse events.
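For illustration, here is a minimal sketch of the direction we have
in mind (geometry and values invented): the animation below is
triggered by the device-independent 'activate' event (corresponding
to DOMActivate), which fires on keyboard activation as well as on
pointer clicks, rather than by 'click'.

  <rect x="10" y="10" width="80" height="30" fill="#ccc">
    <!-- 'activate' fires however the element is activated:
         pointer, keyboard, or any other input modality -->
    <animate attributeName="fill" to="#f00" dur="0.5s"
             begin="activate"/>
  </rect>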
> Examples use an artificial arbitrary namespace.
> While the point is there to be made that SVG
> does not limit the markup used here, the
> preponderance of examples should use realistic
> namespaces such as the mobile profile of XHTML.
Usage of arbitrary namespaces in SVG content is not only realistic
but common. Were we to use an example based on XHTML, we believe it
would give the impression that the markup is open only to W3C
specifications, whereas we intend to convey that authors not only
can, but should, use their own vocabularies when they need to.
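For instance, the following sketch attaches domain-specific metadata
in an author-defined vocabulary (the 'plant' namespace and its
attributes are invented for illustration):

  <g>
    <metadata>
      <!-- semantics expressed in the author's own vocabulary -->
      <plant:valve xmlns:plant="http://example.org/plant-schema"
                   state="open" pressure="2.4bar"/>
    </metadata>
    <circle cx="40" cy="40" r="20"/>
  </g>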
> <draft class="changeTo" >
> 1.6 shape. A graphics element that comprises a defined combination of
> straight lines and curves. Specifically any instance of the element
> types: 'path', 'rect', 'circle', 'ellipse', 'line', 'polyline', 'polygon'.
> </draft>
We have applied your suggested change.
> 1.6 User agents include graphical desktop browsers, multimedia
> players, text browsers, voice browsers; used alone or
> in conjunction with assistive technologies
> such as screen readers, screen magnifiers, speech synthesizers,
> onscreen keyboards, and voice input software [UAAG10].
We have applied this as well.
> 8.3 path data
> For semantic entities such as flowlines in flowcharts, how should
> user information such as 'title' and 'desc' be associated with
> complex paths?
> How would you suggest that we get authors to observe this practice?
It is not possible in SVG Tiny 1.2 to attach metadata to
subcomponents of a path, though this may become possible in future
versions of SVG. In the meantime we recommend separating the path
into smaller 'path' elements, each with its own metadata; the
resulting sub-paths can easily be made to render identically to the
original. We intend to add such considerations to our authoring
guidelines.
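As a sketch of that workaround (path data and labels invented), a
single flowline can be split into two sub-paths that render exactly
as the original did:

  <!-- instead of one <path d="M 10 50 L 100 50 L 190 50"> -->
  <path d="M 10 50 L 100 50">
    <title>Order received</title>
    <desc>Flowline from the Start node to the Validate step.</desc>
  </path>
  <path d="M 100 50 L 190 50">
    <title>Order validated</title>
    <desc>Flowline from the Validate step to the Ship step.</desc>
  </path>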
> It is not clear how a User Agent or Assistive Technology is to
> associate text that it finds with the structure of the scene
> in the graphic depiction.
>
> Diagrams often embed labels, but how is this semantic
> relationship (of the label text to the conceptual object
> labeled) to be recognized from the SVG representation?
The way to indicate that sort of semantic relationship is to put the
related items in the same group element (<g>), which can then contain
text and metadata. Again, we intend to place a discussion of this in
our authoring guidelines.
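For example (a sketch with invented diagram content), a labelled node
and its visible label are kept in the same group, along with the
metadata describing them both:

  <g>
    <title>Pump P-101</title>
    <desc>Feed pump supplying the main reactor.</desc>
    <rect x="10" y="10" width="80" height="40"/>
    <!-- the label lives in the same group as the shape it names -->
    <text x="50" y="35">P-101</text>
  </g>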
> How would authors use this notation to identify alternatives
> for audio and video objects (as you see this notation being used)?
The switch element and its usage are described in greater detail in
the chapter on structure. It can be used to identify a sequence of
alternative SVG fragments (which may in fact be completely different
in content), selected on the basis of features that indicate what
their content uses (e.g. video, audio, etc.).
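A sketch of such usage (file names and formats are illustrative, and
the xlink prefix is assumed to be bound to the XLink namespace): a
video is offered first, and a still image with a textual description
serves as the alternative for viewers that cannot play it.

  <switch>
    <!-- preferred alternative, used only if the format is supported -->
    <video xlink:href="intro.avi" requiredFormats="video/x-msvideo"
           x="0" y="0" width="320" height="240"/>
    <!-- fallback: a still image plus a textual description -->
    <g>
      <image xlink:href="intro.png" x="0" y="0"
             width="320" height="240"/>
      <text x="0" y="260">Introduction to the product.</text>
    </g>
  </switch>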
> The examples read as if all delivery contexts support pointer events.
> The provision of alternatives to pointer events should be illustrated
> in the examples.
We have expanded our text to indicate that pointers may not always be
available and that authors should use device-independent events.
However, since this is a section on how to control pointer events, we
have maintained the example.
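For context, this is what the device-independent form looks like (a
sketch; identifiers are invented). The XML Events binding fires on
DOMActivate, which a conforming viewer can dispatch from the keyboard
or any other input modality, not just from a pointer:

  <rect xml:id="ok-button" x="10" y="10" width="80" height="30"/>
  <handler xmlns:ev="http://www.w3.org/2001/xml-events"
           ev:observer="ok-button" ev:event="DOMActivate"
           type="application/ecmascript">
    // runs however the element was activated
  </handler>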
> To what extent does this specification support a two-step
> identification of user event handling, with an intent-based event
> defined first and then bound in an overridable way to a
> device-specific UI event?
SVG does not define its event handling differently from other
interaction vocabularies; it only has access to the facilities
defined in DOM 3 Events and XML Events.
> If an element is animated, is there a way to provide
> a static alternative other than the initial state of the
> animation?
We have added the snapshotTime attribute, which indicates the point
in the animation timeline at which a static snapshot of the document
should be taken. Also, the switch element may be used to provide
alternatives to animated content.
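A sketch of its use (the time and the animation are invented): on the
root element,

  <svg xmlns="http://www.w3.org/2000/svg" version="1.2"
       baseProfile="tiny" snapshotTime="2.5">
    <!-- a static renderer seeks the timeline to t=2.5s
         before producing its snapshot -->
    <rect y="10" width="40" height="40">
      <animate attributeName="x" from="0" to="100" dur="5s"/>
    </rect>
  </svg>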
> How can an AT perceive that a given animated
> whatever is in the state of ongoing animation?
Assuming the given assistive technology has access to the MicroDOM,
it can know that something is being animated by catching the SMIL
timing events (beginEvent, endEvent, repeatEvent), and it can access
the animated value through the trait system.
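A sketch of what that looks like (identifiers invented; the exact
getter for the animated, as opposed to base, value is as defined by
the uDOM's trait-access interface):

  <circle xml:id="blinker" cx="50" cy="50" r="10">
    <animate xml:id="anim" attributeName="r" from="10" to="20"
             dur="2s" repeatCount="indefinite"/>
  </circle>
  <handler xmlns:ev="http://www.w3.org/2001/xml-events"
           ev:observer="anim" ev:event="beginEvent"
           type="application/ecmascript">
    // fired when the animation starts; endEvent and repeatEvent
    // handlers would track the rest of its lifecycle
    var c = document.getElementById("blinker");
    // reads the 'r' trait; the trait interface is also the route
    // to the animated value
    var radius = c.getFloatTrait("r");
  </handler>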
Thank you deeply for your excellent comments. Please let us know
within two weeks if these responses do not satisfy you.
--
Robin Berjon
Senior Research Scientist
Expway, http://expway.com/