Your comments on WCAG 2.0 Public Working Draft of May, 2007 (2 of 2)

----------------------------------------------------------
Comment 8: GL 2.4 belongs in Principle 3
Source: http://lists.w3.org/Archives/Public/public-comments-wcag20/2007Jun/0145.html
(Issue ID: 2019)
----------------------------
Original Comment:
----------------------------

>Comment 37:
>
>Source: http://www.w3.org/mid/p06110403c0bf326d6713@[10.0.1.5]
>(Issue ID: LC-1206)
>
>Good; but there are a few things mis-filed
>
>Proposed Change:
>
>move 3.2.5 to a "what user can do" section containing most of the
>Principle 2 material.
>
>Move Guideline 2.4 to a "what the user needs to be able to understand"
>section containing most of what is presently Principle 3 material.
>
>----------------------------
>Response from Working Group:
>----------------------------
>
>SC 3.2.5 isn't a problem with inability to operate, but a problem with
>disorientation.

Let's go after this one with a parable.

Suppose the system motors the user through some navigation and
orients them to what it has done.  But the user didn't want to go
there, and can't go back.  Is this OK?  No.

So yes, there is a problem arising from overly aggressive system initiative,
but the main thing it affects is the user's control of navigation.  The fact
that they can get disoriented is neither a necessary consequence nor
the primary concern in this failure mode.

The orientation is needed so the user can control where they go;
it is not an end in itself.

Consider the game of 'statues.'  Sometimes disorientation is the
user's purpose in doing something.

>Regarding Guideline 2.4, these criteria do fit in both categories.
>However, the working group feels these criteria have more to do with
>operation than they do with understanding.

Bypass blocks is operability.

Page titles is orientation; it is understandability of "where am I?"

Logical focus order is understandability.  Content is presented
in an order that makes sense.

Link purpose is orientation; it is understanding "what can I do?"

2.4.5.  The meaning of 'locate' in this statement is undefined and
unclear.  It makes the SC unenforceably vague.

I think that you mean "multiple ways to find" or "multiple ways
to navigate to".

Find clearer language.

"labels descriptive" needs to be broken down to a more concrete
level.  But this is understanding, not operation.  It is understanding
"what can I do?"  Not being able to do it.

Guideline 2.4 treats Principle 2 as if it were "interactions are
manageable" not "objects/actions are operable."

Operable just means you *can do it.*  Not that you know when
to do it and when not to do it.  That's understanding.

2.4.7 through 2.4.9 are likewise all orientation, understanding of "where
am I?" -- not operability, not "can I invoke the relevant system action."

Al

---------------------------------------------
Response from Working Group:
---------------------------------------------

We can see the argument for why this guideline could be moved to
Principle 3, but we think there are also arguments for considering
orientation as critical to operation, and we think the disadvantages
of destabilizing the success criteria numbering at this point outweigh
the benefit of moving the guideline.

----------------------------------------------------------
Comment 9: associated text
Source: http://lists.w3.org/Archives/Public/public-comments-wcag20/2007Jun/0221.html
(Issue ID: 2063)
Status: VERIFIED NOT ACCEPTED
----------------------------
Original Comment:
----------------------------

Prior comments whose dispositions this message replies to:

LC-955
LC-956
LC-958
LC-959
LC-971

see also disposition of
LC-1208

The case tree in 1.1.1 as currently framed is too tortured.

http://www.w3.org/TR/2007/WD-WCAG20-20070517/Overview.html#text-equiv-all

We need to start with "associated text" not "equivalent text" and then we can
specialize, not exclude, as we walk the cases.

Define "associated text" as text with a machine-recognizable
association with the
non-text content (fragment or object).
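For example (an illustrative sketch, not language from the draft),
HTML already offers several such machine-recognizable associations:

   <!-- the alt attribute associates text with the image -->
   <img src="chart.png" alt="Quarterly sales, 2006-2007">

   <!-- label/for associates text with the form control -->
   <label for="qty">Quantity</label>
   <input type="text" id="qty" name="qty">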

We do better developing a case tree by looking at the answers to the question
"what can I do (with this [non-text] content fragment)?"

Unless the non-text content (fragment or object) is part of a larger
fragment that has associated text or a different-media alternative that
meets all requirements at that level (see response to LC-1208), the
following cases apply (a markup sketch follows them below):

Where do = experience (image of artwork, song, or other media object)
.. required text content identifies the object and AA content describes it

Where do = learn (picture or diagram or other item with articulable
information to convey)
.. required text content expresses the information that one could
learn from the non-text
content.

Where do = choose to browse or skip (section, including form or table)
.. associated text (heading) answers "what is _there_?"

Where do = go (hyperlink)
.. associated text answers "where will I go?"

Where do = other (controls and other widgets)
.. associated text answers "what can I do?"

Where the information that the text is to communicate is "no information,"
then associated text may be omitted, provided this "content free" condition
can be recognized by AT from the encoding of the non-text content fragment.

Note: examples include spacers [, etc. as listed in the current draft]

Where do = complete a required task
.. associated text or task-enabling alternate-media content affords
the opportunity
to reach the same task outcomes.
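The markup sketch promised above (my illustration; file names and
text are invented):

   <!-- do = experience: text identifies the object -->
   <img src="starry-night.jpg" alt="Painting: The Starry Night, van Gogh">

   <!-- do = learn: text expresses the information to be learned -->
   <img src="trend.png" alt="Sales rose 40% from 2005 to 2007">

   <!-- do = choose to browse or skip: heading answers "what is _there_?" -->
   <h2>Shipping options</h2>

   <!-- do = go: link text answers "where will I go?" -->
   <a href="checkout.html">Proceed to checkout</a>

   <!-- do = nothing ("no information"): the empty alt lets AT
        recognize the content-free condition from the encoding -->
   <img src="spacer.gif" alt="">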

Note that the "required task" concept allows a more general statement
of a rule that in the present draft is only represented by the "process"
discussion under conformance.  This is that if some user task is a required
prerequisite for receiving any services of the site, and the prerequisite
is not accessible (including the option of composite alternatives rather
than just atomic alternatives), then none of the content whose use is
dependent on completing the inaccessible task may be claimed to be accessible.
[yes please put in the "at the pertinent level" qualification as at present]

This includes the 'process' case, where by definition all subtasks in
the process are required, and also more general task graphs where
some subtask is an unavoidable prerequisite for an outcome or another
task.

[CAPTCHA is covered under this last. -- that can be a note.]

[do not give multimedia a free ride.]

[do not excuse content "that is not presented to users" -- PFWG
and UAWG, in discussions with format specifications (in particular the
resolution of the 'override' issue in SMIL) have taken the position that
authors cannot make the final determination as to whether content is
to be shown to users or not. This falls under the "author proposes,
user disposes" management protocol for show/hide profiling of content.
In other words, authors must treat all hidden content as conditional
content.  It doesn't have to be easy, but the user should have a way
to drill down to the point that it gets displayed.]

---------------------------------------------
Response from Working Group:
---------------------------------------------

The approach proposed is interesting but very hard to process.  We
spent much time trying to word this in simple language.  We do not
feel that the language you suggest above would be easy to follow for
many users.  We believe the "case"-like format with conditionals makes
it easier to parse and understand.

This approach also allows us to focus on the GOAL, which is equivalent
text, and present the rest as exceptions and other lesser 'alternate
texts' that are necessary but not as good.

Regarding the "that is not presented to users" phrase:  That was added
to cover non-text content that is never presented to any users, such
as underlying databases, code and other parts of the delivered content
that are not presented to users.

----------------------------------------------------------
Comment 10: LC-1209: perceivable vs understandable
Source: http://lists.w3.org/Archives/Public/public-comments-wcag20/2007Jun/0240.html
(Issue ID: 2072)
----------------------------
Original Comment:
----------------------------

>Source: http://www.w3.org/mid/p06110403c0bf326d6713@[10.0.1.5]
>(Issue ID: LC-1209)
>
>This document is trying to be universal in too many dimensions at
>once.  What results is impenetrable access-babble.
>
>Proposed Change:
>
>*executive summary*
>
>break down the cases addressed by assertions (guidelines, success
>criteria, and testable assertions in test methods) using
>sub-categories both of content and of presentation.
>
>- content: genre and task
>
>genre: distinguish narratives, grids, mash-ups, etc. (see details for more).
>
>task: navigating passive content, interacting with atomic controls,
>interacting with composite widgets, following multi-stage tasks,
>checkout from a business process, etc.
>
>- presentation: specialization of the delivery context, describable
>using terms such as those in the W3C Delivery Context Overview and
>the IMS Global Learning Consortium Accessibility Specification terms
>for user preferences.
>
>*details*
>
>Testable assertions should come in collections of test conditions that
>have the same content presented in multiple alternate look-and-feel
>adaptations.  Check multiple assertions in each of these renderings.
>Don't try to make things universal across presentation and testable at
>the same time where that is hard.  Allow yourself the ability to say
>"Must A under condition B and must C under condition D, etc."
>
>This is particularly applicable to the suitability of required text
>explanations.  It is possible, by controlling the exposure of the
>content to the tester through a prescribed dialog, to make all
>necessary judgements determinable by a lay person, not just an
>accessibility-knowledgeable individual.  We need to get there or go
>home.
>
>The second axis of particularization that has been missed and needs to
>be put to use is the classification of content by task and genre or
>structural texture.  The structure classes:
>
>- bag (collection of random-category items)
>
>- collection (collection of like-category items)
>
>- list (collection of items with an intrinsic semantic order --
>alphabetical does not count)
>
>- narrative (stream of text etc. that tells a coherent tale)
>
>- tree (collection of content with embedded smaller, tighter collections)
>
>- grid (two-dimensional array of content fragments)
>
>- graph (collection of articulable objects linked by significant
>relationships)
>
>.. these are structure classes that apply to regions in the content,
>and guide the applicability of information requirements -- each of
>these cases has its own proper short list of what needs to be in the
>"what user needs to be able to understand -- through user
>comprehensible media connected by machine-comprehensible
>associations."
>
>Likewise if we break out tasks such as:
>
>- managing navigation within the page
>
>- managing navigation around the site
>
>- interacting with an atomic control
>
>- interacting with a composite control (HTML forms and Windows Combo
>Boxes and Dialogs are examples of the latter).
>
>- money-free On Line Transaction Processing -- getting something to
>happen at or through the server site.
>
>- money-involving OLTP
>
>- security-sensitive OLTP
>
>We will have a much better handle on what the requirements are for
>the content.
>
>----------------------------
>Response from Working Group:
>----------------------------
>
>Thank you for your comment.  Much of what you propose here would
>require a complete restructuring of the document or changing it into a
>note.  Without a concrete restructuring proposal that shows how this
>would happen without creating other problems, we are not able to
>evaluate this clearly.  If there are specific changes that you could
>suggest particularly around your idea of providing additional
>information on scope or application of success criteria and techniques
>we would be very happy to consider them as we move forward on evolving
>the understanding and techniques documents.  We will also keep these
>comments in mind as we are working on these documents and trying to
>provide such information as we identify it ourselves.

I understand that a complete reflow is not something you can undertake
at this point.

In my disposition responses I have tried to isolate a few concrete steps
that you can take that will make the success criteria both clearer and easier
to implement and enforce.

Specifically:

a) don't use the same words in identifying what needs to be
perceivable and what needs to be understandable.

The things that have to be perceivable are the sensory artifacts
that need to be recognizable.

What needs to be understandable is rather what you would test for
recall of.

So talk about the medium and the message separately.  The information
needs to be understandable; don't use the same terms in the
perceivable or recognizable principle -- make that principle about the
presentation or rendering of the information.  This way you can make
it clear that you have moved to a new, more concrete level of viewing
the interface in the latter case.

My suggestion is just an example of how to get this across:

<quote
cite="http://lists.w3.org/Archives/Public/public-comments-wcag20/2007Jun/0145.html">

Information and operable objects must be presented to the user so as
to be perceivable.

</quote>

This is one small change in the direction of "address two interfaces
separately" that by itself will make the guidelines more approachable
and clearer in what people need to worry about.

b) use the answer to "what can I do?" as the discriminant to break out
different cases of what the text associated to a non-text content fragment
must communicate.  See the response dealing with SC 1.1.1 for details.

http://lists.w3.org/Archives/Public/public-comments-wcag20/2007Jun/0221.html

This allows you to state much clearer requirements for the associated
text than "equivalent except when ..." as in the present draft, and to
be more thorough in your coverage and positive in the expression at the
same time.

This is one easily doable way that you can use the 'genre' (as you
have) and 'task' axes to break out cases in a way that makes the
requirements both tighter and easier to understand.

---------------------------------------------
Response from Working Group:
---------------------------------------------

Regarding suggestion (a), we have used your language (from your other
comment) for the description of the principles in the Understanding
Document (they are no longer described in WCAG).

Regarding suggestion (b),  per our discussion via telephone, we have
tried to implement this as best we can while maintaining a focus on
text "equivalents" as much as possible with exceptions where that
cannot be done.

We recognize that there are issues that you have raised related to
processes and activities, rather than individual web pages. We do not
have a solution at this point and feel that they should be addressed
in future work beyond WCAG 2.0.

----------------------------------------------------------
Comment 11: LC-959: Perceivable principle
Source: http://lists.w3.org/Archives/Public/public-comments-wcag20/2007Jun/0241.html
(Issue ID: 2073)
----------------------------
Original Comment:
----------------------------

>Source: http://www.w3.org/mid/p06110403c0bf326d6713@[10.0.1.5]
>(Issue ID: LC-959)
>
>The non-text content must be implemented such that it can be ignored
>anyway, even if the text equivalent provides full equivalent
>facilitation.  You can't have the video frame-change events capturing
>the AT's attention, etc.  The requirement stated here applies all the
>time, not only for pure decoration.
>
>Proposed Change:
>
>Break out into separate requirement on the "as communicated"
>representation of the content, a.k.a. the "data on the wire."
>
>----------------------------
>Response from Working Group:
>----------------------------
>
>Although this is theoretically accurate, the assistive technology does
>not have settings to ignore all content. The current wording seems to
>best communicate the intent.

Please see the comment on 1.1.1. If you state specifically that the
information-free quality of "what the text should be equivalent to"
can be mechanically recognized by the User Agent from the encoding,
you make this requirement clearer for implementers than "implemented
such that it can be skipped." You are expecting the reader to read
your mind too much with the current language. Yes you mean what you
mean, but you haven't said what you mean.  Go back to stating that
this fact (the absence of information to equate to) is expressed by
the semantics of the encoding.
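
A sketch of what "expressed by the semantics of the encoding" can
look like in HTML (my illustration):

   <!-- the empty alt declares "no information here"; a User Agent
        can recognize and skip this mechanically, without guessing -->
   <img src="divider.gif" alt="">

   <!-- better still, purely decorative imagery can move out of the
        content entirely, into the style layer -->
   <div style="background-image: url(divider.gif)"> ... </div>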

>----------------------------------------------------------
>Comment 49:
>
>Source: http://www.w3.org/mid/p06110403c0bf326d6713@[10.0.1.5]
>(Issue ID: LC-1218)
>
>the concepts of 'content' and 'presentation' as used here are
>inadequate to explain the needs of users with disabilities even with
>today's technology.
>
>Content v. Presentation
>
>Since this is one of the holy cows of accessibility, and I at least
>feel that this explanation is fundamentally problematical, I think
>this is a fertile area for discussion.
>
>Hypothetical dialog:
>
>Joe:
>
>"rendering" refers to the rendered content, like an "artist's
>rendering" of a planned building. This includes everything, the glyphs
>for making writing evident (perceivable or recognizable) and all the
>CSS-added effects such as font face, size, bold, and color that the
>authors mean to imply by 'presentation.'
>
>Moe:
>
>No, the use of 'rendering' here doesn't mean the as-rendered result,
>it refers to the process of generating that result.
>
>What's wrong with this story?
>
>Preliminaries in terms of terminology
>Both "presentation" and "rendering" are used in plain English to refer
>to either the activity that transforms the content or the resulting
>representation of the content at the User Interface. This ambiguity
>could be resolved by more surgical word-smithing.
>
>more serious
>
>The transformation is entirely determined by the technology choices of
>the author. So if we say 'presentation' is defined by the function of
>this transformation, we are at the whim of the encoding used between
>server and User Agent, and we haven't really isolated a genuine
>rhetorical or semantic distinction.
>
>If we go with "the process that generates the final representation at
>the User Interface" we find that the division between content and
>presentation is determined by the author's choice of file format and
>the implied client-side transformation to the pixel plane or audio out
>channel.
>
>To make this clear, consider that if we define the difference between
>presentation and content by the rendering transform, then
>text-in-images is image content. At least the way PF has been
>approaching things, this is an erroneous model. Because the text in
>the image will be _recognized_ by the general user as being a sample
>of written language, or some sort of code using the writing system of
>a written language, that language or symbology defines a re-purposable
>baseline for adapting the look and feel of the user experience to get
>within the capture regime of the user's sensation, perception, and
>conception.
>
>* most serious:
>
>The distinction is not only not defined by this language in the
>document, it is not definable in any stable and fair way.
>
>Starting at the sensible user interface, we can articulate different
>deconstructions, or "content hypotheses" for what is presented to the
>user. In the case of textual content, that's not a big problem, the
>Unicode character set provides a good level of abstraction that
>
>a) supports client-side personalization of the rendered form of the
>writing, and b) is readily verifiable by the author as "yes, that's
>what I said."
>
>The problem is that a lot of the connotations that are communicated by
>arrangement and other articulable features in the scene presented to
>the user are much less readily articulated or validated by the author.
>And depending on the media-critical skill and habitual vocabulary of
>the knowledge engineer doing the deconstruction, you will get rather
>different information graphs as an articulation of the 'content' of
>the scene.
>
>Contemporary web page design is more poster design than essay
>composition. And it is more interactive than what was supported in
>HTML1 or HTML2. It is the design of a richly decorated and annotated
>button-board -- a collection of clickables which the site and author
>think would appeal to the visitor based on what they know from the
>dialog so far. But that doesn't guarantee a lot of relationship
>between the different fragments mashed together into the page. If the
>page does focus on one coherent story, the page will list higher in
>search results. But that hasn't taken over practice in the field yet.
>
>So the information that is implicit in the [complete, including
>glyphs, etc.] rendered form of the content is a mix of that which is
>readily articulable, and the author will readily recognize as
>articulable, and other information that at first seems ineffable until
>a skilled media critic analyzes it and presents an analysis to the
>author or designer; only then can they recognize that there are
>articulable properties about their stuff. If the analyst is working
>from an information model of properties and relationships that are
>known to survive re-purposing and reinforce usability in the
>re-purposed rendering, then this process of backing up appearances
>(including sonic appearances if included) with an articulable model
>will in fact enable better access, better usability in the adapted
>experience. And it will also improve usability on the whole in the
>un-adapted user experience, but more weakly. There will be fewer task
>failures for the nominal user if the model underpinnings are not
>provided than for the user who has to depend on an adapted user
>experience, an off-nominal look and feel.
>
>** so where do we go? how to do it right?
>
>My latest attempt at a summary is the presentation I did at the
>Plenary Day on 1 March.
>
>
>
>afford functional and usable adapted views
>         function: orientation --
>           Where am I?
>           What is there?
>           What can I do?
>         function: actuation --
>           from keyboard and from API
>           navigation: move "Where am I?"
>         performance: usable --
>           low task failure rate confirms access to action, orientation
>           reasonable task-completion time confirms structure,
>             orientation, navigation
>
>
>
>What gets into our 'content' model, where we ask authors to encode and
>share the answers to selected questions, is a modeling decision driven
>by what we *know* about the functional differences between PWD and
>nominal use of the User Interface.
>
>In other words, we know that the "current context" -- the user's gut
>understanding of the extent of the answer to "where am I?" -- that you
>can reliably keep in foreground memory on the fly in a Text-To-Speech
>readout of a page is a smaller neighborhood than what the visual user
>perceives as the context that they are operating in. This is why there
>has to be information that supports the system prompting the user
>about where they are and where they can go -- navigation assistance --
>*inside* the page, more consistently and at a finer grain than the
>full-vision user needs.
>
>The screen reader user without Braille has more local neighborhoods
>that have to be explainable in terms of "where am I?" and hence more
>levels of "where am I?" answers in the decomposition of the page. Most
>of those levels or groupings are evident in the visual presentation,
>but we have to coach the author in articulating, labeling, and
>encoding the industrial-strength explanation of page structure beyond
>what is enforced by better/worse differences in user experience under
>nominal use conditions.
>
>This effect is readily understood in the area of orienting for "what
>can I do?" This requires good labeling for hyperlinks, form controls,
>and other presented objects that invite user action. This is pretty
>well set out in both WCAG1 and WCAG2.
>
>The model of what information to ask for as regards intra-page
>structure is less well resolved in the community (there is less
>endemic agreement on what this information is). We have hopes for the
>"time to reach" metric analysis used in ADesigner (from IBM Japan) as
>a way to communicate to authors when and where they need to improve
>the markup of intrapage structure.
>
>The bottom line is that we should stop trying to generate
>disability-blind specifics at a level as low as the Success Criteria
>without getting more specific about known disability-specific
>adaptations in the look and feel of the web browse experience. Analyze
>the usability drivers in the adapted look-and-feel situations, and
>then harmonize the content model across multiple adaptations of look
>and feel including the null adaptation.
>
>[one principle I missed in the Tech Plenary presentation:]
>-- the first principle, that I did brief, is to:
>
>- enable enough personalization so that the user can achieve a
>functional user experience (function and performance as described at
>Plenary)
>
>-- the second principle that we have talked about with DI but did not
>discuss in the brief pitch at Plenary is to:
>
>- enable the user to achieve this with as little perturbation in the
>look and feel as possible; in other words, the equivalence between the
>adapted and unadapted user experiences should be recognizable at as
>low a level as possible. Using adaptations (equivalent facilitation)
>which require a high level of abstraction (task level) to recognize
>their equivalence is both necessary and OK if there is no less
>invasive way to afford a functional user experience. But most of the
>time there's an easier way, and while what is hard should be possible,
>what is easy should indeed be easy.
>
>Proposed Change:
>
>Apply the suggestions under "two interfaces" comments, to wit:
>Articulate requirements a) against the rendered content as it
>contributes directly to the user experience, and b) against the content as
>communicated between the server and the user agent -- the formal model
>created by the format and exposed by the document object that results
>from the "clean parse (see IBM suggestions there).
>
>Enumerate information requirements that have to be satisfied by the
>formal model in terms of questions that the content object must be
>able to answer in response to a predictable query (programmatically
>determined requirement).  Most of these are context-adapted versions
>of "Where am I?  What is _there_? and What can I do?"
>
>----------------------------
>Response from Working Group:
>----------------------------
>
>As with issue LC-1204, we believe that reorganizing the guidelines in
>this fashion would be much more difficult for people to understand and
>use than our current structure.
>
>----------------------------------------------------------

Being clear about what is a human factors requirement and what is a
computer science requirement will not require a wholesale rewrite. But
it will make the document much clearer for the most important audience
for this document: the people who produce web content.

I understand that nobody ever went broke underestimating the
intelligence of the public. But you are not trying to get rich, and
you don't need to communicate with the public. You are trying to
reform web practice so as to broaden access to Web-borne information
and services. To change web practices you need to communicate with
the people who develop web content. They have the technical
background to benefit from a discussion at a more technical level
than the current draft.

Some incremental and concrete suggestions to this effect have been
included above and in the discussion of the 'perceivable' principle.

http://lists.w3.org/Archives/Public/public-comments-wcag20/2007Jun/0145.html

---------------------------------------------
Response from Working Group:
---------------------------------------------

See response to Issue 2072

----------------------------------------------------------
Comment 12: LC-963: make equivalent facilitation a principle
Source: http://lists.w3.org/Archives/Public/public-comments-wcag20/2007Jul/0147.html
(Issue ID: 2349)
----------------------------
Original Comment:
----------------------------

Reference:
http://www.w3.org/TR/2007/WD-WCAG20-20070517/Overview.html#consistent-behavior-no-extreme-changes-context

>----------------------------------------------------------
>Comment 7:
>
>Source: http://www.w3.org/mid/p06110403c0bf326d6713@[10.0.1.5]
>(Issue ID: LC-963)
>
>This success criterion delivers less in the user experience than UAAG
>1.0, checkpoint 3.1. UAAG makes this subject to user profiling.
>
>Single-switch users, for example, rely on context changes that are
>animated by the system, not triggered one by one by the user.
>
>Low-vision users will come down on different sides of this question
>depending on how much of the content they can see at once and how much
>of the session structure they can hold in their head.
>
>Proposed Change:
>
>Best
>Make equivalent facilitation (now 4.2) a principle. Include user
>configuration of the user experience under one of the forms of
>alternative recognized. State user experience requirements separately;
>define these by reference to UAAG 1.0. State data-on-the-wire
>requrements separately.  These have two options:
>
>turnkey format -- player meets UAAG requirements directly.
>open format -- format populates machinable content model (c.f.
>rewritten 4.1) with information and actions that let UA manage and
>provide this capability.
>
>----------------------------
>Response from Working Group:
>----------------------------
>
>Determining equivalent facilitation at this granularity so that it is
>testable is beyond the scope of  WCAG 2. User agents and assistive
>technology may present alternative renderings of the content tailored
>for the user, but the author should present a base set of behaviors in
>which changes of context are initiated only by user request.
>
>----------------------------------------------------------

Disagree.

The user should have this behavior available.  The author should not
be asked to make it the default.  WCAG should defer to UAAG for this
affordance.

This rule is as backward-looking today as was the request to make
all web pages work with scripts turned off in 1999.  Let's not make
that sort of mistake again.

The user's requirement is that the system behavior be something that
the user would expect.  There will be lots of automated tours emerging
as the Web and TV experiences merge.  Expecting a pure-pull navigation
"balance of mixed initiative" as the authored behavior is not reasonable
in this emerging Web.

Al

---------------------------------------------
Response from Working Group:
---------------------------------------------

We have added an exception to SC 3.2.5, allowing automatic changes of
context when user preferences allow.

----------------------------------------------------------
Comment 13: LC-974: need a stronger requirement than SC 2.4.3
Source: http://lists.w3.org/Archives/Public/public-comments-wcag20/2007Jul/0148.html
(Issue ID: 2350)
----------------------------
Original Comment:
----------------------------

>Comment 14:
>
>Source: http://www.w3.org/mid/p06110403c0bf326d6713@[10.0.1.5]
>(Issue ID: LC-974)
>
>Navigation is a User Agent function enabled by structure in the
>content.  This is not a content guideline, but a user experience
>requirement handled in the User Agent.
>
>Proposed Change:
>
>Replace with a narrower provision that says "navigation paths defined
>in the content encoding correspond to the logical order information
>which is the subject of 1.3.3"
>
>----------------------------
>Response from Working Group:
>----------------------------
>
>Thank you for bringing this to our attention. We have added the
>following definition for the term "sequentially navigated":
>
>navigated sequentially
>   navigated in the order defined for advancing focus from one element
>to the next with the keyboard.
>
>We have also added the following explanation to the Intent Section
>of SC 2.4.6:
>
>The way that sequential navigation order is determined in Web content
>is defined by the technology of the content. For example, simple HTML
>defines sequential navigation via the notion of tabbing order. Dynamic
>HTML may modify the navigation sequence using scripting along with the
>addition of a tabindex attribute to allow focus to additional
>elements. In this case, the navigation should follow relationships and
>sequences in the content. If no scripting or tabindex attributes are
>used, the navigation order is the order that components appear in the
>content stream. (See HTML 4.01 Specification, section 17.11, "Giving
>focus to an element").


This is better, but the problem is that we need a stronger requirement.

The content doesn't necessarily define a linear tour for the focus.

http://lists.w3.org/Archives/Member/w3c-html-cg/2006JanMar/0115

Even if the content doesn't define a sequential tour, we need a
covering tour, a tour that gets you around to the vicinity of all the
content. This may contain branches where the user has to choose.

There needs to be a tree of chunks of content, labeled suitably for
orienting the user to "what is there."

The content should further be asked to define a tour of labeled
starting points such that all content is something the user would
expect from the labels at those starting points.

This is not necessarily sequential; it may be piecewise sequential with
branchpoints that function as menus, and also a hierarchical
structure of sections useful in explaining "what is there."

The order and nesting may be implicit if it is the textual order in
the transmitted file format and the labeling and nesting are as
defined in the specification of that format.

Where possible, this implicit ordering is preferred.  But yes, it must
make sense in that order.
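
A sketch of such a piecewise-sequential tour in plain HTML (my
illustration; the section names are invented).  The textual order
supplies the implicit tour, the headings label the chunks, and the
list of links is a branchpoint that functions as a menu:

   <h1>Products</h1>                     <!-- root: "where am I?" -->
   <ul>                                  <!-- branchpoint / menu -->
     <li><a href="#hw">Hardware</a></li>
     <li><a href="#sw">Software</a></li>
   </ul>
   <h2 id="hw">Hardware</h2>             <!-- labeled starting point -->
   <p> ... sequential content ... </p>
   <h2 id="sw">Software</h2>             <!-- labeled starting point -->
   <p> ... sequential content ... </p>

No tabindex is needed; the order and nesting are implicit in the
textual order, as preferred above.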

Al

---------------------------------------------
Response from Working Group:
---------------------------------------------

We feel that these issues are already covered by other success
criteria or are too prescriptive for all content.

1.3.2 ensures that there is a programmatically determinable reading sequence.
1.3.2 Meaningful Sequence: When the sequence in which content is
presented affects its meaning, a correct reading sequence can be
programmatically determined. (Level A)

SC 1.3.1 requires that the structure be programmatically determined.
Although we agree that such structure is desirable, WCAG does not
require that the content be structured into the kinds of chunks that
you are describing.

SC 2.2.1 ensures that all functionality is available from the
keyboard. This does not necessarily mean that all interactive
components are included in a covering tour. However, if there are
interactive components that can be reached neither by a tour nor by
some other keyboard mechanism, the content would fail.

But 1.3.2 should address the problem you cite for information that can
be linear.

If you have content which is essentially a complex web (not linear or
hierarchical), then no readers are able to find their way around and
ensure that they have visited all the nodes (unless the author
provides a tour).  Since this would be a problem for all users, it
would not be listed as an accessibility issue.
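
A minimal illustration of the 1.3.2 point (my example, not the
working group's):

   <!-- the source order is the programmatically determinable reading
        sequence, whatever the visual layout shows -->
   <div style="float: right">Step 2: confirm your address</div>
   <div>Step 1: enter your address</div>
   <!-- visually the steps read 1 then 2, but the sequence that can
        be programmatically determined is 2 then 1: a 1.3.2 failure -->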

Received on Sunday, 4 November 2007 04:04:09 UTC