Re: Proposals for revision of the Plain Language SC proposals for WCAG 2.1

(Apologies in advance for a long response)


Hi Michael,

Thanks for this. I agree with pretty much all that you have written here.
Some expanded thoughts:

> WCAG 2.0 Success Criteria are written as testable criteria for
objectively determining if content satisfies the Success Criteria.


This is huge. The company I work for (Deque Systems) employs over 50 Subject
Matter Experts on 2 continents, and a large part of what many of us do is
site and page evaluations for our clients. Of course Deque is interested in
making as many SC machine-testable as possible (see our open-source aXe
engine and free browser plugins - http://www.deque.com/products/axe/).
However, we also recognize the critical importance of human assessment and
evaluation in the larger process, and we have numerous internal checks and
balances (as well as a specific methodology) that we apply to our testing
process in an effort to ensure that, even if we have 4 or 5 evaluators
processing a large corpus of web pages for one client, we arrive at
consistent and accurate reports for each page we evaluate, and across the
collection of pages.
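
As an aside, to make "machine testable" concrete: the sketch below
(TypeScript, purely illustrative) shows roughly what running the
open-source axe-core engine against a page looks like. It assumes only the
public axe.run() browser API; the declared type is a simplified stand-in
for the real typings, not Deque's actual code.

    // Minimal hypothetical sketch: assumes axe.min.js has already been
    // injected into the page (e.g. by the browser extension or a test
    // harness). The declaration below is a simplified stand-in.
    declare const axe: {
      run(context?: unknown): Promise<{
        violations: Array<{ id: string; impact: string | null; nodes: unknown[] }>;
      }>;
    };

    axe.run(document).then((results) => {
      // Each violation maps to a discrete, objectively testable rule - the
      // kind of check that repeats identically across thousands of pages.
      for (const v of results.violations) {
        console.log(`${v.id} (${v.impact ?? "n/a"}): ${v.nodes.length} instance(s)`);
      }
    });

Human evaluation then layers judgement on top of that repeatable baseline -
which is exactly why vaguely-worded exceptions become so costly at scale.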

Proposed Success Criteria that have as many exceptions as they do
'requirements' are going to prove difficult to evaluate at scale,
irrespective of the originating source (COGA, LV, Mobile, EPUB, other),
especially when the exceptions are vague or hard to determine.

For example (and not to pick on a specific SC), look at the following
exception for #30 Plain Language (Minimum):

   "When a passive voice or a tense (other than present tense) is clearer.
   Other voices or tenses may be used when it has been shown, via user
   testing, to be easier to understand, friendlier, or appropriate."


   1. *"When a passive voice or a tense (other than present tense) is
   clearer."*

      (Clearer to whom? How can this be measured? What counts as "clearer"?
   I'm not trying to be a jerk, that is an honest question - see the sketch
   just after this list.)

   2. *"Other voices or tenses may be used when it has been shown, via user
   testing, ..."*

      (User testing by whom? If I have one "user" test the editorial
   content, and they assert that a different voice or tense is "better" (or
   "clearer"), does that meet the bar? Or does it have to be a minimum of 5
   testers, or 10, or 8 out of 10, or...? Does the user testing need to be
   applied across all web pages on a site, or will a representative sampling
   suffice? Some of our clients - think universities or large financial
   institutions as 2 examples - will have thousands of unique pages: does
   each one need to be "user-tested" to meet this proposed "A" Success
   Criterion? Has the Task Force or this Working Group contemplated the
   additional non-trivial cost that this may have on evaluation and
   compliance?)

   3. *"...to be easier to understand, friendlier, or appropriate."*

      (Who determines "appropriate"? What is "friendlier"? Some members of
   this Working Group have called me "evil", yet I like to think that I am
   actually a pretty friendly guy (and I think that others on this WG who
   know me would back that up - I hope!), so these are extremely subjective
   terms to apply to a Success Criterion that is supposed to be *testable
   and repeatable*.)
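
To illustrate the measurability gap (the sketch referenced in item 1
above): a readability formula - Flesch-Kincaid grade level here, used
purely as an example and not as an endorsement of any particular formula -
is mechanically computable and repeatable, whereas "clearer", "friendlier"
and "appropriate" have no agreed-upon computation at all. A hypothetical
TypeScript sketch:

    // Flesch-Kincaid grade level, shown only to contrast "computable" with
    // "subjective". Counting words, sentences and syllables is left to the
    // caller (and is itself automatable).
    function fleschKincaidGrade(words: number, sentences: number, syllables: number): number {
      return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59;
    }

    // Two evaluators running this on the same text always get the same number.
    // There is no equivalent function for "clearer" or "friendlier", which is
    // why two evaluators can legitimately disagree about this exception.

Whatever one thinks of readability formulas (their limits are discussed
further down this thread), they at least have that repeatability property;
the exception's terms do not.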


Again Mike, I am not trying to be negative here; I am trying to be honest
and pragmatic: please remember that I am on *our* side.


> I think we may have a few instances where the technology neutrality is
being questioned. It is also seen as a problem when we hypothesise
techniques that rely on new untried or predicted technologies as our
primary way to assure sceptical people that the SC can be met.


Yes, I am struggling with this at times as well. Since this thread was
about Plain Language in the first place, I will continue with that example,
but only to illustrate the concern and NOT to criticize the extensive work
already invested in this proposed SC.

Under the Testing section for #30, it states:

   "However, it is expected there will be a natural language processing
   testing tool by the time this goes to CR. (It is already integrated into
   a tool by IBM.)"


   1. Basing testing upon an anticipated future tool is simply not pragmatic
   or scalable. I could say that "*it is expected that self-driving cars will
   replace human-driven taxis within my lifetime*" and I likely wouldn't be
   wrong, but I still haven't provided any actual 'data' - it is a feel-good
   statement that is not actionable.

   We had a similar problem with WCAG 1.0 and the "Until user agents..."
   clause, and it is my recollection that this was a point of serious
   discussion during the WCAG 2.0 process: that clause was removed in 2.0.

   2. IBM has done some very exciting work in the area of cognition and
   A.I. (Artificial Intelligence = Watson), and having played with IBM's
   Content Clarifier (http://contentclarifier.mybluemix.net/#/landing) I
   think it is even more exciting than self-driving cars :-)

   BUT... even IBM admits this is still a "Demo", and, more critically, it
   appears to be a licensed software tool (the only one I am currently aware
   of - but please speak up if there are others). Proposing a solution that
   relies on one vendor's tool will receive (justifiably) significant
   push-back from many (if not most) of the W3C membership, as it appears to
   favor one vendor over others. So, early on and from an editorial
   perspective, these types of statements, while useful as background
   information, should not be part of a neutral testing process, whether in
   this SC or others.



Finally, you wrote:

> Probably the majority of our problems revolve around testability.
Although Understanding WCAG talks of using accessibility experts and
involving users with disabilities in the testing, these are not required. I
believe that all of those objecting to many of our SCs are very involved in
and aware of *the current reality* where it is assumed that conformance to
WCAG will be done by non-experts using either automated test tools or by
making judgements that require no expert knowledge and no heavyweight
processes like user testing.​


I think that this is a very accurate summary of the problem we are facing
today.

<Rant>
I don't really enjoy being the guy with the bucket of cold water, but
having been inside the W3C process for close to a decade, I like to think I
have a good handle on what will fly and what will meet resistance.

The over-arching issue (to me) is one of scale: I think asking for, and
getting, many of these proposed SC into a small, special-use site is
extremely achievable, but for large corporations that are attempting to
meet WCAG with sincerity (and in the early days WCAG 2.1 compliance will be
strictly voluntary, unless legislation responds quickly or a regional law
is already written to reference "the most recent" version), the effort and
resources required to achieve success are far greater, and we must be
mindful of that. If our response to them is "*well, you need to become
experts*", I can only imagine what kind of responses we will see back. That
might be sad, or frustrating, or disheartening (or all of the above), but
it is also a reality we need to address.

I note as well that, in at least the USA, there was an "Undue Burden"
clause in Section 508 that could be used as a "Get Out Of Jail Free" card:

   *"When acquiring a product, if an agency determines that compliance with
   any provision of this part imposes an undue burden, the documentation by
   the agency supporting the acquisition shall explain why, and to what
   extent, compliance with each such provision creates an undue burden."*
   (https://section508.gov/content/quick-reference-guide)


If our new SC are seen as too onerous to achieve, I can anticipate lots of
"Undue Burden" responses to this proposed "A" requirement, which benefits
no one. (And lest some think that won't happen, I'll point you to
https://www.tbs-sct.gc.ca/pol/doc-eng.aspx?id=23601 - see Appendix B: List
of exceptions.)


My largest concern today, however, is over "tone" inside the Working
Group. I've already noted how I've been called "evil" by at least one
member of the larger WG, and I have endured a number of personal attacks
simply because I've had the temerity to stand up and say "Uhm, emperor, no
clothes". If, internally, we are already eating our own, I fear greatly for
how things will proceed when the wider review that we know must happen
begins. W3C Process depends on Consensus (which is not the same as
unanimity, BTW), and the first step in Consensus is *active listening*.

I implore members of this, and *all*, Task Forces to listen to the
feedback, and to work *with* those who are critiquing these draft SC, so
that together we can produce SC that will prove their mettle, and so that
when we receive the hard questions that the wider review will elicit (from
non-Subject-Matter-Experts), we are ready with solid and provable
responses. Internal in-fighting at this stage is counter-productive.

Equally, TF members need to steel themselves to the fact that they will
likely not get everything they want in WCAG 2.1, for a variety of reasons.
This does not mean that the issues and User Requirements brought forward,
and the tons of valuable research that this TF has undertaken, are not
legitimate or valid - only that some of the requirements today may not be
achievable at scale, and that is a reality we must endure as well.

That sucks, but life isn't perfect.
</Rant>

JF



On Fri, Feb 10, 2017 at 5:01 AM, Michael Pluke <
Mike.Pluke@castle-consult.com> wrote:

> Hi Steve
>
>
>
> Whereas there is no such guide, and it would probably be a major challenge
> to write, I think that many of the issues that we are meeting can be
> predicted when we compare what we have with the following extract from the
> “Success Criteria” section of “Understanding WCAG”:
>
>
>
> “ Each Success Criterion is written as a statement that will be either
> true or false when specific Web content is tested against it. The Success
> Criteria are written to be technology neutral.
>
>
>
> All WCAG 2.0 Success Criteria are written as testable criteria for
> objectively determining if content satisfies the Success Criteria. While
> some of the testing can be automated using software evaluation programs,
> others require human testers for part or all of the test.”
>
>
>
> I think that there are a few of our SCs where, because of the many
> elements in them and because of some of the concepts in the wording, it is
> difficult for someone to be really certain whether the result is true or
> false when testing. The majority of WCAG 2.0 SCs are quite short and often
> contain only one clear concept. Those that are longer and have multiple
> bullets are, according to what I’ve heard, those that took an enormous
> amount of debate and re-writing before they were agreed.
>
>
>
> I think we may have a few instances where the technology neutrality is
> being questioned. It is also seen as a problem when we hypothesise
> techniques that rely on new untried or predicted technologies as our
> primary way to assure sceptical people that the SC can be met.
>
>
>
> Probably the majority of our problems revolve around testability. Although
> Understanding WCAG talks of using accessibility experts and involving users
> with disabilities in the testing, these are not required. I believe that
> all of those objecting to many of our SCs are very involved in and aware of *the
> current reality* where it is assumed that conformance to WCAG will be
> done by non-experts using either automated test tools or by making
> judgements that require no expert knowledge and no heavyweight processes
> like user testing.
>
>
>
> I think that this last issue is the one the really makes things extremely
> hard in relation to most COGA proposals. When we talk about cognitive
> issues it is all about what people may understand (clearly or at all) and
> whether tasks are too complex for them to perform, etc. None of these
> things currently lend themselves to generally available automated testing
> (and even the clever language understanding/summarising tools are probably
> not really up to providing definitive assurances that people will or will
> not be able to understand something). It is also clear that one or a few
> non-expert testers are not really going to be able to judge what is
> understandable to people with a wide range of cognitive and learning
> disabilities.
>
>
>
> All of the above is horribly depressing, but I still think that we have
> the prospect of getting a few SCs through. That will be a start in what I
> think is going to be a very long journey to really ensure that people with
> cognitive and learning disabilities are much more comfortable and effective
> when using the Web.
>
>
>
> Best regards
>
>
>
> Mike
>
>
>
> *From:* Steve Lee [mailto:steve@opendirective.com]
> *Sent:* 10 February 2017 10:22
> *To:* John Foliot <john.foliot@deque.com>
> *Cc:* lisa.seeman <lisa.seeman@zoho.com>; EA Draffan <ead@ecs.soton.ac.uk>;
> Milliken, Neil <neil.milliken@atos.net>; Thaddeus . <
> inclusivethinking@gmail.com>; public-cognitive-a11y-tf <
> public-cognitive-a11y-tf@w3.org>; Jeanne Spellman <
> jspellman@spellmanconsulting.com>
>
> *Subject:* Re: Proposals for revision of the Plain Language SC proposals
> for WCAG 2.1
>
>
>
> > only that the proposal as writ right now will have a hard time passing
> the wide review that FPWD brings, and if we cannot answer the types of
> questions I am asking now, in our more closed environment, then this SC
> will likely not make the final cut, sad as that is
>
> That makes me think what we are missing is a "guide to how to write SCs
> that are accepted".
>
> The regulars on the WCAG list have a lot of implicit knowledge and
> experience of the politics and practicabilities of the process that we
> don't all share. It seems like it could be a steep learning curve and,
> combined with the current process, it is slowing us down from getting
> effective SCs out.
>
> Could a workshop or guide of some sort be arranged to help get us up to
> speed on these sort of issues?
>
> How about something at CSUN with remote access?
>
>
> Steve Lee
> OpenDirective http://opendirective.com
>
>
>
> On 9 February 2017 at 23:14, John Foliot <john.foliot@deque.com> wrote:
>
> Hi Steve,
>
>
>
> From my perspective, do not be confused by low levels of discussion on any
> single new SC - we are all struggling to keep up with the flurry of
> correspondence at this time.
>
>
>
> The latest PR for this new SC is simply the latest PR - it in no way means
> that the SC is "finalized" - only that it is now going to the larger WCAG
> WG for more review before it is "baked" into the 2.1 FPWD. (Note that the
> full Working Group is not copied on this email, only the COGA TF)
>
>
>
> I have a number of concerns with how this is emerging right now, including
> some centered around internationalization (for example, my early research
> shows that the use of the Passive Voice is not only common, but often
> "required" in the Japanese language, and insisting on a non-passive voice
> in that language may actually introduce *MORE* confusion for Japanese with
> learning disabilities. Surely we don't want that!)
>
>
>
> Additionally, I personally believe that statements such as "*It is
> expected that natural language processing algorithms will be able to
> conform to this automatically with reasonable accuracy.*" (Future tense)
> means that we do not have this ability today - but I am not sure, do such
> tools exist today? (Later, the draft suggests that IBM has "a tool" that
> can perform this today, but dependency on a single tool for testing is
> problematic, especially if it is a "for-profit" tool. Additionally, does
> that tool also support multiple languages? My colleague Birkir
> Gunnarsson is Icelandic - does the tool support his mother tongue as well?)
>
>
>
> NOTE - I am not for an instant suggesting that the spirit of this SC, or
> the Needs Statement that is driving it, are not valid, only that the
> proposal as writ right now will have a hard time passing the wide review
> that FPWD brings, and if we cannot answer the types of questions I am
> asking now, in our more closed environment, then this SC will likely not
> make the final cut, sad as that is.
>
>
>
> So let's get it rock-solid now, ya?
>
>
>
> JF
>
>
>
> On Thu, Feb 9, 2017 at 3:14 PM, Steve Lee <steve@opendirective.com> wrote:
>
> Yes, my bad. I forgot where I was in the process of managing these 2.
>
> The reason for my reticence was the very low level of discussion. These
> were my 1st as a SC manager and I really expected more push and shove. I
> guess that means they are good.
>
>
>
> Sorry again for the confusion due to being new to the process.
>
>
> Steve Lee
> OpenDirective http://opendirective.com
>
>
>
> On 9 February 2017 at 21:01, lisa.seeman <lisa.seeman@zoho.com> wrote:
>
> The pull request was done before Jeanne made her suggestions so it is
> really too late. The issue is closed.
>
> My 2 cents - the Success Criterion was pretty clear, measurable and
> testable - more than a lot of what is in WCAG 2.0.
>
>
>
> All the best
>
> Lisa Seeman
>
> LinkedIn <http://il.linkedin.com/in/lisaseeman/>, Twitter
> <https://twitter.com/SeemanLisa>
>
>
>
>
> ---- On Thu, 09 Feb 2017 20:46:03 +0200 *John Foliot
> <john.foliot@deque.com>* wrote ----
>
> Hi EA,
>
>
>
> Thanks. I don't see this as "causing trouble" - I see this as having an
> open, honest and candid discussion. We need to balance the needs of many
> disparate groups, including content authors who are not experts (and never
> will be). I've tried very hard to stay on top of the COGA requirements, and
> one of the larger take-away's I've learned is that individual
> personalization is and will be the Holy Grail for COGA issues.
>
>
>
> But we simply aren't there yet, not at anything that would scale, and I
> think we do ourselves a dis-service if we don't accept that truism today.
>
>
>
> Re: Innovation - I fully support that 100% - YES. We have a number of
> user-needs today, however the technology still isn't mature enough to start
> mandating that site-owners do "X, Y, Z", and frankly I think that if we
> ever got to the point where WCAG became that prescriptive we'd lose more
> ground than we've gained.
>
>
>
> This is one of the reasons why I suggested that for the release of 2.1,
> any User Requirement that was still unattainable at scale be none-the-less
> published as an official W3C Note, as we did with the MAUR (
> https://www.w3.org/TR/media-accessibility-reqs/) - not everything in that
> list is achievable today, but the needs still exist, and what the
> 'expectations' are have been collected and published. To my happy
> discovery, there are now technologists out there taking these Requirements
> and then working on Proof Of Concept solutions. This has to be a positive
> thing!
>
>
>
> I sort of think of it like American Football - not every play is going to
> score a touch-down, but if we are successful in moving the ball closer to
> the goal line, we're still "winning". WCAG 2.0 had little-to-nothing to
> address the needs of the core constituency of the COGA and LV Task Forces
> when it was published in 2008, and we've done a good job collecting the
> User Requirements (Gap Analysis), but I also think we've got plenty more
> plays ahead of us before we score touch-downs there. But if, with 2.1, we
> move the ball forward closer towards the goal-posts, I think we're doing
> well - the goal now isn't "the touch-down" but rather "How many yards can
> we advance forward with this play?"
>
>
>
> For me, it keeps on coming down to "Don't let Perfect be the enemy of
> Good".
>
>
>
> Cheers!
>
>
>
> JF
>
>
>
> On Thu, Feb 9, 2017 at 10:47 AM, EA Draffan <ead@ecs.soton.ac.uk> wrote:
>
> Thank you for all the trouble you have taken John,  and I certainly did
> not expect such an amazing reply this was just me researching it all a bit
> more.
>
> Apologies for causing trouble.  Lets just see if we can find a better way
> to test readability to suit all users.  Perhaps we can be a bit more
> innovative as Lisa suggested, but I appreciate we will have to make it
> robust and go through validation tests - thoughts of crowdsourcing help
> across different languages etc.
>
> Best wishes
> E.A.
>
> Mrs E.A. Draffan
> WAIS, ECS , University of Southampton
> Mobile +44 (0)7976 289103
> http://access.ecs.soton.ac.uk
> UK AAATE rep http://www.aaate.net/
>
>
> ________________________________
> From: John Foliot [john.foliot@deque.com]
> Sent: 09 February 2017 16:18
> To: EA Draffan
> Cc: Milliken, Neil; lisa.seeman; Thaddeus .; public-cognitive-a11y-tf;
> Jeanne Spellman
> Subject: Re: Proposals for revision of the Plain Language SC proposals for
> WCAG 2.1
>
> TL;DR:
>    WCAG Success Criteria need to be measurable, and while Reading Scores
> have their issues, they are at least measurable and repeatable, and will be
> significantly more palatable to the millions of content authors we will be
> asking to meet this Need.
>
> ***
>
> Hi EA,
>
> Thanks for those links. After reading through them (and yes, I read all
> 3), I am struck by one of the conclusion statements of the third reference (
> https://www.ideals.illinois.edu/bitstream/handle/2142/
> 15490/why-rf-fail.html?sequence=3)
>
> "The real factors that affect readability are elements such as the
> background knowledge of the reader relative to the knowledge presumed by
> the writer, the purpose of the reader relative to the purpose of the
> writer, and the purpose of the person who is presenting the text to the
> reader. These factors cannot be captured in a simple formula and ignoring
> them may do more harm than good."
>
> While we cannot discount this expert opinion
> ​, it also leaves me wondering how we can ever hope to "standardize" and
> quantify/measure something that is clearly not scientific​? Dissecting
> the statement above:
>
>   1.  background knowledge of the reader relative to the knowledge
> presumed by the writer - unknown and unknowable at scale (i.e. sites that
> get hundreds of thousands of unique visits a day)
>
>   2.  the purpose of the reader relative to the purpose of the writer -
> again, unknown and unknowable at scale
>
>   3.  the purpose of the person who is presenting the text to the reader -
> this is the only factor apparently under the control of the content author,
> and in scope for the Web Content Accessibility Guidelines, and thus the
> only thing a WCAG SC can address.
>
> ​
> My fear here is that there seems to be 2 opposing goals that we are trying
> to meet: one is a "testable" and measurable *standard* that can be taught
> and applied​ to millions of websites (the science piece), and yet
> "writing" and writing for specific audiences is an "art" (my distillation
> and take-away of those three articles).
>
> I get "art", and art is important, but art cannot be quantifiably
> measured, it cannot be "taught" (outside of principles - the science of
> painting with oils versus drawing with charcoals), but actual "art"
> certainly cannot be standardized or measured (unless you are shopping at
> Walmart, and purchase "Pastoral Scene #3 - 40" X 60"")
>
> What do I tell a Fortune 500 company they should do, if not try and meet
> some kind of standardized reading level? When you are authoring content for
> a million people, you cannot know all of your readers. I was more
> encouraged by one of the conclusions of the Leeds paper (
> http://www.leeds.ac.uk/educol/documents/213296.pdf
> ​)​
>
>
> "In conclusion, we want to emphasize that formulas are not invalidated for
> the great majority of writing. On the other hand, what they cannot measure
> should make clear that they cannot make writing a science."
>
>
>
> So... what can we do?
>
> In controlled environments, you may be able to ensure more attention is
> applied to the "art" side of the problem statement, but for a company like
> Tesco, what would you tell Tesco's editorial staff (where there is more
> than one editorial person) to do? Tesco proudly claim to serve "...millions
> of customers a week in our stores and online." (https://www.tescoplc.com/
> about-us/our-businesses/), and so all they can "know" about their
> audience is generalized data (likely determined by user-logs on their
> website, coupled with possible surveys and focus-group testing).
>
> Large organizations like this also generally use Style Guides (AP, The
> Oxford Style Manual, The Chicago Manual of Style, etc. See:
> https://en.wikipedia.org/wiki/List_of_style_guides) as well as often they
> will have internal "Voice of the company/Voice of the client" guides as
> well (when I worked at JPMC they had such an internal document).
>
> However, outside of specialized environments, getting any kind of buy-in
> from the millions of content creators out there will necessitate some form
> of measuring methodology, and while reading scores have their issues, they
> seem to be better than nothing at all, and so I am concerned that COGA
> experts are pushing back on this. I will posit that Jeanne's re-writes,
> while not 100% "perfect", brings the authoring solution a lot closer to
> what is required based upon the research provided.
>
> Add to that the increasingly litigious environment around web
> accessibility, and ask yourself how a judge (who is neither an
> accessibility expert nor a language expert) is going to judge whether a site
> "fails" or not? (For this reason alone we need standardized testing of some
> fashion or other, and if not readability scores, then what?)
>
> JF
>
> On Tue, Feb 7, 2017 at 8:31 AM, EA Draffan <ead@ecs.soton.ac.uk> wrote:
> I vote 3
>
> Holiday reading or references!
>
> Readability: The limitations of an approach through formulae (this paper
> has a definition of readability)
> http://www.leeds.ac.uk/educol/documents/213296.pdf
>
> Another very readable discussion about readability and the limitations of
> scales,  but also measuring sentence length by number of words etc.
> http://www.impact-information.com/impactinfo/Limitations.pdf
>
> old one
>  https://www.ideals.illinois.edu/bitstream/handle/2142/
> 15490/why-rf-fail.html?sequence=3
>
>
>
> Best wishes
> E.A.
>
> Mrs E.A. Draffan
> WAIS, ECS , University of Southampton
> Mobile +44 (0)7976 289103
> http://access.ecs.soton.ac.uk
> UK AAATE rep http://www.aaate.net/
>
>
> ________________________________
> From: Milliken, Neil [neil.milliken@atos.net]
> Sent: 06 February 2017 23:13
> To: lisa.seeman
> Cc: Thaddeus .; public-cognitive-a11y-tf; Jeanne Spellman
> Subject: Re: Proposals for revision of the Plain Language SC proposals for
> WCAG 2.1
>
> I vote 3
>
>
> Kind regards,
>
> Neil Milliken
> Head of Accessibility & Digital Inclusion
> Atos
> M: 07812325386
> E: Neil.Milliken@atos.net
> http://atos.net/iux
> http://atos.net/accessibilityservices
> @neilmilliken
>
>
>
> On 6 Feb 2017, at 22:35, lisa.seeman <lisa.seeman@zoho.com> wrote:
>
> I am changing my vote to 3 as well.
> The SC as it is, is incredibly easy to write testing tools for. There are a
> few open-source language processing tools that you can use to count clauses
> accurately. Testing against a word list is also something that already
> exists in restricted-language tools and is very easy to program. It can't
> be that we need to have a worse SC and use archaic reading-level tools
> because WCAG is too set in its ways to accept any new technology.
>
> All the best
>
> Lisa Seeman
>
> LinkedIn<http://il.linkedin.com/in/lisaseeman/>, Twitter<
> https://twitter.com/SeemanLisa>
>
>
>
>
> ---- On Mon, 06 Feb 2017 21:55:36 +0200 Thaddeus .
> <inclusivethinking@gmail.com> wrote ----
>
> I vote 3
>
> On Feb 6, 2017 11:08 AM, "lisa.seeman" <lisa.seeman@zoho.com> wrote:
> We had issues with reading level; for example, the word "mode" is at a lower
> reading level than "hot or cold", yet the lower reading level is much harder
> to understand.
> The reason to go with Jeanne's proposal is because WCAG _might_ find it
> more testable. This would only be, in my opinion, because they have not
> bothered to read the whole proposal and testability section (or they do not
> want new tools). Also I am not sure it is more testable in different
> languages, and that is essential for WCAG. Wordlist requirements, however,
> can work easily in any language, and wordlists can be automatically
> generated by parsing a few sites.
>
> I agree that the "unless..." clause is only human testable, but that is
> very typical for WCAG.
>
>
> I want to suggest three options
>
> 1 -  we retract our current pull requests and put these in instead
>
> 2 - we go with the current pull requests. If they fail and the comments
> are hard to address then we go with Jeanne's
>
> 3 -we go with the current pull requests. we can revisit this if needed
>
> My vote is 3, to go with the current wording and see what happens
>
>
> All the best
>
> Lisa Seeman
>
> LinkedIn<http://il.linkedin.com/in/lisaseeman/>, Twitter<
> https://twitter.com/SeemanLisa>
>
>
>
>
> ---- On Mon, 06 Feb 2017 20:00:24 +0200 Jeanne Spellman
> <jspellman@spellmanconsulting.com> wrote ----
>
> A group of us at The Paciello Group (TPG) have been meeting every week in
> January to comment on the WCAG 2.1 proposals.  Because we test WCAG 2.0 all
> day, every (business) day, we have a lot of experience with both the
> language of WCAG and the testing of WCAG.  What we decided this week is
> that we want to focus our efforts toward helping COGA TF draft success
> criteria that will get into WCAG 2.1 and will accomplish most of what you
> want -- even if it is phrased differently.
>
> We started with the proposals that we thought would be the least
> controversial to the WCAG WG to include.  I looked at the Plain Language
> proposals and did my best to look at the needs identified by COGA TF, and
> craft language that I thought would be acceptable to the WCAG WG and be
> included in the first draft version of WCAG 2.1.
>
> The wording is quite different, but in my opinion, addresses the needs
> identified.  I chose reading level, because it is internationally
> standardized, and there are automated tests already available.  When I look
> at Technique  G153: Making the text easier to read
> https://www.w3.org/TR/WCAG20-TECHS/G153.html , it covers most of the
> items that the COGA TF identified.
>
> Issue 30 Proposal:
>
> Understandable Labels:  Navigation elements and form labels do not require
> reading ability greater than primary education level.  (A)  [link to WCAG’s
> definition of primary education level from UNESCO standard]
>
>
> Issue 41:
>
> Understandable Instructions:  Headings, error messages and instructions
> for completing tasks do not require reading ability greater than lower
> secondary education level.  (AA)  [link to WCAG’s definition of lower
> secondary level from UNESCO standard]
>
>
> Delta 3.1.5 (rewrite of existing WCAG 3.1.5)
>
> Understandable Content: Blocks of text either:  (AAA)
>
> - have a reading level no more advanced than lower secondary education, or
>
> - a version is provided that does not require reading ability more advanced
> than lower secondary education. [links to WCAG’s definitions of lower
> secondary education and blocks of text]
>
>
>
>
>
>
>
>
>
>
>
>
> --
> John Foliot
> Principal Accessibility Strategist
> Deque Systems Inc.
> john.foliot@deque.com
>
>
> Advancing the mission of digital accessibility and inclusion
>
>
>
>
>
> --
>
> John Foliot
>
> Principal Accessibility Strategist
>
> Deque Systems Inc.
>
> john.foliot@deque.com
>
>
>
> Advancing the mission of digital accessibility and inclusion
>
>
>
>
>
>
>
>
>
>
>
> --
>
> John Foliot
>
> Principal Accessibility Strategist
>
> Deque Systems Inc.
>
> john.foliot@deque.com
>
>
>
> Advancing the mission of digital accessibility and inclusion
>
>
>



-- 
John Foliot
Principal Accessibility Strategist
Deque Systems Inc.
john.foliot@deque.com

Advancing the mission of digital accessibility and inclusion

Received on Friday, 10 February 2017 17:45:47 UTC