- From: John Foliot <john@foliot.ca>
- Date: Thu, 3 Mar 2022 14:06:17 -0500
- To: Chuck Adams <charles.adams@oracle.com>
- Cc: "public-silver@w3.org" <public-silver@w3.org>, "w3c-wai-gl@w3.org" <w3c-wai-gl@w3.org>
- Message-ID: <CAFmg2sV2xQ93-dX=7daYz2z8k=W18oWGkOp9WTjCQu4MuuMhqg@mail.gmail.com>
Hi Chuck and team,

As I slowly resurface, I wanted to note that I am personally struggling with this approach. Apologies in advance for the longish response, but as I am unable to participate in any other way at this time, I wanted to get my thoughts out there.

One of the goals of 'Protocols' (in my mind) was to incorporate into our spec user needs that cannot be 'evaluated' to true or false, because fundamentally the answers will always be subjective.

Per your agenda recommendation, let's look at plainlanguage.gov, which is an example I have always thought of as meeting the broad definition of 'Protocol'. As I read that document, I note the following under the guideline heading of "Write for your audience" <https://www.plainlanguage.gov/guidelines/audience/>, where it explicitly states:

*"Use language your audience understands and feels comfortable with. Take your audience's current level of knowledge into account. Don't write for an 8th-grade class if your audience is composed of PhD candidates, small business owners, working parents, or immigrants. Only write for 8th graders if your audience is, in fact, an 8th-grade class."*

Now, using just that statement, let's apply it to your request: "Propose a way to evaluate (pass/fail):
   i. Whether the protocol was done
   ii. How well the protocol was followed
   iii. The quality of the results"

To answer the first bullet point, "Whether the protocol was done" first requires that a third-party evaluator knows who the audience is, and what their knowledge and reading skills are. It is unclear to me today how a third-party evaluator could truthfully know the answer to that question. There may be times when it is more obvious (a treatise on nuclear physics is likely not targeted at 8th graders), but what, for example, is the intended reading level of Wikipedia? Facebook or Twitter? The W3C website, or educational institutions or government agencies? Banking and insurance sites? Why, and says who? What of sites like https://www.hhs.gov, which has content targeted at the broader population (especially in the context of COVID information), but also content intended for a very specific and highly educated audience (doctors) that requires a specialized level of skill and experience? The applicability of "Plain Language" there will vary from page to page based on topic and intended audience, but how would that be evaluated or reported more broadly?

But let's say that somehow the site owner explicitly claims that their entire site has been authored to a Grade 8 reading level. Putting aside the fact that COGA has consistently asserted that reading-level formulas (Flesch–Kincaid, FOG/Gunning index, etc.) do not solve their needs, which (if any) of those existing test mechanisms is the right one to evaluate whether the content has been authored to the appropriate reading level? Does the use of multi-syllabic words (one of the things that will increase the reading level in Flesch–Kincaid) truly make a document harder to read? Additionally, Flesch–Kincaid is exclusively intended for English - it does not work on, say, French or Spanish content, never mind languages such as Hebrew (right-to-left reading order) or any of the CJK languages (Chinese/Japanese/Korean) - so what tools or mechanisms would be used to address internationalization issues?
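As a purely illustrative aside (my own rough sketch, off the top of my head, not a vetted tool and not anything the subgroup has endorsed): the Flesch–Kincaid grade level is nothing more than arithmetic over sentence length and syllable counts, which is exactly why it rewards short words rather than comprehension, and why it only makes sense for English. Something like:

import re

def count_syllables(word):
    # Naive, English-only heuristic: count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    # Flesch-Kincaid Grade Level:
    #   0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / max(1, len(sentences)))
            + 11.8 * (syllables / max(1, len(words)))
            - 15.59)

# Two sentences with roughly the same meaning:
print(flesch_kincaid_grade("Use words your readers know."))
print(flesch_kincaid_grade("Utilize terminology familiar to your readership demographic."))

The second sentence scores many grade levels higher purely because its words have more syllables - which tells us nothing about whether the intended audience actually "understands and feels comfortable with" it, and nothing at all about French, Hebrew, or CJK content.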
Next is measuring "How well the protocol was followed", which is another subjective determination. Given that the guideline requirement is *"Use language your audience understands and feels comfortable with"*, that again is impossible to measure. For example, the statement *"9 out of 10 users can understand this sentence"* would likely be very comfortable for a typical Grade 8 student, but if that student is impacted by dyscalculia, that sentence would probably be extremely uncomfortable for them, due to the use of numerals. Changing "9" to "nine" may help some of those users, but not all (if I fully understand the impact of dyscalculia <https://www.dyscalculia.org> on individual users). Measuring comfort is subjective and individual in nature, and it cannot be scaled in any way that I can think of.

Based on the above, I would then have to fundamentally question bullet 3, "The quality of the results", simply because my reading comfort level will be different from yours, or from that of anyone else reading this email.

Earlier, one of the key points that I thought the group had agreed to (Jan. 7th <https://www.w3.org/WAI/GL/task-forces/silver/wiki/Protocols#7_January_2022>) was that Protocols measure inputs, not outputs - which was (I felt) close enough. The goal there would be to look for evidence (I continue to propose formal assertions) that a protocol has been consulted and applied as intended. The plainlanguage.gov guidelines in and of themselves cannot be measured for successful outcomes, as those outcomes are too varied and too contextual. But documented evidence that the protocol is being consistently referenced as content is being authored, or that the editorial staff have been trained and apply the principles of Plain Language in their day-to-day activities, is an indicator that when content is written, it is written with informed guidance applied. That does not claim perfect, nor even close to perfect, but it does claim "informed and earnestly applied", which, I will assert, is about as good as we can get.

This is the reason why I have always linked 'Assertions' to the larger 'Protocols' discussion: when an entity makes a public statement, especially one related to a highly regulated topic (like accessibility/human rights considerations), there is an inherent level of risk - if you say it, you had better be able to prove it in court. And so, for the conformance piece, I continue to suggest that publicly available conformance statements about the protocols used or applied, coupled with the (legal) risk of failing to live up to your public assertion, would be the mechanism for determining successful application (i.e. input, not output). It involves a level of trust - but, I will also assert, no more or less trust than expecting that text alternatives are accurate and useful (another subjective determination that will never be measurable in a consistent and meaningful way). Broadly speaking, however, most experts could (I suggest) recognize when a protocol was NOT applied, and so I conclude sites won't be making claims they cannot back up in court.

Specific to plainlanguage.gov (and the US Federal requirement to use plain language), this is essentially the approach the US is taking today. From the Law and requirements <https://www.plainlanguage.gov/law/> section of that site: *"By October 13, 2011, agencies must: ... Write annual compliance reports and post these reports on its plain language web page."* That is the accountability piece, and the model I continue to propose for all Protocols.

What would an assertion look like in WCAG 3? I believe that is an important part of the larger discussion which we've not yet had.
Working completely off the top of my head, however, I could envision something like the following (this is all straw-man, and will need to be refined if the idea is accepted):

********************

Protocol:
- Plain language

Reference:
- https://plainlanguage.gov

Effective dates:
- This claim is in effect between Jan 1, 2022 - Jan 1, 2023
- (Previous claims can be found at: ___URL___)

Claim:
- Content written for this site is authored for users with a Grade 8 reading level or greater.
- Some users may still experience difficulties with some or all of the content on this site.

Steps Taken to Implement this Protocol:
- The principles of plainlanguage.gov have been incorporated into the XYZ Widget Company's writing guide "The Voice of the Consumer".
- Corporate editorial staff have all taken professional training/refresher learning exercises within the past 12 months.
- Training provided: The Essentials of Plain Language - a nine-part online training course that covers plain language principles and the Plain Writing Act of 2010. (https://academy.govloop.com/watch/hDzHyqdB4T7K3fjbvuGk8B)
- Random editorial content is evaluated monthly by the XYZ Widget Company's Chief Accessibility Officer to verify that the protocol is being applied correctly.

Date of this report:
- January 22, 2022

********************
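If it helps the discussion along, that same straw-man could also be published in a machine-readable form alongside a site's accessibility statement. Again, this is entirely my own invention - the field names below are illustrative only, not a proposed WCAG 3 vocabulary:

import json

# Illustrative only: these field names are mine, not any agreed WCAG 3 format.
protocol_assertion = {
    "protocol": "Plain language",
    "reference": "https://plainlanguage.gov",
    "effective": {"from": "2022-01-01", "to": "2023-01-01"},
    "previous_claims": "___URL___",  # placeholder, as in the straw-man above
    "claims": [
        "Content written for this site is authored for users with a Grade 8 reading level or greater.",
        "Some users may still experience difficulties with some or all of the content on this site.",
    ],
    "steps_taken": [
        "plainlanguage.gov principles incorporated into the corporate writing guide",
        "Editorial staff trained/refreshed within the past 12 months",
        "Random editorial content reviewed monthly by the Chief Accessibility Officer",
    ],
    "report_date": "2022-01-22",
}

print(json.dumps(protocol_assertion, indent=2))

The point is not the format; the point is that the claim is dated, public, and attributable - i.e. an input that can be held to account, not an output score.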
Could this be gamed? Of course it could! Any and all of WCAG - even today - can be gamed by content owners if that is their goal. I could do a 20-screen, *subjective* analysis of pages from a site today while studiously avoiding the single page with MathML, because I already knew that the MathML on that site was not accessible; "don't ask, don't tell" ensures my score isn't "too low" because we simply sidestepped the MathML... Additionally, today, while not part of WCAG, the Section 508 VPAT templates support the notion of content that "Partially Supports" a WCAG SC, but then leave defining "partial" to anyone - so gaming the Rec, even today, is very easy to do if that is your intention.

While I absolutely believe helping to define conformance is part of our remit, I also strongly believe that enforcing compliance is outside of our deliverable today.

JF

On Wed, Mar 2, 2022 at 1:41 PM Chuck Adams <charles.adams@oracle.com> wrote:

> Hi All,
>
> The Protocols Subgroup will meet again this Friday, March 4th at 9:00 AM
> Boston Time (1400 UTC).
>
> The Zoom teleconference data is provided at this link:
> https://www.w3.org/events/meetings/bfc72cd9-fdfc-4847-826a-01afb9e3f5e7/20211105T090000
>
> We will be on IRC using the W3C server at https://irc.w3.org,
> in channel *#wcag3-protocols*
>
> These and additional details of our work, including minutes, current,
> and archived draft documents are available on our subgroup wiki page here:
> https://www.w3.org/WAI/GL/task-forces/silver/wiki/Protocols
>
> *** Agenda ***
>
> agenda+ Develop a way for a lay-person to assess whether a protocol was
> followed
>
> 1. Pick 2-3 things that are likely protocols (Plainlanguage.gov, BBC
> style guidelines, ?)
> 2. Propose a way to evaluate (pass/fail):
>    i. Whether the protocol was done
>    ii. How well the protocol was followed
>    iii. The quality of the results
>
> Regards,
>
> Charles Adams

--
*John Foliot* | Senior Industry Specialist, Digital Accessibility |
W3C Accessibility Standards Contributor |

"I made this so long because I did not have time to make it shorter." - Pascal

"links go places, buttons do things"
Received on Thursday, 3 March 2022 19:06:51 UTC