Re: WCAG 2.2 acceptance criteria

[Edit]

s/Silver WG/Silver TF

JF

On Thu, Mar 7, 2019 at 7:22 PM John Foliot <john.foliot@deque.com> wrote:

> Hi Jennie,
>
> No worries. Having spent a lot of time early on in the COGA TF, and
> being heavily involved in the WCAG 2.1 effort, I am aware of the proposed
> SC that didn't make the first cut, and why. At one point in that process,
> there was some animosity, and accusations were tossed about that proved
> counter-productive to the larger effort, and so I was worried that we'd
> started down that path again.
>
> The issue for me isn't that future SC will require manual testing - we've
> got plenty of those already - but, as noted, as we start to be more
> specific and detailed in what we ask, there needs to be a mechanism that
> mainstream content creators can readily rely on: solutions that are
> robust, that scale for large organizations, that are relatively easy to
> implement, and that come with a means to accurately test for compliance
> (and again, at scale). This is doubly important if we want a new SC to be
> at Level A or AA. Sadly, to my mind, trying to standardize on some
> unachievable ideal usually results in frustration and disappointment: the
> sad reality is that we'll never standardize requirements that meet *all*
> user needs, so the best we can do is try to widen the circle to include
> more and more users.
>
> For example: Plain Language.
>
> We all know that plain language is important for comprehension, more so
> for those users with reading and comprehension disorders or impairments.
> Additionally, there are many resources out there, and subject matter
> expertise specific to that topic, that can help us define what we mean by
> Plain Language (and we've got some of that expertise in the Silver WG
> already), but how do you *test* for that requirement? Can "plain language"
> be measured? How do you know when you have succeeded and when you have
> failed? Why did you fail, and how do you remediate that failure? What is
> the relationship between the content and the intended audience? A web site
> dedicated to macrobiology (https://quizlet.com/subject/macrobiology/)
> will at some level not be in "plain language" due to the subject matter,
> yet universities in the US must be WCAG 2.0 (Section 508) compliant, so now
> what? What are the requirements when you layer in internationalization
> issues? (I'd be curious to hear from our colleague Makoto what plain
> language means in the context of Japanese content...)
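>
> To make the "measurable" question concrete: automated tooling today can
> compute readability *proxies*, but a proxy is not plain language. A rough
> sketch of the sort of check a tool could run (illustrative only - the
> syllable counter is deliberately naive, and none of this is a proposed
> metric):
>
>     import re
>
>     def flesch_reading_ease(text):
>         """Naive Flesch Reading Ease: higher scores = easier text."""
>         sentences = max(1, len(re.findall(r"[.!?]+", text)))
>         words = re.findall(r"[A-Za-z']+", text)
>         # Count vowel groups as a crude stand-in for syllables.
>         syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower())))
>                         for w in words)
>         n = max(1, len(words))
>         return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)
>
>     print(flesch_reading_ease("The cat sat on the mat."))  # scores "easy"
>
> A score like that can tell you the sentences are short and the words are
> small; it cannot tell you whether the *meaning* reached the intended
> audience, which is the actual requirement.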
>
> I know from prior discussions inside the Silver TF that we will on
> occasion spend considerable time word-smithing particular sentences and
> information blocks, reaching consensus only after multiple rounds of
> "fine-tuning" in real-time, because one of the goals for Silver is that
> the requirements be written in "Plain Language". That works when there
> are a half-dozen SMEs on a conference call, but how do you scale that
> up? How do Fortune 500 companies, with literally hundreds (if not
> thousands) of content contributors involved in all facets of their web
> presence, consistently meet that goal? And how do internal as well as
> third-party entities test that with enough consistency that evaluation
> results will be (relatively) equal?
>
> WCAG has tried in the past to address this need (SC 3.1.5 Reading Level -
> AAA), but we're told that reading level is not directly connected to Plain
> Language, and that even the existing AAA SC is insufficient. I can accept
> that explanation, but what then is the solution?
>
> I don't have the answer, but I have some ideas. Glenda's proposal hit the
> meat of those ideas: that we define a methodology for evaluation that can
> be applied at scale. Even that won't be 'perfect', but I believe it would
> get us closer to the real goal (even if it ultimately fails to achieve the
> goal 100%).
>
> Hopefully you will be in attendance during the upcoming F2F meetings at
> CSUN, and I'd love to sit down and discuss this further. I believe we
> share the same goals, but how we get there will require a lot of thought,
> discussion, and a fair bit of give-and-take along the way.
>
> JF
>
> On Thu, Mar 7, 2019 at 6:25 PM Delisi, Jennie (MNIT) <
> jennie.delisi@state.mn.us> wrote:
>
>> John,
>>
>> First, my apologies if my choice of phrasing sounded antagonistic. That
>> was not the intent. The conversation the COGA members had that day did not
>> include a feeling that people were specifically trying to exclude the needs
>> of the group, or that they were trying to pit one group against the other.
>> My poor choice of words did not properly communicate the issue, and I am
>> sorry. The group had identified several possible future success criteria
>> best addressed through manual testing, and testing that may take a bit of
>> time depending on the content. There had been discussion about other
>> success criteria already in place, but in no way were they upset that those
>> had been included. I will choose my words more carefully in the future.
>>
>>
>> Thank you for your comments about the "time element" and specific
>> testing methodologies for manual testing. I'm hoping more of the COGA
>> members will have time to read these comments in the next day or two, and
>> will be able to respond as well.
>>
>>
>> Jennie
>>
>>
>> *Jennie Delisi*
>>
>> Accessibility Analyst | Office of Accessibility
>>
>> *Minnesota IT Services* | *Partners in Performance* | mn.gov/mnit
>>
>> 658 Cedar Street | Saint Paul, MN 55155
>> jennie.delisi@state.mn.us | O: 651-201-1135
>> ------------------------------
>> *From:* John Foliot <john.foliot@deque.com>
>> *Sent:* Thursday, March 7, 2019 4:08:35 PM
>> *To:* Chuck Adams
>> *Cc:* Glenda Sims; Delisi, Jennie (MNIT); Alastair Campbell;
>> lisa.seeman; COGA TF; Silver TF; Andrew Kirkpatrick
>> *Subject:* Re: WCAG 2.2 acceptance criteria
>>
>> So... I *do not* want to be taken out of context here, but a few
>> comments:
>>
>> One example we discussed was the current testing required to ensure that
>> the appropriate alt text is assigned for each image used on a page. 1-2
>> images on a page, not a big deal to test.
>>
>>
>> This is actually more nuanced, because the Success Criterion does not call
>> for "appropriate"; it demands "equivalent purpose", which may not always be
>> the same. <img src="" alt="sunset"> would pass the SC as written, even
>> though most of us instinctively know that the alt text there is "weak"
>> (and, some might argue, warrants a longer textual description). *But
>> it meets the legal requirement.* Facebook's AI will often provide alt
>> text like "may include two people in front of a car" [sic], which again
>> isn't great alt text, but it *does* meet the minimum bar. (And while I
>> never advocate for just the minimum bar, I am pragmatic enough to realize
>> that sometimes that's the best you're gonna get.)
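>>
>> (This is also why automated checks stop at "present and non-empty" rather
>> than "appropriate". A minimal sketch of the part a machine can actually
>> verify - a hypothetical checker, not any shipping tool:
>>
>>     from html.parser import HTMLParser
>>
>>     class AltChecker(HTMLParser):
>>         """Flags what a machine CAN catch: missing or empty alt."""
>>         def handle_starttag(self, tag, attrs):
>>             if tag != "img":
>>                 return
>>             alt = dict(attrs).get("alt")
>>             if alt is None:
>>                 print("FAIL: alt attribute missing")
>>             elif not alt.strip():
>>                 print("NOTE: empty alt - decorative? a human must decide")
>>             else:
>>                 print(f'Mechanical PASS: alt="{alt}" - equivalence needs a human')
>>
>>     AltChecker().feed('<img src="sunset.jpg" alt="sunset">')
>>
>> Everything beyond that last branch is human judgement, which is exactly
>> the scaling problem.)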
>>
>>
>> But, on a catalogue page, it could be significant.
>>
>>
>> Maybe, maybe not. I could also envision a code block for that catalog
>> page that looked something like this: <img src="" alt="Photo:
>> %item_name%"> <h3>%item_name%</h3>, where the value of %item_name%
>> would, in both instances, be populated from a database. Again, I'm not
>> *advocating* for that; I'm suggesting it is a reasonable solution in some
>> scenarios, and there the test could be as simple as reviewing the source
>> code outside of the database (or manually checking 3 images to ensure the
>> pattern is consistent, and then moving on).
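>>
>> In that scenario the per-image check collapses into a one-time template
>> check, something like this (hypothetical names, sketch only):
>>
>>     def alt_matches_template(item_name, rendered_alt):
>>         # Verify the template once; every catalog row inherits the result.
>>         return rendered_alt == f"Photo: {item_name}"
>>
>>     catalog = [("Red Chair", "Photo: Red Chair"),
>>                ("Oak Desk", "Photo: Oak Desk")]
>>     assert all(alt_matches_template(n, a) for n, a in catalog)
>>
>> Test the pattern, spot-check a few rendered rows, and move on.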
>>
>>
>> The question came down to the concept that there may be manual testing
>> that (at this time) may be the only way to truly ensure a barrier is not
>> encountered by individuals with cognitive disabilities.
>>
>>
>> Sure, but as the requirements become more sophisticated, a specific
>> testing methodology must also be articulated and put into place: we can't
>> just toss requirements over the wall and hope everyone will figure it out
>> on their own. The Silver TF has discussed this at some length already (and
>> AFAIK has not yet come up with a definitive "solution").
>>
>> From a matter of equality standpoint, why would the testing to address
>> the needs for one group be ok if it takes a lot of time, because they got
>> in on the creation of success criteria at the beginning of the process; but
>> for another group whose needs were addressed more thoroughly later in the
>> development of success criteria, manual testing that may sometimes require
>> some time cannot be considered?
>>
>> Respectfully, I find that something of an antagonistic statement: this is
>> not singling out one group over another, it's about ensuring that what we
>> demand of content creators can be accurately and consistently verified for
>> compliance requirements. I would strongly caution that this discussion not
>> dive into one that pits one user group against the other: we're all here
>> for the same reasons. [*Flagging chairs*]
>>
>>
>> Meanwhile, Glenda wrote:
>>
>> To something like this:
>>
>>    - Be feasibly testable in a "reasonable amount of time" through
>>    automated or manual processes prior to Candidate Recommendation
>>    stage.  Examples include:
>>       - Automated - an automated testing tool exists that quickly and
>>       accurately determines whether the criterion is met.
>>       - Assisted - a software tool exists that makes it more efficient
>>       for a tester to accurately determine whether the criterion is met.
>>       - Manual - a manual process exists that makes it possible for a
>>       tester to accurately determine whether the criterion is met.
>>
>> note:  "reasonable amount of time" can be determined by a call for
>> consensus.
>>
>> I'd actually leave the time element on the cutting-room floor: a)
>> personally, I don't think we'd ever find that magic number, and b) I
>> vaguely recall an SC that speaks about "Timing Adjustable" <grin>, which,
>> to paraphrase, effectively states that we shouldn't be locking people into
>> specific time-frames, and that they can "adjust" that timing to meet their
>> individual needs. I would think that this would be of particular interest
>> to the COGA TF, as I suspect this is a real issue for many of those in that
>> user-group.
>>
>> I think what is far more important (I'd go as far as "Critical") is that
>> we produce, in conjunction with any SC that requires manual testing, a
>> specific testing methodology - a 'script' as it were - on how to
>> consistently test a component on the page, with clear and unambiguous
>> 'markers' on what is sufficient versus what is not. It's the methodology
>> piece that is critical, not the time it takes to do it (for example, the
>> *only* way to accurately determine if Audio Description is "correct"
>> today is to watch the entire video with Audio Description turned on -
>> whether that video is 3 minutes or 30 minutes or 300 minutes...)
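>>
>> To sketch what I mean by a 'script' (purely illustrative - not a proposed
>> W3C format, and the steps are invented for the example):
>>
>>     # Each step carries an unambiguous pass/fail marker, so two
>>     # independent testers should converge on the same verdict.
>>     AUDIO_DESCRIPTION_CHECKS = [
>>         {"step": "Play the full video with Audio Description enabled.",
>>          "pass": "every essential visual event is described",
>>          "fail": "any essential visual event goes undescribed"},
>>         {"step": "Listen for collisions with dialogue.",
>>          "pass": "descriptions sit in the natural pauses",
>>          "fail": "descriptions talk over dialogue"},
>>     ]
>>
>>     for i, c in enumerate(AUDIO_DESCRIPTION_CHECKS, 1):
>>         print(f"{i}. {c['step']}")
>>         print(f"   PASS if {c['pass']}; FAIL if {c['fail']}")
>>
>> The point is the unambiguous markers, not the format they're written in.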
>>
>> Minus the time reference however, +1 to Glenda's suggestion.
>>
>> JF
>>
>>
>> On Thu, Mar 7, 2019 at 3:31 PM Chuck Adams <charles.adams@oracle.com>
>> wrote:
>>
>> +1 Chuck
>>
>>
>>
>> *From:* Glenda Sims <glenda.sims@deque.com>
>> *Sent:* Thursday, March 7, 2019 2:03 PM
>> *To:* Delisi, Jennie (MNIT) <jennie.delisi@state.mn.us>
>> *Cc:* John Foliot <john.foliot@deque.com>; Alastair Campbell <
>> acampbell@nomensa.com>; lisa.seeman@zoho.com; COGA TF <
>> public-cognitive-a11y-tf@w3.org>; Silver TF <public-silver@w3.org>
>> *Subject:* Re: WCAG 2.2 acceptance criteria
>>
>>
>>
>> Goodwitch magically appears after being MIA for weeks to say:
>>
>>
>>
>> I suggest we clarify this bullet a bit more.  The example is useful, but
>> it isn't the only way to be "feasibly testable".  And as the sentence is
>> currently written, it is hard to parse.  So what if we changed from this:
>>
>>    - Be feasibly testable through automated or manual processes, i.e.
>>    take a few minutes per page with tools available prior to Candidate
>>    Recommendation stage.
>>
>> To something like this:
>>
>>    - Be feasibly testable in a "reasonable amount of time" through
>>    automated or manual processes prior to Candidate Recommendation stage.
>>    Examples include:
>>
>>
>>    - Automated - an automated testing tool exists that quickly and
>>       accurately determines whether the criterion is met.
>>       - Assisted - a software tool exists that makes it more efficient
>>       for a tester to accurately determine whether the criterion is met.
>>       - Manual - a manual process exists that makes it possible for a
>>       tester to accurately determine whether the criterion is met.
>>
>> note:  "reasonable amount of time" can be determined by a call for
>> consensus.
>>
>>
>>
>> I'd suggest that if we pursue this "reasonable amount of time"
>> angle...that it be based on "reasonable amount of time" to test an ELEMENT
>> (not a page).  I think the variance in amount of time to test a page (when
>> pages can endlessly scroll) will make it impossible to come up with a
>> "reasonable amount of time" per page.
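>>
>> The arithmetic makes the point (the numbers here are assumed purely for
>> illustration):
>>
>>     T_PER_ELEMENT = 0.5  # minutes per element - an assumed figure
>>     for n in (10, 200, 5000):  # short page ... endless scroll
>>         print(f"{n} elements -> {n * T_PER_ELEMENT:g} minutes")
>>
>> A per-element budget can be agreed on; a per-page budget is unbounded.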
>>
>>
>>
>> I'm not in favor of leaving the requirement as it is currently drafted at
>> https://www.w3.org/WAI/GL/wiki/WCAG_2.2_Success_criterion_acceptance_requirements
>>
>>
>>
>>
>> G
>>
>>
>>
>> *glenda sims* <glenda.sims@deque.com>, cpacc | team a11y lead | 512.963.3773
>>
>>
>>
>>         deque systems
>>   accessibility for good
>>
>>
>>
>>
>>
>> On Thu, Mar 7, 2019 at 1:31 PM Delisi, Jennie (MNIT) <
>> jennie.delisi@state.mn.us> wrote:
>>
>> Hello,
>>
>> Part of the concerns the COGA group discussed was that manual tests are
>> often required, and the time required to test different pages can vary
>> greatly, depending on the content of each page.
>>
>>
>>
>> One example we discussed was the current testing required to ensure that
>> the appropriate alt text is assigned for each image used on a page. 1-2
>> images on a page, not a big deal to test. But, on a catalogue page, it
>> could be significant.
>>
>> The question came down to the concept that there may be manual testing
>> that (at this time) may be the only way to truly ensure a barrier is not
>> encountered by individuals with cognitive disabilities.
>>
>>
>>
>> I, too, work in an environment where a lot of testing occurs every day.
>> And we have to hold contractors, vendors, and employees to standards that
>> can be measured. We need to be able to provide detailed and consistent
>> feedback when a failure of a success criterion has been noted. The time
>> taken to complete testing is definitely important. But consideration of
>> barriers is the whole goal, right?
>>
>>
>>
>> From a matter of equality standpoint, why would the testing to address
>> the needs for one group be ok if it takes a lot of time, because they got
>> in on the creation of success criteria at the beginning of the process; but
>> for another group whose needs were addressed more thoroughly later in the
>> development of success criteria, manual testing that may sometimes require
>> some time cannot be considered?
>>
>>
>>
>> I would like to propose that the language about the time it takes to
>> complete a test include an exception process, or that the time component
>> be reworded, so that the barriers experienced by this group of individuals
>> with disabilities receive fair consideration in this process.
>>
>>
>>
>> Jennie
>>
>>
>>
>> *Jennie Delisi, MA, CPWA*
>>
>> Accessibility Analyst | Office of Accessibility
>>
>> *Minnesota IT Services* | *Partners in Performance*
>>
>> 658 Cedar Street
>>
>> St. Paul, MN 55155
>>
>> O: 651-201-1135
>>
>> *Information Technology for Minnesota Government* | mn.gov/mnit
>>
>>
>>
>>
>>
>>
>>
>>
>> *From:* John Foliot <john.foliot@deque.com>
>> *Sent:* Thursday, March 7, 2019 11:26 AM
>> *To:* Alastair Campbell <acampbell@nomensa.com>
>> *Cc:* lisa.seeman@zoho.com; Delisi, Jennie (MNIT) <
>> jennie.delisi@state.mn.us>; COGA TF <public-cognitive-a11y-tf@w3.org>;
>> Silver TF <public-silver@w3.org>
>> *Subject:* Re: WCAG 2.2 acceptance criteria
>>
>>
>>
>> Hi All,
>>
>>
>>
>> To perhaps also put a finer distinction on it... W3C Process mandates two
>> independent implementations of whatever new technology is being proposed -
>> a testing activity we ran last spring during CSUN for the 2.1 Success
>> Criteria (where, for SC 1.3.6 @ AAA, we actually used the
>> implementations that Lisa had pointed us to). Those implementations may or
>> may not also serve as a 'testing tool', but as the Silver discussion
>> continues, a repeatable testing methodology will need to surface for each
>> new requirement, whether that is via a tool (mechanical tests - see: ACT
>> TF), or via a 'cognitive walk-through' or similar methodology (a process
>> still to be fully defined in Silver).
>>
>>
>>
>> At the end of the day, while it is true that our primary audience is and
>> will always be users with disabilities (of all stripes and forms), a second
>> important consideration is compliance requirements mandated by legislation.
>> To clear that hurdle, we will need to ensure that both implementers and
>> consumers have a baseline measurable & impartial (non-subjective) "test",
>> so that entities can then claim conformance based upon the outcome of said
>> test.
>>
>>
>>
>> JF
>>
>>
>>
>> On Thu, Mar 7, 2019 at 10:52 AM Alastair Campbell <acampbell@nomensa.com>
>> wrote:
>>
>> Hi Lisa,
>>
>>
>>
>> > To meet new user needs we may need new tools, and reviewers may need to
>> acquire new skills and knowledge.
>>
>>
>>
>> Which is fine; perhaps we can clarify that it means available at the time
>> of publication?
>>
>>
>>
>> New tools, especially if they “take a day” from a programmer would need
>> to be available at the time of publication, for the reasons I outlined in
>> the last email.
>>
>>
>>
>>
>>
>> > Also, new tools will come as soon as we know an SC will be accepted; in
>> other words, at CR. With WCAG's current history they will not come before
>> then.
>>
>>
>>
>> Can you point to a previous example? I.e. a case where a tool that didn't
>> yet exist was required to meet an SC and wasn't available until after CR?
>>
>> The closest I can think of is ARIA in WCAG 2.0, but it wasn’t actually
>> required to meet the SCs.
>>
>>
>>
>> It is very difficult to have something in CR which then has to be pulled
>> because no one has created a tool; the whole timeline goes back a step. The
>> way the W3C prefers to work is to have working prototypes/code created
>> prior to specs. This has been a hard-learned approach [1].
>>
>>
>>
>> I suggest that if an SC needs a tool, we work up the SC template and go
>> through the initial process. That could be accepted on the condition that a
>> tool will be available. If it does not become available, then the SC will
>> be removed before CR.
>>
>>
>>
>> It would also help to put those SC(s) first so people have more time to
>> work on the tools; I'll make a note of that.
>>
>>
>>
>> Cheers,
>>
>>
>>
>> -Alastair
>>
>>
>>
>>
>>
>> [1] An accessibility example of what should be a 'simple' thing: the
>> naming algorithm.
>>
>>
>> https://www.linkedin.com/pulse/future-accname-spec-planning-strategy-functional-using-garaventa/
>>
>>
>>
>>
>> --
>>
>> *John Foliot* | Principal Accessibility Strategist | W3C AC Representative
>> Deque Systems - Accessibility for Good
>> deque.com
>>
>>
>>
>>
>>
>> --
>> *John Foliot* | Principal Accessibility Strategist | W3C AC Representative
>> Deque Systems - Accessibility for Good
>> deque.com
>>
>>
>
> --
> *John Foliot* | Principal Accessibility Strategist | W3C AC Representative
> Deque Systems - Accessibility for Good
> deque.com
>
>

-- 
*John Foliot* | Principal Accessibility Strategist | W3C AC Representative
Deque Systems - Accessibility for Good
deque.com

Received on Friday, 8 March 2019 01:24:55 UTC