- From: Phill Jenkins <pjenkins@us.ibm.com>
- Date: Thu, 5 May 2016 13:02:22 -0500
- To: Alastair Campbell <acampbell@nomensa.com>
- Cc: GLWAI Guidelines WG org <w3c-wai-gl@w3.org>, "IG - WAI Interest Group List list" <w3c-wai-ig@w3.org>
- Message-Id: <201605051802.u45I2SJJ001172@d01av04.pok.ibm.com>
A couple of tweaks to your suggestion:

Warning = common technique that should trigger a further check because in many contexts it would not meet the SC.

So, this implies that the "warning submitter" would need to include why the technique is common, and some examples of contexts (e.g. many but not all) that are common and would fail to meet the SC. We have to choose our words and terms wisely to make the science of meeting the SC as clear as possible, and still remain technology neutral, content specific (not device, browser, AT or user setting), and implementable by authoring tools (i.e. harmonize with UAAG and ATAG).

___________
Regards, Phill Jenkins

From: Alastair Campbell <acampbell@nomensa.com>
Date: 05/05/2016 12:14 PM
Subject: Re: warning category for techniques / failures.

Thanks, How about this?

Warning = something that should trigger a further check because in many contexts it would not meet the SC.

I wouldn't rule out automatic checks, e.g. the search field without a label could be automatically detected as having a hidden label.

Cheers,
-Alastair

From: "Ta, Duc" <duc.ta.740@my.csun.edu>
Date: Thursday, 5 May 2016 at 17:20
To: Gregg Vanderheiden RTF <gregg@raisingthefloor.org>
Cc: Alastair Campbell <acampbell@nomensa.com>, GLWAI Guidelines WG org <w3c-wai-gl@w3.org>, IG - WAI Interest Group List list <w3c-wai-ig@w3.org>
Subject: Re: warning category for techniques / failures.

I agree with that. I think a warning should be something that needs to be checked and verified manually to know whether the page actually fails that checkpoint or not.

On Thu, May 5, 2016 at 11:55 AM, Gregg Vanderheiden RTF <gregg@raisingthefloor.org> wrote:

Thanks. I can see the value of warnings. I just don't think you should say they are common ways that things don't pass (which means "common failures", because not passing means failure), because that becomes "common failures that don't automatically fail".

Maybe something like: Warning = something that needs to be manually checked because conformance changes for different contexts. Or some such. Tough to figure out how to say it.

Gregg

On May 4, 2016, at 3:16 AM, Alastair Campbell <acampbell@nomensa.com> wrote:

Gregg wrote: "Do not understand: 3. [New] Warnings (common ways that pages don't pass, but don't automatically fail.) What does this mean?"

Hi Gregg,

It is trying to say that: if your page does X, it probably fails. We are not 100% sure it fails, you might have passed some other way, but you'd better check.

There are probably more things we can document under a "warnings" category than failures, as they don't have to be 100% failures in all circumstances. I'm sure some of the testers on the list could come up with many examples. I'll do a starter for 10 to give some examples:

- Data table doesn't have a visible caption.
- No visible label for a form field.
- Related fields are not grouped with a fieldset & legend.
- Main heading is not an H1.
- Submit button isn't at the bottom of the form.
- Icon doesn't have supporting text.
- Use of 'click here' / 'read more'.

None of these are definitely failures, but the presence of them on a page rings warning bells! Many automated tools have a "warning" category for things they pick up but cannot be sure are failures.

Obviously we could come up with millions of these, so it should be "common" ones rather than all. We could even ask a testing tool person to see if they have any aggregate stats on these.

Kind regards,
-Alastair
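[Editor's note: Alastair's point that some warnings can be raised automatically, such as a search field whose only label is hidden, might be sketched roughly as below. This is a minimal illustrative sketch, not any tool's actual rule; the class name, attribute handling, and warning wording are assumptions for the example.]

```python
# A hypothetical "warning"-style check: flag inputs with no visible
# <label>, as a warning needing manual review rather than an automatic
# failure -- the field might still be labelled another way (e.g. a
# hidden label exposed via aria-label).
from html.parser import HTMLParser

class LabelWarningChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.labeled_ids = set()   # ids referenced by <label for="...">
        self.inputs = []           # (id, has_aria_label) for each <input>

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "label" and "for" in a:
            self.labeled_ids.add(a["for"])
        elif tag == "input" and a.get("type") not in ("hidden", "submit", "button"):
            self.inputs.append((a.get("id"), "aria-label" in a))

    def warnings(self):
        msgs = []
        for input_id, has_aria in self.inputs:
            if input_id not in self.labeled_ids:
                if has_aria:
                    # Labelled, but not visibly -- the hidden-label case;
                    # still worth a manual check in context.
                    msgs.append(f"Warning: input '{input_id}' has only a hidden (aria) label")
                else:
                    msgs.append(f"Warning: input '{input_id}' has no label; check context")
        return msgs

checker = LabelWarningChecker()
checker.feed('<form><input type="search" id="q" aria-label="Search">'
             '<label for="name">Name</label><input id="name"></form>')
for msg in checker.warnings():
    print(msg)
```

The key design point, matching the thread's definitions, is that the check never reports a failure: every hit is phrased as a warning that triggers a further manual check, since in some contexts the pattern would still meet the SC.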
Received on Thursday, 5 May 2016 18:03:03 UTC