- From: David Fazio <dfazio@helixopp.com>
- Date: Mon, 11 Mar 2019 22:34:39 +0000
- To: Alastair Campbell <acampbell@nomensa.com>
- CC: "lisa.seeman@zoho.com" <lisa.seeman@zoho.com>, public-cognitive-a11y-tf <public-cognitive-a11y-tf@w3.org>
- Message-ID: <B37643A2-3E76-4545-988D-66AF77C0AE57@helixopp.com>
Any high school English teacher worth their salt can scan hundreds of pages of essays riddled with errors, including incorrect spacing, in no time flat. Being a writer myself, even though I am blind in the left half of each eye and, as a result, have severely impaired depth perception and spatial reasoning, I can spot spacing errors relatively easily. I think all this worrying is much ado about nothing. Build it and they will come; require it and it will be done. By the way, most neuro-cognitive issues have to do with visual and/or audio complexity, not necessarily verbiage or word choice. That's my 2 cents.

- Fazio

This message was sent from my iPhone. Please excuse any typographic errors.

On Mar 11, 2019, at 3:11 PM, Alastair Campbell <acampbell@nomensa.com> wrote:

Hi Lisa,

I think the main thing is the change in process below, which should address the worry about there being a chance to develop tools.

> I am strongly against requiring tools to go to CR. Having the algorithm etc. should be enough.

It didn't say a tool was required (most checks do not need a tool), but that if a test requires a tool, that tool must be available.

> it is not reasonable to expect people to invest in making tools again before we even get to CR considering the group will probably pull everything out in the CR stage anyway

The other side of that is that, IF a check requires a tool (e.g. some kind of complex language assessment), it is unreasonable to bring in a requirement that people have to test against when there is no means of testing it.

Also, the process being proposed is not going to be the same as for 2.1 (paraphrasing):

* Small cross-TF groups create the SC templates, with description, test procedure, etc.
* 2-4 of those go to the AGWG for review at a time, working in a two-week sprint model.
* Approved SCs go into the editors' draft.
* Rinse and repeat until we reach the wide-review deadline (or run out of SCs, but that's unlikely).
* There is a two-month gap (minimum) between wide review (where everything has been approved by the group) and CR.

(Rough spreadsheet of the timeline: https://docs.google.com/spreadsheets/d/1cK6iDM5QzwyGQK-3L4RBFK7dPGwdPRybqIJMvtOvMSo/edit#gid=0)

That will avoid SCs being pulled out when we get to CR due to not being approved: they will have been approved before going into the editors' draft. If we put SCs which require a tool at the top of the queue, there will be months between an SC being approved and CR. I'm suggesting that anything that needs a tool for testing would be marked as such in the wide review and removed prior to CR if that tool is not available.

> If a tool could reasonably be built in a few days of programming time, and in the meantime it can be tested by hand (even if that is slower), that should be enough.

If it can be tested manually, just more slowly, I wouldn't call that a 'required tool'. However, something algorithmic, such as checking spacing ratios or language, could take hours per page by hand. If building the tool takes a few days of programming time, there is a minimum two-month window between the group approving the SC and the CR deadline in which to build it.

Cheers,

-Alastair
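To make the spacing-ratio example concrete: below is a minimal TypeScript sketch of the kind of automated check being described, assuming the WCAG 2.1 SC 1.4.12 Text Spacing values as illustrative pass/fail thresholds. The element selector, thresholds, and function name are assumptions for illustration, not a tool discussed in this thread.

// Minimal sketch of an automated spacing-ratio check, using the
// WCAG 2.1 SC 1.4.12 Text Spacing values as illustrative thresholds.
// The selector, thresholds, and function name are hypothetical.
function checkSpacingRatios(root: Element = document.body): HTMLElement[] {
  const flagged: HTMLElement[] = [];
  for (const el of Array.from(root.querySelectorAll<HTMLElement>("p, li, dd"))) {
    const style = window.getComputedStyle(el);
    const fontSize = parseFloat(style.fontSize);            // computed value is always in px
    const lineHeight = parseFloat(style.lineHeight);        // "normal" parses to NaN
    const letterSpacing = parseFloat(style.letterSpacing);  // "normal" parses to NaN

    // Line height at least 1.5 times the font size.
    if (!Number.isNaN(lineHeight) && lineHeight < 1.5 * fontSize) {
      flagged.push(el);
      continue;
    }
    // Letter spacing at least 0.12 times the font size.
    if (!Number.isNaN(letterSpacing) && letterSpacing < 0.12 * fontSize) {
      flagged.push(el);
    }
  }
  return flagged;
}

// Usage, e.g. compiled and pasted into a browser console:
// console.log(`${checkSpacingRatios().length} elements below the ratios`);

A check like this runs in milliseconds per page, which is the contrast with doing the same ratio arithmetic by hand at hours per page.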
Received on Monday, 11 March 2019 22:35:05 UTC