- From: Hugh Sasse Staff Elec Eng <hgs@dmu.ac.uk>
- Date: Thu, 4 Jan 2001 10:40:41 +0000 (GMT)
- To: Charles McCathieNevile <charles@w3.org>
- cc: Dave J Woolley <david.woolley@bts.co.uk>, "'www-amaya@w3.org'" <www-amaya@w3.org>
On Thu, 4 Jan 2001, Charles McCathieNevile wrote:

> In fact a large number of RSACi ratings are not 0,0,0,0. Likewise, even a
> commercially biased claim is more useful than no claim at all, and since it
> would become trivially easy to write a counter-claim (using, for example, a

It is not so trivial to defend such a claim in court, though.

> version of the Web Accessibility Report Tool that generated such information
[...]
> be signed, and corporations are also interested in minimising the number of
> people who say "the claims of company X are routinely untrustworthy" in a way
> that shows up in search engines...

The bigger corporations seem able to ride out adverse publicity, and "Our
publicity machine is bigger than yours" works for them.

> In fact the primary goal as I see it is for editing software such as Amaya to
> track the accessibility status of content being worked on, and only ask

This is much more useful, I think: people need software that will increase the
accessibility of content, and the definition of accessibility needs to be
public so that it can be seen, criticised, and enhanced. However...

> authors to fix things that need fixing. (As well as having a way of recording
> information if a human tested something difficult to test by machine, that
> persists when a new tool is used to work on the content.)

...this last bit I'm not sure about. If new work is done on the content, then
you have to re-test accessibility in case it has been broken. Or is this for
regression testing, so that changes to the site can be compared to previous
test results?

> Cheers
>
> Charles McCN

Hugh
hgs@dmu.ac.uk
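
P.S. To make the regression-testing idea concrete, here is a minimal sketch
(in Python; all the check names, statuses, and file names are invented for
illustration, and this is not the format of any existing tool). The idea is to
keep the per-check results from the previous run, including items a human
verified by hand, and flag only the checks that were fine before but fail now:

from typing import Dict, Tuple

# Map (page, check id) -> "pass" | "fail" | "human-verified".
# The check ids and statuses are hypothetical, just for this sketch.
Results = Dict[Tuple[str, str], str]

def regressions(baseline: Results, current: Results) -> Results:
    """Return the checks that were OK (pass or human-verified) in the
    baseline run but fail in the current run."""
    broken = {}
    for key, status in current.items():
        if status == "fail" and baseline.get(key) in ("pass", "human-verified"):
            broken[key] = status
    return broken

# Results recorded before the latest edits...
baseline = {
    ("index.html", "img-alt"): "pass",
    ("index.html", "table-summary"): "human-verified",
}
# ...and after them: an edit has broken the alt text on index.html.
current = {
    ("index.html", "img-alt"): "fail",
    ("index.html", "table-summary"): "human-verified",
}

for (page, check) in regressions(baseline, current):
    print("REGRESSION: %s: check '%s' passed before, fails now" % (page, check))

Run against a stored baseline after each edit, only genuine regressions would
need the author's attention; checks that still pass, or that were already
failing, stay quiet.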