Efficient auditing - small sites - no sampling

Dear Alistair,

I explained our current methodology in detail earlier this year, but for a small site, such as a project or other “info” site, the basic process is:

A) Automatic validation of HTML & CSS (this covers code validity, the doctype and the lang attribute)
B) Visually check the code of the Site Map, Home page and a couple of normal pages for consistency and efficient code, and confirm that tables are not used for layout and core functionality does not depend on scripts.
C) For each and every content page 
     1) list and check relevance of headings, image text and links
     2) skim read content for relevance, spelling, instructions, clearly defined links
     3) check browser title is relevant 
     4) check that pages transform when zoomed
     5) note any pages that contain additional elements such as data tables, forms or animations
     6) check colour contrast on one page and any others that have different design/colours (a sketch of the contrast-ratio calculation appears after this list)
D) Skip through the site using keyboard only (in particular, check that focus is visible, rollovers work and progress is logical)
E) Return to any pages identified in C) 5) and check the relevant issues (usually just a form)
F) Skip through the site with a screen reader
G) Review score sheet and confirm that anything not scored is Not Applicable.
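
As a side note on step C) 6: the colour-contrast check comes down to the WCAG 2.0 contrast-ratio calculation. Here is a minimal Python sketch of that calculation (the relative-luminance and (L1 + 0.05) / (L2 + 0.05) formulas are from the WCAG 2.0 definitions; the hex colours in the example are made up for illustration, not taken from any audited site):

# Minimal sketch of the WCAG 2.0 contrast-ratio check used in step C) 6.
def relative_luminance(hex_colour):
    """Relative luminance of an sRGB colour, per the WCAG 2.0 definition."""
    r, g, b = (int(hex_colour.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4))
    def linearise(c):
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    return 0.2126 * linearise(r) + 0.7152 * linearise(g) + 0.0722 * linearise(b)

def contrast_ratio(foreground, background):
    """Contrast ratio (L1 + 0.05) / (L2 + 0.05), with L1 the lighter colour."""
    l1, l2 = sorted((relative_luminance(foreground),
                     relative_luminance(background)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

if __name__ == "__main__":
    ratio = contrast_ratio("#767676", "#ffffff")  # grey text on a white background
    # WCAG 2.0 AA requires at least 4.5:1 for normal text and 3:1 for large text.
    print("%.2f:1 -> %s" % (ratio, "pass" if ratio >= 4.5 else "fail"))

For example, #767676 on white comes out at roughly 4.5:1, just over the AA threshold for normal-size text.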

All done! (assuming no major problems identified)

If there are any problems then it takes longer, as we have to make relevant notes, copy and paste any code, take screenshots, etc. But if it is straightforward then, once up and running, most pages in step C) take about a minute to check.

Once the validation is done we have to write up the report, which takes a bit longer!

Richard


From: Alistair Garrison 
Sent: Tuesday, June 26, 2012 6:26 PM
To: RichardWarren ; Eval TF 
Subject: Re: All pages

Hi Richard,  

I really don't think that the sampling method we have been working on for months would have taken a half day to set up - the small site you mentioned only consists of a small number of very similar pages.

But what tools do you use to conduct a WCAG 2.0 AA audit at a rate of 2 mins per page - wow!

Alistair 

On 26 Jun 2012, at 18:04, RichardWarren wrote:


  I am not sure if I am missing something here as I fail to understand the problem everyone seems to have.

  We often get asked to validate quite small sites. For example, a few weeks ago I did an EU project site (www.tepsie.eu) with 28 pages. By working through each page I was able to score the whole site in less than an hour. If I had used a sampling technique as you suggest it would take me at least half a day just to set up and record the samples!

  On the other hand we have just finished the FT website (www.ft.com) where sampling was essential as it has just over 32,000 pages.

  It is all a matter of horses for courses.

  Richard

  From: Kathleen Wahlbin
  Sent: Tuesday, June 26, 2012 1:38 PM
  To: 'Shadi Abou-Zahra' ; 'Alistair Garrison'
  Cc: 'Eval TF'
  Subject: RE: All pages

  Hi -
   
  During our conversations early on, we talked about using a sampling method to review a set of pages on the site and then automated tools to check the full site.  Is the recommendation now to check all pages manually? 
   
  If we are suggesting checking all pages manually, then I think we need to be careful about what this means for different types of websites/web applications. Here are two situations (and I am sure there are many more that we could come up with):
   
  - For applications, a page could have many different variations depending on the data or options selected. Do all of these different variations need to be checked?
   
  - For database-driven websites, there may be a lot of different pages but they may all use the same template, and the data or content of the page may be the same. In this case, if the content is added to the page in the same way, then an evaluator should be able to test just one of these pages rather than the full set.
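
  As an illustration of the template point above, here is a minimal sketch (an assumption about how this could be scripted, not an agreed part of the methodology) that groups crawled URLs by a normalised path pattern so that one representative page per template group can be checked manually; the URLs and the ID-replacement rule are hypothetical.

  import re
  from collections import defaultdict

  def template_key(url):
      """Normalise a URL path by replacing numeric parts with a placeholder."""
      return re.sub(r"\d+", "{id}", url.split("?")[0])

  def group_by_template(urls):
      """Group URLs that appear to share the same page template."""
      groups = defaultdict(list)
      for url in urls:
          groups[template_key(url)].append(url)
      return groups

  if __name__ == "__main__":
      crawled = [  # made-up URLs, for illustration only
          "https://example.org/news/2012/06/25/item-1038",
          "https://example.org/news/2012/06/26/item-1041",
          "https://example.org/about",
      ]
      for pattern, pages in group_by_template(crawled).items():
          print(pattern, "->", pages[0], "(%d pages share this template)" % len(pages))

  The evaluator would still spot-check a few pages per group, but this keeps the manual effort proportional to the number of templates rather than the number of pages.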
   
  Kathy
   
  -----Original Message-----
  From: Shadi Abou-Zahra [mailto:shadi@w3.org] 
  Sent: Tuesday, June 26, 2012 6:21 AM
  To: Alistair Garrison
  Cc: Eval TF
  Subject: Re: All pages
   
  Hi Alistair,
   
  I'm not sure what you mean by overkill. Maybe it is not economically feasible to check all pages, but ideally all pages are checked before making an accessibility statement about them (especially for small websites, where templates, content management systems, and other quality assurance procedures are often less sophisticated).
   
  We should bear in mind that no matter how robust a sampling method is, there is always a possibility that an evaluator misses critical parts of the website through sampling. Sampling is always an approximation, but ideally it is "close enough" to reality except for a few edge cases.
   
  What is the problem with saying something like "if you can check all the pages then please go ahead and ignore the sampling section"?
   
  Sidenote: A production line already assumes large amounts of products that are produced in the same way, so that sampling becomes effective for quality assurance. However, not all websites are developed this way, and most websites actually resemble a handicraft store... ;)
   
  Regards,
     Shadi
   
   
  On 26.6.2012 11:56, Alistair Garrison wrote:
  > Hi Shadi, Richard,
  >
  > Re-reading your emails, you seem to be looking at conformance evaluation from a new viewpoint (maybe rightly / maybe wrongly?) compared with the one we seem to have adopted to date.
  >
  > Viewpoint 1 (to date) is an evaluator wishing to see if a whole website conforms with WCAG 2.0.
  >
  > With regard to this viewpoint an evaluator doesn't need to check all pages, simply enough to confirm whether the website conforms or doesn't.  If there are several examples of content which pass a checkpoint and the same team has built the whole site, I suppose you can assume that the other relevant instances of content will also be ok, and vice versa for content with issues.  You would not think about checking all items on a production line, would you?
  > Historically most people seem to think this type of evaluation is efficiently and effectively achieved by sampling, which is reflected in their different methodologies.  Checking all pages could be seen as unnecessary 'overkill' even on small sites...
  >
  > Viewpoint 2 (new) is a website owner wanting to find and correct all faults in their website - in which case they would want to look at all pages no matter what the size.
  >
  > Either way, treating the concept as editorial only sounds to me a bit casual, but I'd be interested to hear other views...
  >
  > All the best
  >
  > Alistair
  >
  > On 26 Jun 2012, at 11:12, Shadi Abou-Zahra wrote:
  >
  >> I agree with this approach too. The default (and ideal) would be to check all pages. In cases where this is not practically feasible we provide a robust sampling procedure.
  >>
  >> This probably affects several sections, including the introduction, though probably only editorially.
  >>
  >> Regards,
  >>   Shadi
  >>
  >>
  >> On 25.6.2012 19:47, RichardWarren wrote:
  >>> Michael,
  >>> We are not suggesting "all or nothing".
  >>> We are saying that the preferred method is to validate all pages,
  >>> but if this is too large a task (which it typically will be for large
  >>> sites) then here is a sampling procedure which will ensure that
  >>> all important elements are covered.
  >>>
  >>> Thus owners of small sites that want to check their compliance can
  >>> skip the sampling process and get straight on with the method of
  >>> validating their site.
  >>>
  >>> Richard
  >>>
  >>> -----Original Message----- From: Michael S Elledge
  >>> Sent: Monday, June 25, 2012 3:34 PM
  >>> To: public-wai-evaltf@w3.org
  >>> Cc: Alistair Garrison ; RichardWarren ; Eval TF
  >>> Subject: Re: All pages
  >>>
  >>> Hi All--
  >>>
  >>> I agree with Alistair. We nearly always test a sample of pages in a
  >>> website. Although it would be ideal to test every page in a site, it
  >>> is impractical because of time and cost, especially if it is
  >>> performed manually. Many people reading our methodology will be
  >>> looking to apply it to their reviews, which out of necessity will be based on sampling.
  >>> The alternative, relying solely on automated checkers to review a
  >>> medium to large site in its entirety, is, I think we can all agree,
  >>> not viable, even with their improvements.
  >>>
  >>> We spent a significant amount of time describing sampling approaches
  >>> early in this process, so I'm surprised that the "all or nothing"
  >>> approach is still being debated. I may have missed something along
  >>> the way, however, so please forgive me if I did.
  >>>
  >>> Best regards,
  >>>
  >>> Mike
  >>>
  >>> On 6/25/2012 3:08 AM, Alistair Garrison wrote:
  >>>> Hi Richard,
  >>>>
  >>>> Reading the archive I see we have talked around the subject of
  >>>> sampling - but not actually discussed whether to evaluate all pages instead
  >>>> of a sample. Reading a number of emails, however, it becomes clear
  >>>> that we all seem to use some kind of sampling effort - hence its
  >>>> seemingly automatic acceptance to this point.
  >>>>
  >>>> To my mind, there are many reasons for adopting our reasonably
  >>>> straightforward sample-based approach (again, we have all mostly
  >>>> done something similar for years), even for smaller sites, over
  >>>> evaluating all pages. I suppose its lower cost in terms of time and
  >>>> effort, with the same actual benefits, is one of the top reasons for sampling.
  >>>>
  >>>> I'm also worried that the changes you suggest (would they also need a
  >>>> change to the Requirements docs?) at this stage will create a
  >>>> two-tier (all or sample) approach, forking our current work and
  >>>> possibly opening a big can of worms (like how do you realistically,
  >>>> and with very high confidence, find all pages in a website, what
  >>>> exactly is a small or medium site, etc...).
  >>>>
  >>>> I remain to be convinced, but I would be interested to hear the
  >>>> views of others.
  >>>>
  >>>> All the best
  >>>>
  >>>> Alistair
  >>>>
  >>>> On 22 Jun 2012, at 12:05, RichardWarren wrote:
  >>>>
  >>>>> Reasons for making the default position to include all pages
  >>>>> (entire website):
  >>>>>
  >>>>> 1) Taking the Internet (WWW) as a whole, the majority of sites are
  >>>>> quite small (100 or so pages), typically things like "Mum & Pop"
  >>>>> stores, SME profiles, personal or project websites.
  >>>>>
  >>>>> 2) Where this is practical a full evaluation is more reliable than
  >>>>> a sample.
  >>>>>
  >>>>> 3) Our brief is to deliver an evaluation methodology, not a
  >>>>> sampling methodology.
  >>>>>
  >>>>> 4) Reliable sampling is a complex procedure; if owners of
  >>>>> small/medium sites think they have to go through sampling, they
  >>>>> will give up.
  >>>>>
  >>>>> 5) A sampling procedure will only be required for large sites, so it
  >>>>> should be an option. The default should be to evaluate the whole
  >>>>> site. If the evaluator feels that is too large a task then s/he
  >>>>> should have the option to use a sampling procedure to help manage
  >>>>> the evaluation work load.
  >>>>>
  >>>>> My feeling is that we need to change the order of our text so that
  >>>>> sampling is offered as the option, not the full audit.
  >>>>>
  >>>>> Richard
  >>>>>
  >>>>> -----Original Message----- From: Alistair Garrison
  >>>>> Sent: Friday, June 22, 2012 10:39 AM
  >>>>> To: RichardWarren ; Eval TF
  >>>>> Subject: All pages
  >>>>>
  >>>>> Hi Richard,
  >>>>>
  >>>>> We were not able to debate the agenda item relating to "testing
  >>>>> all pages". Can you just remind me what was behind this issue?
  >>>>>
  >>>>> All the best
  >>>>>
  >>>>> Alistair
  >>>>>
  >>>>
  >>>>
  >>>
  >>
  >> --
  >> Shadi Abou-Zahra - http://www.w3.org/People/shadi/
  >> Activity Lead, W3C/WAI International Program Office
  >> Evaluation and Repair Tools Working Group (ERT WG)
  >> Research and Development Working Group (RDWG)
  >>
  >>
  >>
  >
  >
   
  --
  Shadi Abou-Zahra - http://www.w3.org/People/shadi/
  Activity Lead, W3C/WAI International Program Office
  Evaluation and Repair Tools Working Group (ERT WG)
  Research and Development Working Group (RDWG)
   
   
   
