Outcome of survey - comments on use of Methodology

Dear EvalTF,

We have now received 10 surveys (9 full) for the open and closed evaluation. The responses are not enough to compare results for the success criteria, but most evaluators did take the opportunity to provide information about how they experienced the practical use of the methodology. Below are the comments; I have also attempted a first summary of them.

This week, we will map the comments to the DoC and to the discussions we have already had in our telcos and on the list.

# Q16: Accessibility Support Baseline

Multiple evaluators indicated that they find this section unclear and asked for more details about what to list here. We have already discussed this in our telco and concluded that we want to try a (short) list of questions that can be used to define the accessibility support baseline. Comments from the survey:

• Not clear: what does "accessibility supported" mean? / Not clear: what is an accessibility support baseline? What should I list? / I can imagine an "accessibility supported baseline" with regard to closed networks (use of special browsers and assistive technologies). But what if the baseline is supposed to be broader? Should I list all the browsers, assistive technologies (browser-screenreader combinations, ...) and user agents that should work well with the website? I think this is not really feasible for an evaluator.

# Q20: Please comment on how easy it was to follow the guidance for Step 2: Explore the target website.

Most evaluators indicated that this section was clear to them. One comment relates to the note about overlap in 3c and 4a. We have discussed possible solutions during the EvalTF telcos. Comments from the survey:

• Easy (x3) / It helped to do it in steps (common pages, functionality, ...). It was not difficult on this site because it is not very complex. / Step 2 was very easy to follow, especially because the 4x website was template-based and almost all of the webpages were very similar. The example lists of different webpage types and different types of functionality were helpful for determining which pages should be included for the audit. / In order to decide how important (to the function of the site) individual pages were, I needed to establish what 4x was and what it wanted this site to do. Once that was established, the site made sense and listing the common web pages, functionality and types of page was quite easy.

• The only aspect that caused slight confusion was Step 2C: Identifying Other Relevant Webpages, because the examples listed pages that may already be included from Step 2A or Step 2B since they contain key functionality. An example of this would be a contact page that has a form and is also the only page with a form, so it would be included for key functionality and not as an “other” page. Since the pages are not actually segregated by type, it does not particularly matter, but the overlap may cause some confusion.

# Q21: Other comments on Step 2: Explore the target website

Again the remark about the overlap that we plan to address, plus some additional comments and methods from the evaluators. One evaluator wrote: “I may be an old granddad - but I found it essential to understand the purpose of the site and the organisation behind it in order to be confident that I had identified all the important bits!!”. This should be covered in the first step, but could be clarified further. Some comments are covered in the methodology already, but in earlier or later steps. Comments from the survey:

• As "relied upon" mean: "the content would not conform if that technology is turned off or is not supported", I think that CSS shouldn't be an example in the Methodology Requirement 2.d explanation, because a website should work without css to be accessible.

• I would typically build up (and possibly modify as I progress) the sample while exploring the site.

• When you say (in the Common Webpages definition): "similar web pages that are typically linked from all other web pages (usually from the header, footer, or navigation menu of a web page)", do you mean all pages in the main menu (in this case: Home, Events, News, Programs, Counties, Awards/Scholarships, About, Join, Volunteer, Explore, Support)? Or just the pages 'typically' linked from every page? Also: what is 'typically'/'usually'?

• The inclusion of common pages can be helpful, but it should not be mandatory to include all common pages. If common pages are all built from the same template and have most or all of the same functionality, they may not offer any benefit in the audit when there are potentially other non-common pages with different functionality. Weighing how many common pages should be included should be up to the evaluator, and this should be explicitly stated.

# Q34: Please comment on how easy it was to follow the guidance for Step 3: Select a Representative Sample.

We received a number of comments on this section. Some people indicated that they had no problems with it, and some commented on the order of the sections. We have already discussed this in the telco and will work on a solution in the next Editor Draft. Also, again, some comments relate to the overlap already mentioned. Other comments (a sketch of a random selection procedure follows the list):

• It was fairly easy. / I think it is easy if step 2 is done thoroughly. / It is easy to identify the different types of pages (and states) but I don't know how many "common" pages I should select. / Step 3 was easy to follow because it states to take all pages identified in Step 2 and to include them all when creating a formal sampling strategy for the target website. The notes included with Step 3, such as Size of the website, Age of the website, etc., were very helpful when determining the scope of the sampling.

• The steps were quite logical, except for the distinction between exemplar pages and specific elements. See other comments.

• The letters of Step 3 do not correspond with the letters of Step 2. It was somewhat of a puzzle. / If you took notes during step 2, this is easy. Adding the pages from 2e before adding pages from 2b, c, d was a bit confusing.

• A PDF came up accidentally - I could have selected a PDF on purpose but did not intend to. The random procedure forced me to think about whether PDFs should be part of the sample (even though some success criteria are not really applicable to PDF files).

• It is sometimes tricky to find all representative pages: for example complex images, tables, or videos, if the number of pages in the website containing this kind of element is small. / We usually ask the client (commissioner) to provide a list of pages containing some specific types of content, which doesn't prevent us from looking ourselves too.

• In step 2e, WCAG-EM asks for any special pages for disabled users to be identified. These are not recorded in Step 3. The survey here asks for special pages when I think it means exemplar pages.

• "include all common pages"... - I do a sample of common pages? / Where is the difference between step 2 and 3? Step 2 means to identify the pages, step 3 select pages?

• In my opinion, a minimum random sample size of 5 web pages is too high. The cost of evaluating small websites gets too high! / We found Step 3 to be more harmful than helpful when creating the sampling strategy for the 4x website, for several reasons. The pages we received from the random sampling had absolutely no benefit when conducting the audit – they revealed no new information or findings regarding the accessibility of the website. Given that random sampling accounted for half of our final page list, half of the audit felt completely unnecessary. / Additionally, including all of the common pages and other relevant pages in the audit is usually not necessary and is likely a waste of resources and time. In the case of the 4x website, auditing all the common pages would most likely yield no additional findings compared to the exemplar pages. Common pages should be identified as in Step 2 to understand site structure, but their inclusion in the audit should be left up to the evaluator – currently the methodology states that all common pages listed in Step 2 should be included, which is not needed for the 4x website.
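The "random procedure" the comments refer to is not spelled out in WCAG-EM, which is part of the confusion. As a minimal sketch of one possible procedure (hypothetical URLs and a helper name of our own, not part of the methodology), the PDF option below illustrates the decision the evaluator describes:

```python
import random

# Hypothetical list of candidate URLs collected during Step 2 (Explore).
candidate_pages = [
    "https://example.org/",
    "https://example.org/events",
    "https://example.org/news",
    "https://example.org/contact",
    "https://example.org/annual-report.pdf",
]

def select_random_sample(pages, size, include_pdfs=True):
    # Optionally leave PDFs out of the pool, so the evaluator can
    # decide separately whether they belong in the sample.
    pool = [p for p in pages if include_pdfs or not p.lower().endswith(".pdf")]
    return random.sample(pool, min(size, len(pool)))

print(select_random_sample(candidate_pages, 2, include_pdfs=False))
```

Whether PDFs and other non-HTML resources belong in the pool is exactly the kind of choice the methodology could make explicit.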

# Q244: Provide any suggestions you might have to further improve WCAG-EM

We could reconsider combining sections 2 and 3. This was discussed in the telcos and during the face-to-face in San Diego. We also see the reappearance of the discussion about the level of detail of section 4; we already addressed that in our telcos. Other comments in the survey:

• I think another look at the way the document is structured is needed. Perhaps it isn't the easiest way to make a selection of pages when you have to do this in 3 different steps. I think it would be easier if steps 2a and 3a were placed right after each other, etc. I feel like this way of evaluating a website is some kind of loop which never ends. Step 5 is confusing: is this the document that is handed over to the client, or is it a document for the evaluator?

• I opted to do minimal reporting: if I found a failure, I did not report on any further pages for that SC, as per the note in 5a.

• WCAG-EM contains nothing on the actual SC evaluation and rating approach, which is the time-consuming thing. / Several SC, especially 1.1.1 and 1.3.1 but also 3.3.2, benefit from being broken down into a number of checkpoints. WCAG-EM could introduce this level of granularity but doesn't. / There is no indication of how to draw the line for content that is less than perfect. Pass or fail? Are incidental errors or editorial oversights allowed to slip through? / For checking some criteria, like headings, even the WCAG techniques do not provide sufficient guidance for deciding whether to call something a pass or a fail. / Own testing approaches can be referenced, but it is not clear whether that means the decision on the SC rating can then be safely based on that procedure, or whether it is assumed that the judgement is carried out just at the level of the SC without reference to any more precise techniques (a notion that we find hard to substantiate but that is repeatedly brought forward by the stalwarts of a 'pure' WCAG test that eschews reference to any particular technique, even Failures, as a guiding principle of judgement). We have said more than once that this appears to be ivory-tower thinking; in all practical cases, explaining a judgement would involve enumerating content instances and explaining why this or that instance has been considered a reason to fail an SC. Just not contemplating that means creating a smoke screen of idealised judgement that cannot be questioned or counter-checked with concrete (documented, operationalized) procedures.

• The separate evaluation of processes and other pages feels unnecessary; when evaluating, process pages are treated the same as “regular” pages.

• I'm sorry, due to lack of time I did not manage to fill in the whole questionnaire. / I can't imagine how evaluators decide on "pass" or "fail". If there is a single problem, is it "fail"? If not, how should I evaluate?

• While we are very confident in the accuracy of the results of the evaluation for the pages we reviewed, we rated our confidence in the ‘quality’ of the results as Low because of the set of pages we selected using the methodology, i.e., including all of the ‘common’ pages and ‘random’ pages. The methodology should not require a random sampling, because more often than not the random sampling will not provide any new or additional information compared to a purposive sampling. If random sampling is included, it should be very limited, to about 10-20% of the purposive sampling; i.e., a 10-page review includes 1 random page. If there is random sampling, the method of acquiring this sampling needs to be explicitly defined, even if there are a few options. In our case, we used someone unfamiliar with the website and web accessibility. / All common pages should not be mandatory when doing an audit. This should be left to the discretion of the evaluator, because there may be pages better suited for an accessibility audit. Because WCAG-EM does not specify how many pages should be in an audit (which should be specified – we usually do 5, 10, 15, or 20 depending on the project), including all the common pages may artificially increase the number of pages to an exhausting level with very little return on investment.
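To make the ratio in the last comment above concrete: a 10-20% random sample on top of the purposive sample, with a floor of one page, could be sized like this (a sketch of the commenter's suggestion, not current WCAG-EM text; the function name is ours):

```python
import math

def random_sample_size(purposive_size, ratio=0.1):
    # 10-20% of the purposive sample, never less than one page,
    # matching the commenter's example: a 10-page review adds 1 random page.
    return max(1, math.ceil(purposive_size * ratio))

for n in (5, 10, 20):
    print(f"{n} purposive pages -> {random_sample_size(n)} random page(s)")
```

A rule like this would also answer the earlier concern about a fixed minimum of 5 random pages making small-site evaluations too expensive.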


This concludes the comments on the use of the sections of WCAG-EM. We will link to the comments in the DoC early next week, and Shadi and I will propose new text for some sections in a survey.

Kindest regards,

Eric Velleman
