Questions to ask ourselves if/when we use our draft methodology to evaluate some websites

Hi gang,

Yesterday on the EvalTF call we started discussing the idea of using the 
time at TPAC to put our methodology through a trial run by applying it 
to some websites.

I really like this idea.  It was incredibly helpful when a number of 
industry members did this during the TEITAC process, applying drafts of 
the TEITAC report/regulations we were considering to real-life products.

I think, however, if we do this we need to be very careful and 
thoughtful in how we approach it.

First, I think we need to make very clear to everyone involved & 
everyone we publish the results to that we are testing OUR draft 
evaluation methodology, and NOT the websites we are looking at.  The 
point is to figure out how well our methodology works, and NOT to 
criticize/critique websites.

Second, I think we need to have a set of questions in mind about the 
document - things in the draft that we are particularly looking at and 
evaluating.

Third, I think we need to have a sufficiently broad range of sites, to 
better evaluate how our work handles the "real world" diversity out there.

To that third end, I think the kinds of sites we should be testing include:

 1. A relatively homogeneous site (e.g. an on-line newspaper where most
    pages look largely the same)
 2. A very heterogeneous site (with a mixture of page styles, content
    types, etc.)
 3. A site where much of the content comes from elsewhere (3rd party
    content over which the site owner has little/no control)
 4. Web applications, including specifically single-page web apps that
    present many different "screens"


Returning to the second item, I think the key questions we should be 
asking about our document include:

 1. Does the methodology cover the situations arising from all of these
    websites?
 2. Are there significant parts or pieces of functionality of these
    sites that we aren't reaching (with our statistical sampling
    methodology)?
 3. Do all of the required parts make sense as required, and the
    optional parts as optional? (And we should carry out all of the
    optional parts for all sites as part of evaluating them.)
 4. Does the notion of templates make sense?
 5. Are the report(s) that we generate useful?  If so, are they useful
    to all of the potential "customers" of the methodology, or just
    some?  What changes might make the reports more useful?
 6. Do our sampling methods (as they may have developed by the time of
    TPAC) cover enough of the website to be reliable?  (A rough
    sampling sketch follows this list.)
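
To make that last question a bit more concrete, here is a minimal 
sketch (in Python, purely illustrative and not part of our draft 
methodology) of one way a page sample might be drawn from a crawled URL 
list: group URLs by a crude "template" key and pick a few pages from 
each group.  The function and parameter names (sample_pages, per_group) 
are my own assumptions for illustration.

    import random
    from urllib.parse import urlparse
    from collections import defaultdict

    def sample_pages(urls, per_group=3, seed=42):
        # Use the first path segment as a crude stand-in for a page
        # template/type, then draw a few pages from each group.
        groups = defaultdict(list)
        for url in urls:
            segment = urlparse(url).path.strip("/").split("/")[0] or "home"
            groups[segment].append(url)
        rng = random.Random(seed)
        sample = []
        for segment, members in sorted(groups.items()):
            rng.shuffle(members)
            sample.extend(members[:per_group])
        return sample

    if __name__ == "__main__":
        crawled = [
            "https://example.org/",
            "https://example.org/news/a", "https://example.org/news/b",
            "https://example.org/apps/editor", "https://example.org/about",
        ]
        for page in sample_pages(crawled, per_group=2):
            print(page)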


Going back to the first point - I think we should seek volunteer sites 
if possible, again making clear the purpose of our work.  I think we 
should NOT publish the problems found - though that may be a challenge 
when it comes to wanting to review the report results.  In that case, we 
should 
keep the sites anonymous in anything we publish, unless the site owners 
give permission for that.  We do not want to even give the impression 
that the W3C is in the business of evaluating others' websites.  Which 
also reminds me - perhaps we might run this on the W3C site (again after 
checking in with W3C management); that might be something publishable...


Regards,

Peter

-- 
Oracle <http://www.oracle.com>
Peter Korn | Accessibility Principal
Phone: +1 650 5069522
500 Oracle Parkway | Redwood City, CA 94065
Green Oracle <http://www.oracle.com/commitment> Oracle is committed to 
developing practices and products that help protect the environment

Received on Friday, 21 September 2012 17:59:26 UTC