Summary of discussion on TSD TF review process

Hi,

I had an action item to send a summary of the discussion during the 
last teleconference to the list.
Several people said the proposal looked good and was well structured. 
Below are some comments and questions.

* Step 3 says: "a task force participant receives the assignment to 
carry out an initial review". Who makes the assignment? Here I refer 
back to an earlier discussion (I don't have a reference) in which we 
said that people take test samples in batches of, say, five and 
review them.

* The above leads to the question of how we record which test samples 
are assigned to whom. We could do that by means of an additional 
element in TCDL (see the sketch below), but that may give outsiders a 
misleading impression (one person in charge of the whole review 
process?), even if we make that element optional, hide it from the web 
view and remove it at the end of the process.
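
For illustration only, such an element could look as follows (the 
element name and attributes are invented for this sketch, not taken 
from the TCDL schema):

   <!-- hypothetical optional element in a test sample's TCDL file;
        hidden from the web view and removed at the end of the
        review process -->
   <reviewAssignment assignedTo="(TF participant)"
                     batch="2006-11-28"
                     status="under review"/>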

* In addition to who reviews a test sample in steps 2 and 3, we need to 
record other metadata such as review comments, proposals to accept or 
reject, and possibly metrics about the extent to which a test sample 
meets the criteria in the checklists.
We could just send things to the mailing list, but then the data may 
become hard to keep track of.
We could also use a Wiki (like the WCAG WG does), for example with a 
table where rows represent test samples and columns represent the 
assigned TF participant (who has been assigned what), the contributor 
of the test case, review comments, links to strawpoll results, and so 
on; see the sketch below. 
If metrics are really important, a database seems more useful (but 
also less flexible than a Wiki).
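
A rough sketch of such a table (all entries are placeholders):

   Test sample | Assigned reviewer | Contributor | Comments | Strawpoll
   ------------+-------------------+-------------+----------+----------
   (sample 1)  | (participant A)   | (name)      | (link)   | (link)
   (sample 2)  | (participant B)   | (name)      | (link)   | (link)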

* In step 4 (Online Strawpoll), should "Checklist for Structure 
Reviews" be "Checklist for Content Reviews"?

* If we use WBS forms for strawpolls, is it reasonable to expect that 
every task force participant answers the strawpoll, and do strawpolls 
have time limits? In the WCAG WG, strawpolls close a few hours before 
the teleconference to give the chairs sufficient time to prepare for 
the teleconference. We could use the same approach in the task force. 
We could also define a "quorum" for the strawpolls and decide to reopen 
a strawpoll if the number of responders is too low. (This proposal 
sounded good to people in the teleconference.)

* Steps 3 and 4 are the same except for who does the review. Can these 
steps be merged?

* We should test the review process with a real test sample. That would 
help us see, for example, whether all the criteria in the checklists 
are
clear and unambiguous (e.g. "files are valid in their use" and "no 
unintentional broken links").


URLs:
* 28 November 2006 minutes: http://www.w3.org/2006/11/28-tsdtf-minutes
* WCAG 2.0 Test Samples Development Task Force (TSD TF) Review Process: 
http://www.w3.org/WAI/ER/2006/tests/process
* Conformance Test Process For WCAG 2.0: 
http://www.w3.org/WAI/GL/WCAG20/tests/ctprocess

Best regards,

Christophe

-- 
Christophe Strobbe
K.U.Leuven - Department of Electrical Engineering - Research Group on 
Document Architectures
Kasteelpark Arenberg 10 - 3001 Leuven-Heverlee - BELGIUM
tel: +32 16 32 85 51
http://www.docarch.be/ 

Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm

Received on Wednesday, 29 November 2006 11:24:55 UTC