- From: Carlos Iglesias <carlos.iglesias@fundacionctic.org>
- Date: Wed, 29 Nov 2006 19:23:48 +0100
- To: "cstrobbe" <Christophe.Strobbe@esat.kuleuven.be>, <public-wai-ert-tsdtf@w3.org>
Hi group,

Some comments inline:

> * Step 3 says: "a task force participant receives the
> assignment to carry out an initial review". Who does the
> assignment? I here refer back to an earlier discussion (I
> don't have a reference) where we said that people take test
> samples in batches of e.g. 5 and review them.

I think this problem appears earlier, at step 2: "a task force
participant receives the assignment to carry out a review of the
structure of a test sample."

Additionally (if steps 3 and 4 are not merged), should the TF
participant of steps 2 and 3 be the same person, just to keep things
simple?

> * The above response leads to the question how we record
> which test samples are assigned to whom. We could do that by
> means of an additional element in TCDL, but that may give a
> misleading impression to outsiders (one person in charge of
> the whole review process?), even if we make that element
> optional, hide it from the web view and remove it at the end
> of the process.

IMO this information shouldn't be part of the Test Samples Metadata,
because it is just "administrative information" (it is not in the
submitted Test Samples) and of little interest once the Test Sample
gets its "final" status (although it could be relevant information to
record for a proper follow-up of the process).

> * In addition to who reviews a test sample in steps 2 and 3,
> we need to record other metadata such as review comments,
> proposals to accept or reject, and possibly metrics about the
> extent to which a test sample meets the criteria in the checklists.
> We could just send things to the mailing list, but then the
> data may become hard to keep track of.
> We could also use a Wiki (like the WCAG WG), for example with
> a table where rows represent test samples and columns
> represent TF participants (who's been assigned what),
> contributor of the test case, review comments, links to
> strawpoll results, etcetera.
> If metrics are really important, a database seems more useful
> (but also less flexible than a Wiki).

I'm in favor of using something more structured than just the mailing
list (Bugzilla, Wiki...).

> * In step 4 (Online Strawpoll), should "Checklist for
> Structure Reviews" be "Checklist for Content Reviews"?

I also think so.

> * If we use WBS forms for strawpolls, is it reasonable to
> expect that every task force participant answers the
> strawpoll, and do strawpolls have time limits? In WCAG WG,
> strawpolls time out a few hours before the teleconference to
> give the chairs sufficient time to prepare for the
> teleconference. We could use the same approach in the task force.
> We could also define a "quorum" for the strawpolls and decide
> to reopen a strawpoll if the number of responders is too
> low. (This proposal sounded good to people in the teleconference.)

I'm in favor of this approach, but I think we should keep an eye on the
definition of "quorum" if we want to guarantee a good P2P review (i.e.
if CTIC has proposed Test Cases, they should be reviewed by a minimum
number of TF participants outside of CTIC, and so on).
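To make that "quorum" idea concrete, here is a minimal sketch, assuming a quorum that counts both total responses and responses from outside the contributor's organisation. The function name, thresholds and example data are illustrative assumptions only, not TF policy or existing tooling.

```python
# Hypothetical sketch of a "quorum" rule for strawpolls.
# All names and thresholds are illustrative assumptions, not TF policy.

def strawpoll_meets_quorum(responders, contributor_org,
                           min_responses=5, min_external=2):
    """Check whether a strawpoll on a test sample can be closed.

    responders      -- dict mapping TF participant name -> affiliation
    contributor_org -- affiliation of the test sample's contributor
    min_responses   -- assumed minimum number of responses overall
    min_external    -- assumed minimum number of reviewers outside the
                       contributor's organisation (to guarantee P2P review)
    """
    external = [p for p, org in responders.items() if org != contributor_org]
    if len(responders) < min_responses or len(external) < min_external:
        return False  # too few responses: reopen the strawpoll
    return True

# Example: a test sample contributed by CTIC
responses = {"Carlos": "CTIC", "Christophe": "K.U.Leuven", "Shadi": "W3C"}
print(strawpoll_meets_quorum(responses, "CTIC"))  # False with these defaults
```

The point of the second threshold is that a sample could not reach quorum on votes from the contributing organisation alone.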
> * Steps 3 and 4 are the same except for who does the review.
> Can these steps be merged?

IMO, if the person who carries out the initial review at step 2 can't
send the Test Sample back to a previous state when a problem is found
(e.g. rejected?), then it makes no sense to have an initial individual
review separate from the group review. Additionally, what is supposed
to happen if we find any problems while checking the content?
Shouldn't there be more output options at step 4 than just "pending"
(e.g. rejected again?)?

> * We should test the review process with a real test sample.
> That would help us see, for example, if all the criteria in
> the checklists are clear and unambiguous (e.g. "files are
> valid in their use" and "no unintentional broken links").

I find the Structure Checklist pretty clear. By contrast, it may be
hard to agree on unambiguous criteria for the Content Checklist (e.g.
minimal and complete, unambiguous unit, etc.).

Regards,
CI.

--------------------------------------
Carlos Iglesias

CTIC Foundation
Science and Technology Park of Gijón
33203 - Gijón, Asturias, Spain

phone: +34 984291212
fax: +34 984390612
email: carlos.iglesias@fundacionctic.org
URL: http://www.fundacionctic.org
Received on Wednesday, 29 November 2006 18:23:59 UTC