Re: Test case review

From: Linss, Peter <peter.linss@hp.com>
Date: Tue, 10 May 2011 09:15:01 -0700
Cc: "public-test-infra@w3.org" <public-test-infra@w3.org>
Message-Id: <3169F67B-DF06-4E4A-A2F0-DCE23884B8BA@hp.com>
To: James Graham <jgraham@opera.com>
On May 10, 2011, at 2:14 AM, James Graham wrote:

> (Splitting the threads)
> On 05/10/2011 12:55 AM, Linss, Peter wrote:
>> On May 9, 2011, at 2:58 PM, James Graham wrote:
>>> On Mon, 9 May 2011, Linss, Peter wrote:
>>>> Shepherd is designed to be a web interface tightly integrated
>>>> with our test suite repository. It'll facilitate reviewing,
>>>> approving, and bug tracking of the test files as well as adding a
>>>> query and editing system for the test case metadata. There are
>>>> plans to also allow some degree of direct creation and editing of
>>>> the tests in the web ui. It will also manage the layout of the
>>>> test source files within the repository and integrate with our
>>>> build system.
>>> That sounds interesting. Are there details anywhere? How tied is it
>>> to CSS-specific assumptions (one test per file, metadata embedded
>>> in the test, etc.)?
>> There are some notes on our wiki at:
>> http://wiki.csswg.org/test/review-system
> Interesting. I have been thinking about similar issues and was 
> considering a slightly different design. It seems that your approach is 
> highly specialized for reviewing *tests*. That seems too specific to me 
> because there are a great number of other things that one may have in a 
> test repository that are not really tests in themselves. For example in 
> the HTML5 test repository we have all of:
> * Test harness files such as testharness.js
> * Data files and scripts to generate test cases from the data files e.g. 
> the html5lib tests
> * Substantial pieces of javascript code that run tests from a small set 
> of (also javascript) inputs e.g. the meta reflection tests
> * Substantial pieces of javascript that provide a 
> reference-implementation of some API (e.g. the atob tests)

The CSS test repository contains files other than tests as well. Our test harness is not in the test repository, but we do have:
* files referenced from tests, i.e. images and stylesheets
* reference pages (and files referenced from reference pages)
* scripts that generate tests
* apache server configuration files
* the test suite build code

So yes, Shepherd will be managing and providing a review system for files other than tests as well. It will do more with test files, such as handling the test metadata, but it will have to handle the other files too.

> All of these things need review. Therefore I would like a review system 
> that is rather more like ordinary code review. In particular the model I 
> had in mind was:
> * Contributor makes a number of commits
> * They create a review request for those commits
> * Any number of reviewers can create responses to the review request 
> where they provide comments on a specific set of lines in a specific 
> revision of the changed files
> * Contributor makes any necessary changes, makes a new commit, and adds 
> those commits to the review
> * Once the reviewers are happy with the changes, the review is marked as 
> approved, which causes the commits to be considered approved

That's essentially the same workflow we have planned (and what we've been doing by hand, over email, without this system).
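The workflow described in the bullet list above could be sketched roughly as follows. This is a hypothetical illustration only; all class and method names are invented and are not part of Shepherd or any other real tool.

```python
# Hypothetical sketch of the commit-based review workflow described above.
# All names here are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Comment:
    reviewer: str
    revision: str            # revision of the changed file being commented on
    path: str                # file the comment applies to
    lines: tuple             # (first_line, last_line) span of the comment
    text: str

@dataclass
class ReviewRequest:
    contributor: str
    commits: list = field(default_factory=list)   # commit ids under review
    comments: list = field(default_factory=list)
    approved: bool = False

    def add_commits(self, *commit_ids):
        """Contributor adds follow-up commits addressing review comments."""
        if self.approved:
            raise ValueError("review already approved")
        self.commits.extend(commit_ids)

    def comment(self, reviewer, revision, path, lines, text):
        """Any number of reviewers can comment on specific lines of a
        specific revision of the changed files."""
        self.comments.append(Comment(reviewer, revision, path, lines, text))

    def approve(self):
        """Marking the review approved causes all of its commits to be
        considered approved."""
        self.approved = True
        return list(self.commits)

# The workflow, step by step: commits -> review request -> comments ->
# follow-up commit -> approval.
review = ReviewRequest(contributor="alice", commits=["c1", "c2"])
review.comment("bob", "c2", "tests/foo.html", (10, 14), "Missing assertion")
review.add_commits("c3")          # fix made in response to the review
approved_commits = review.approve()
```

The key design point, as in the message above, is that approval attaches to a set of commits rather than to individual files, so harness scripts, data files, and generators can flow through the same review path as tests.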
Received on Tuesday, 10 May 2011 16:15:25 UTC
