Scripting Work for CSS Test Suites

So, Ming, you asked for a list of what scripting work needs to be
done for the CSS Test Suite and how long it'd take. Here's what's
on my list:

    A. Create test validation tools
        - Design a framework that can be reused, e.g.
            - for generating reports in HTML or plaintext
            - to qualify or filter tests during the build process
            - to automatically flag problems in the review system
        - Create validation functions for
            - filenames being in a valid format
            - filenames not being duplicated
            - titles not being duplicated
            - assertions not being duplicated
            - tests being well-formed XML, valid (X)HTML
                (call a RelaxNG validator)
            - tests using valid CSS (note: the CSS validator has lots
              of bugs, so these would be warnings, not errors)
            - other machine-checkable format-related stuff, see
                http://wiki.csswg.org/test/css2.1/format
            Note that some of these validation functions should not
            be run on all tests; which checks apply will depend on
            flags in the test. E.g. 'invalid' tests would skip CSS
            validation. (A rough sketch of this flag-based dispatch
            follows at the end of A.)
        - Create a validation report generator that reports errors
          and warnings, broken down by test, by contributor, and by
          anything else we find useful. Set it up on test.csswg.org.
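
        Here's a rough sketch, in Python, of how the flag-based
        dispatch could hang together. All the names and the flag
        convention here are hypothetical, and the filename pattern is
        just a stand-in for the real rules on the wiki page above:

            # hypothetical sketch of a pluggable check registry
            import re

            CHECKS = []   # list of (function, flags that exempt a test)

            def check(skip_flags=()):
                def register(fn):
                    CHECKS.append((fn, set(skip_flags)))
                    return fn
                return register

            @check()
            def filename_format(test):
                # stand-in pattern; see the wiki for the actual rules
                if not re.match(r'^[a-z][a-z0-9-]*-[0-9]{3}$', test['name']):
                    return ('error', 'filename not in valid format')

            @check(skip_flags=('invalid',))
            def css_validity(test):
                # would call out to the CSS validator; downgraded to a
                # warning because the validator itself is buggy
                return None

            def run_checks(test):
                for fn, exempt in CHECKS:
                    if exempt & set(test.get('flags', ())):
                        continue
                    result = fn(test)
                    if result:
                        yield (fn.__name__,) + result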

    B. Improve test indexing script to
        - be easily extended to other CSS modules
        - create a short report as well as a more detailed test coverage
          report that is split into sections, so it's not one gigantic
          unwieldy file (a sketch of the split follows at the end of B)
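
        The split could be as simple as grouping by spec chapter,
        assuming (hypothetically) that each indexed test records the
        section it covers:

            from collections import defaultdict

            def split_by_chapter(tests):
                # 'section' is an assumed field, e.g. '9.5.1'; one
                # report file per chapter keeps each one manageable
                chapters = defaultdict(list)
                for test in tests:
                    chapters[test['section'].split('.')[0]].append(test)
                return chapters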

    C. Create global test sink and modify build scripts accordingly
        - Organize test source tree to facilitate "profiling" model
          of building tests. (All tests are in a global test suite
          space, each test suite is a subset of this collection.)
        - Create test copying script that only pulls tests that belong
          to a particular test suite, based on the <link rel="help">
          links (sketched at the end of C).
        - Refactor build scripts accordingly.
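
        A sketch of the copying script, assuming the tests are
        well-formed XHTML (which A would enforce) and that matching a
        spec URL prefix is good enough to decide suite membership:

            import shutil
            import xml.etree.ElementTree as ET

            XHTML = '{http://www.w3.org/1999/xhtml}'

            def help_links(path):
                root = ET.parse(path).getroot()
                for link in root.iter(XHTML + 'link'):
                    rels = (link.get('rel') or '').split()
                    if 'help' in rels and link.get('href'):
                        yield link.get('href')

            def copy_suite(test_paths, spec_url, dest):
                # modules with several dated spec URLs would need a
                # list of prefixes here instead of a single one
                for path in test_paths:
                    if any(h.startswith(spec_url)
                           for h in help_links(path)):
                        shutil.copy(path, dest)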

    D. Pie in the sky: modify build scripts so that they can build
       incrementally, i.e. only build the tests and support files that
       have changed since the last build. This would let us rebuild on
       checkin, giving contributors immediate feedback when they
       (re)submit a test. (One possible shape for the change detection
       is sketched below.)
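
       One possible shape for the change detection: compare mtimes
       against a manifest left by the previous build. (The manifest
       name is made up, and content hashes would be more robust if the
       version control system doesn't preserve timestamps.)

           import json, os

           MANIFEST = 'last-build.json'   # hypothetical

           def stale_files(paths):
               try:
                   with open(MANIFEST) as f:
                       last = json.load(f)
               except IOError:
                   last = {}
               return [p for p in paths
                       if os.path.getmtime(p) != last.get(p)]

           def record_build(paths):
               # rewrite the manifest after a successful build
               with open(MANIFEST, 'w') as f:
                   json.dump(dict((p, os.path.getmtime(p))
                                  for p in paths), f)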

I can't give an accurate estimate of how long these things would take.

One requirement I'm going to impose is that a templating system, not
'print' statements, be used to generate output. (My favorite system
is Perl's Template Toolkit, which we are already using in the build
scripts.) That keeps a clean separation between the system's logic
and its output formatting.
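
To illustrate the principle (in Python terms, with the stdlib's
string.Template standing in for TT): all the markup lives in the
template, and the logic only supplies data. The test record below is
made up.

    from string import Template

    # the HTML lives in the template string, none of it in the code
    ROW = Template('<tr><td>$name</td>'
                   '<td class="$level">$message</td></tr>')

    def render_rows(results):
        return '\n'.join(ROW.substitute(r) for r in results)

    render_rows([{'name': 'sample-test-001',
                  'level': 'warning',
                  'message': 'CSS validator complaint'}])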

Whether the validation framework itself is written in Perl or some
other open-source interpreted language is not as much of a concern.

I have a draft of the filename validation in Python: the tentative
plan there was to dump validation results into datafiles and generate
HTML reports using Perl+TT in a separate process. I'm not convinced
that's the best way to go, especially given D, but duplication checks
do require a full pass before reporting any results... (A rough sketch
of that two-phase approach follows.)
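
For what it's worth, the duplication half could look like this in
whatever language ends up rendering the reports; the datafile format
(one JSON record per line) is just an assumption:

    import json
    from collections import defaultdict

    def duplicate_titles(datafile):
        # the datafile is dumped during the full validation pass
        seen = defaultdict(list)
        with open(datafile) as f:
            for line in f:
                rec = json.loads(line)
                seen[rec['title']].append(rec['name'])
        return dict((title, names) for title, names in seen.items()
                    if len(names) > 1)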

~fantasai
