Re: Documentation update

On Jan 11, 2012, at 7:09 AM, Robin Berjon wrote:

> Hi Peter,
> 
> thanks for clarifying things, I've updated the docs accordingly.

Sure. Upon further review, some of the other fields need better descriptions as well. In general, the test metadata should be present in the test file itself and follow the guidelines set forth for the CSS test suite:
http://wiki.csswg.org/test/css2.1/format

Specifically: the title should be more than just the file name. The flags should describe the requirements of the test, or an attribute of the test that is significant to the framework, not the test itself; e.g. the fact that certain browser features are assumed to be present, or the required format(s) of the test. The assertion should not simply repeat the name of the test, but should list the assertions in the specification being tested.

Also, a planned augmentation is to let the spec links point to any anchor in the spec, allowing narrower targeting of testable assertions. This will go hand in hand with additional markup in the spec to identify those assertions in an automated way.

And you need to escape the '<' and '>' in the credits examples.
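
To put those points together, the head of a test file might look something like this (the values are made up for illustration, but the element names follow the format document above, and the anchor on the help link is the sort of narrower targeting I mentioned):

  <title>CSS Test: Margin collapsing between sibling blocks</title>
  <link rel="author" title="Jane Doe" href="mailto:jane@example.org">
  <link rel="help" href="http://www.w3.org/TR/CSS21/box.html#collapsing-margins">
  <meta name="flags" content="ahem HTMLonly">
  <meta name="assert" content="Adjoining vertical margins of sibling block boxes collapse.">

(Here "ahem" flags the assumption that the Ahem font is available, and "HTMLonly" marks the required format. And when a credit like "Jane Doe <jane@example.org>" appears in the HTML documentation itself, the angle brackets have to be written as &lt; and &gt;.)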

> 
> On Jan 10, 2012, at 19:41 , Linss, Peter wrote:
>> Note that the test suite build system from the CSS test suite automatically generates this manifest file with all the data in place. At some point this code will be generalized to be used for other suites as well.
> 
> Well the next part of the work that's on my plate right now is the integration of several existing test suites into the framework, so if there's an existing automated system to adapt it might be a more productive use of my time if I just went ahead and adapted it :) Do you have pointers and a list of things that need to be done to generalise it? I don't necessarily need much, just the general direction.

The test suite build code is at: 
http://hg.csswg.org/dev/w3ctestlib/

There isn't a comprehensive list of work for it yet (except a collection of notes on my machine and a bunch in my head), but I did set up a dev tracker for it (and the other test framework projects) at:
http://dev.csswg.org/

It's not really populated yet, but I've started entering a few items here and there.

The build code is also tightly coupled to the Shepherd project, which uses it to extract (and eventually manipulate) the test metadata. I don't think it's at a point where it makes sense to hand off much of the work yet; I think we'd be stepping on each other's toes too much. But hopefully soon.

A more likely first step toward using the build system on other test suites is to adapt the format of the tests to what the build system expects (documented in the link above). The most significant part is the test metadata. Note that when the CSS format guidelines were written the expected test input format was XHTML (the build system generates XHTML, HTML, and XHTML-Print versions from that); the build system can now accept HTML5 and XML input formats as well. It also has hooks to allow the metadata to be placed in a sidecar file in XML format (the same file name as the test, with a .meta extension). The fun part is going to be extracting metadata from script tests (which will require executing the script during the build). In addition to manifest generation and format conversion, the build system gathers tests from multiple source directories, consolidates them into one output directory, and generates human-readable index pages.
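
As a rough sketch of the sidecar idea (the element names here are invented for illustration, not a committed schema), a test foo.html could be accompanied by a foo.meta along these lines:

  <?xml version="1.0" encoding="UTF-8"?>
  <metadata>
    <title>Margin collapsing between sibling blocks</title>
    <link rel="help" href="http://www.w3.org/TR/CSS21/box.html#collapsing-margins"/>
    <flags>ahem</flags>
    <assert>Adjoining vertical margins of sibling block boxes collapse.</assert>
  </metadata>

the idea being that the build system picks that up in place of (or in addition to) metadata embedded in the test source itself.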

A planned change to the build system is the ability to build multiple test suites at once, using the specification links in the metadata to automatically place tests (and their related support files) into the proper test suite(s).
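
For example (links purely illustrative), a test whose metadata contains:

  <link rel="help" href="http://www.w3.org/TR/CSS21/visudet.html#the-width-property">
  <link rel="help" href="http://www.w3.org/TR/css3-box/#width">

would end up, together with its support files, in both the CSS 2.1 and the css3-box suites from a single build.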

> 
>>>  • Specifications can be imported from the CLI using a spec manifest — is this exposed to the Web interface? (I don't think so).
>> 
>> I don't believe so. I wrote all the CLI import scripts, Mike wrote the Web interface versions. 
>> 
>>> Should it be?
>> 
>> Possibly, or the web interface should be expanded to include the functionality.
> 
> On the face of it, it would seem straightforward to add manifest support to the Web interface; I'll add it if there's demand for it.
> 
>>>  • The test runs I could find seem to necessitate manual intervention (you see each test one by one, submit the result you see). Yet I see a number of mentions of automated test suite submission. Is this something that's planned for but not supported, or did I miss something? Is there agreement on how it ought to work?
>> 
>> There's some experimental support for automatic submission of script tests that use testharness.js. This will be expanded on. At some point there may also be automatic comparison of reference tests.
> 
> I've found the autosubmit stuff; I couldn't see it before because I thought the suite I was looking at was using testharness.js when it wasn't. And it seems that so long as testharness.js is used, there's no need to document anything specific: it should just work.
> 
> I was also wondering if we need to coordinate to keep our code bases in sync? It seems that right now there are no changes in your version that aren't in the W3C version, but the reverse isn't true (notably, at least one major bug has been fixed).

Yes, we do. My current plan is to backport the work you and Mike have done to my codebase, then backport some of the work from the Shepherd project into the harness (they use the same core library), mostly the login code (I'm also going to integrate the LDAP support in a cleaner fashion) and UI cleanups. Then you can re-sync your codebase from mine. After that I plan on improving the auto-submit code for scripted tests.


Peter

Received on Wednesday, 11 January 2012 16:55:24 UTC