- From: Carmelo Montanez <carmelo@nist.gov>
- Date: Mon, 12 Mar 2007 14:38:10 -0400
- To: Dominique Hazael-Massieux <dom@w3.org>, public-mwts@w3.org
Hey Dom:

This is great work. A few observations:

1) It would be good to add an option to terminate the execution and get the results at that point, or perhaps an option to view the report at any point.
2) Use extra metadata to augment the report.
3) Display the test name as the harness runs.
4) Perhaps add the ability to run multiple tests in the same window.

Please let me know if I can be of help during this development.

Thanks,
Carmelo

At 12:40 PM 3/8/2007, Dominique Hazael-Massieux wrote:
>Hi,
>
>Following up on our discussions last week about setting up an experimental
>test harness that would allow navigating through test cases and recording
>results, I've set up such a script at:
>http://www.w3.org/2007/03/mth/harness
>
>At this time, only the CSS MP link is functional - the DOM one can't be
>used due to the way the JavaScript displays the result.
>
>The harness drives the user through the set of known test cases,
>recording at each step which test case passes and which doesn't, leading
>to a results table such as:
>http://www.w3.org/2007/03/mth/results?ts=cssmp
>
>Of course, this is still very drafty and could use quite a few
>improvements; a few of the ideas that come to mind:
> * allow defining more context for individual test cases; at this time,
>the context for each test case is very crudely defined; I would need to
>define which headers the content should be sent with (e.g. content type)
> * attach more metadata to the list of test cases to make the results
>table more interesting
> * bind the data to abstract user agents rather than to a unique user
>agent string (I'm thinking WURFL should be able to help with this)
> * allow a given user to start from a given test case, giving hints on
>which test cases haven't been run on his/her device, or which test cases
>have received inconsistent results
> * allow skipping a test case (i.e. going from test n to test n+2) when a
>given test can't be run
>
>Feedback and suggestions welcome; please do keep in mind this is
>entirely experimental at this stage.
>
>Dom
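As a rough illustration of the flow described above - stepping through a list of test cases, recording a verdict per test case per user agent, skipping tests that cannot run, and summarizing the results as a table - here is a minimal sketch. It is not the actual harness script at w3.org (whose code is not shown in this thread); all class and function names are illustrative assumptions.

    # Illustrative sketch only -- not the /2007/03/mth/harness script.
    # Models the described flow: walk known test cases, record pass/fail/skip
    # per user-agent string, and render a simple results table.
    from dataclasses import dataclass, field


    @dataclass
    class TestCase:                      # hypothetical name
        name: str
        url: str
        content_type: str = "text/html"  # per-test "context" such as headers


    @dataclass
    class Harness:                       # hypothetical name
        cases: list
        # results[user_agent][case_name] = "pass" | "fail" | "skip"
        results: dict = field(default_factory=dict)

        def record(self, user_agent: str, case_name: str, verdict: str) -> None:
            self.results.setdefault(user_agent, {})[case_name] = verdict

        def next_unrun(self, user_agent: str) -> "TestCase | None":
            """Hint at the first case this agent has not run yet."""
            seen = self.results.get(user_agent, {})
            return next((c for c in self.cases if c.name not in seen), None)

        def table(self) -> str:
            """Plain-text results table, one row per user agent."""
            rows = []
            for ua, verdicts in self.results.items():
                cells = ", ".join(f"{c.name}={verdicts.get(c.name, '-')}"
                                  for c in self.cases)
                rows.append(f"{ua}: {cells}")
            return "\n".join(rows)


    if __name__ == "__main__":
        harness = Harness(cases=[TestCase("css-mp-001", "http://example.org/t1"),
                                 TestCase("css-mp-002", "http://example.org/t2")])
        ua = "ExamplePhone/1.0"            # a real harness would read the UA header
        harness.record(ua, "css-mp-001", "pass")
        harness.record(ua, "css-mp-002", "skip")  # a test that cannot be run
        print(harness.table())

Binding results to abstract user agents (the WURFL idea) would replace the raw user-agent string key with a device profile looked up from it.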
Received on Monday, 12 March 2007 18:54:59 UTC