- From: Bob Lund <B.Lund@CableLabs.com>
- Date: Wed, 19 Mar 2014 12:46:02 +0000
- To: James Graham <james@hoppipolla.co.uk>, "public-test-infra@w3.org" <public-test-infra@w3.org>
Hi James,

One of the original requirements, and something I think is useful, would be the ability to identify tests to run by an input file where you can explicitly identify multiple directories, multiple test cases in a directory, or exclude test cases in a directory that has been selected.

Otherwise, I liked the new tool's output more than the previous tool's, especially the JSON output.

Thanks,
Bob Lund

________________________________________
From: James Graham [james@hoppipolla.co.uk]
Sent: Wednesday, March 19, 2014 4:47 AM
To: public-test-infra@w3.org
Subject: Re: What's happened to the test framework

On 18/03/14 23:10, SULLIVAN, BRYAN L wrote:

> Well first, I do have the requirement that existing infrastructure
> not just disappear without notice... I am doing what I can to promote
> awareness and support of the TTWF program, and to have the only
> public, comprehensive view of the test framework just disappear does
> not instill confidence in those that would consider using and
> supporting this work.

I understand that the lack of prior notice here is suboptimal.

I don't understand what a "view of the test framework" is. Do you just mean a list of the tests? It should certainly be possible to generate such a list.

> Second, the overall requirements are already documented on the wiki
> http://www.w3.org/wiki/Testing under the task force pages.

I am asking specifically for *your* requirements. I really want to know what you are trying to do so that we can work out how best to meet your needs.

> Looking at [1] from a user perspective, I have absolutely no idea
> what to do. How do I know what this test runner allows me to do, and
> how to do it? There isn't even a link to a user guide. This is not
> something that we can promote realistically. The prior framework at
> least provided a clear overview of what tests were available, plus a
> lot of extra resources. It needed work for sure, but wiping it away
> is not progress.

As previously stated, it's very rough, and the UI does need some work. However, I think it's not too hard to guess that the button marked "start" starts running the testsuite. There are also controls to select which kinds of tests you would like to run and for giving a prefix match on the paths of the tests to run, mainly so you can run tests from a single specification.

When the tests are done you can get the results back in JSON format (modulo some browser bugs that Robin has a patch to work around).

That's pretty much all the tool does. There is also a script (which is truly undocumented) for creating something like an implementation report using the JSON output from more than one browser (this is the report.py script in the same directory as the runner; it is expected to be run from the command line, but building a web frontend would be trivial).

So, apart from documentation, what use cases do you have that are not met by the current tool?
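[Editor's illustration: the runner discussed in this thread does not have the input-file feature Bob asks for; the "include"/"exclude" syntax and test paths below are invented purely to sketch, in Python, what such a selection file and the filtering it drives might look like.]

    # Hypothetical sketch of a test-selection input file: include whole
    # directories, include individual tests, and exclude tests or
    # subdirectories from a directory that has been selected.

    def load_selection(lines):
        """Parse lines like 'include /dom/' or 'exclude /dom/ranges/'."""
        includes, excludes = [], []
        for line in lines:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blank lines and comments
            verb, path = line.split(None, 1)
            (includes if verb == "include" else excludes).append(path)
        return includes, excludes

    def select_tests(test_paths, includes, excludes):
        """Keep tests under an included prefix unless an exclude prefix matches."""
        return [t for t in test_paths
                if any(t.startswith(p) for p in includes)
                and not any(t.startswith(p) for p in excludes)]

    if __name__ == "__main__":
        selection = [
            "# run all DOM tests except the ranges directory, plus one extra test",
            "include /dom/",
            "exclude /dom/ranges/",
            "include /XMLHttpRequest/send-blob.htm",
        ]
        tests = ["/dom/nodes/Node-nodeName.html",
                 "/dom/ranges/Range-cloneContents.html",
                 "/XMLHttpRequest/send-blob.htm",
                 "/html/semantics/forms/the-form-element/form-elements.html"]
        includes, excludes = load_selection(selection)
        print(select_tests(tests, includes, excludes))
        # -> ['/dom/nodes/Node-nodeName.html', '/XMLHttpRequest/send-blob.htm']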
Received on Wednesday, 19 March 2014 12:46:37 UTC