- From: Till Halbach <tillh@opera.com>
- Date: Tue, 27 Mar 2007 16:35:30 +0200
- To: "Allen Wang" <Xiaozhong.Wang@sun.com>, "Dominique Hazael-Massieux" <dom@w3.org>
- Cc: public-mwts@w3.org
On Sat, 10 Mar 2007 00:31:58 +0100, Allen Wang <Xiaozhong.Wang@Sun.COM> wrote:

> Hi Dom,
>
> This is a great start! It has the most important elements which I
> think a harness should have.
>
> Here are my basic ideas about the harness:
>
> - Each test has its own ID.
>
> - At the end of the test page, insert a "Pass" and a "Fail" link which
> return the result to the server so that the server can serve the next
> test. The server should dynamically include the test page and insert
> the links at the bottom of the page.
>
> Your harness also has the "Cannot tell" button, which I think is good
> to have too. However, I am not sure whether we should have links
> instead of buttons. Does every user agent support buttons?

Good point. Links are more basic than buttons, and experience says that
button implementations are often buggy, so links are preferred.

> I was thinking that the links can be inserted before the </body> tag
> and start with a <p> tag to separate them from the test page. It seems
> that for some of the pages, the navigation buttons get mixed with the
> test content.
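Purely to illustrate that insertion step, here is a minimal sketch in
Python; the test IDs, the tests/ file layout, and the /result endpoint
are made up for the example and are not the actual harness code.

  # Minimal sketch: serve a test page with Pass / Fail / Cannot tell
  # links spliced in just before </body>. Test IDs, file layout, and
  # the /result endpoint are illustrative, not the real harness.
  from http.server import BaseHTTPRequestHandler, HTTPServer
  from pathlib import Path

  TESTS = ["css-001", "css-002"]  # hypothetical test IDs

  def with_result_links(html: str, test_id: str) -> str:
      links = (
          "<p>"
          f'<a href="/result?test={test_id}&amp;verdict=pass">Pass</a> '
          f'<a href="/result?test={test_id}&amp;verdict=fail">Fail</a> '
          f'<a href="/result?test={test_id}&amp;verdict=cannottell">'
          "Cannot tell</a></p>"
      )
      # Insert before </body>; append if the tag is missing.
      if "</body>" in html:
          return html.replace("</body>", links + "</body>", 1)
      return html + links

  class Harness(BaseHTTPRequestHandler):
      def do_GET(self):
          # A real harness would track each session's position in the
          # suite; here we always serve the first test.
          test_id = TESTS[0]
          page = Path(f"tests/{test_id}.html").read_text(encoding="utf-8")
          body = with_result_links(page, test_id).encode("utf-8")
          self.send_response(200)
          self.send_header("Content-Type", "text/html; charset=utf-8")
          self.end_headers()
          self.wfile.write(body)

  if __name__ == "__main__":
      HTTPServer(("", 8000), Harness).serve_forever()

Since the links are plain <a> elements wrapped in a <p>, even very
limited user agents should handle them, and the paragraph keeps them
visually separated from the test content, as suggested above.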
> - At the beginning of the test suite, ask users to input the user
> agent information, like device model, browser name, version, etc.

The UA string as sent to the server would be OK, I guess.

Till

> - At the end of the test suite, we should show a summary page to the
> user.
>
> - The harness should have the capability of easily adding/removing
> tests, for example by changing a configuration file of the test suite.
>
> - Show the progress of the testing, i.e., how many tests have been
> completed and how many are left.
>
> There are also some advanced features which we could implement in the
> next phase of the harness:
>
> - The harness should be capable of dynamically including test pages
> that are retrieved from remote web servers.
>
> - At the end of each test, provide a text field to let the user
> describe why the test fails or why the result cannot be told.
>
> - Provide random access to the tests.
>
> - Provide access to the index of the test suite and to the results of
> executed tests at any point during testing.
>
> - Allow users to complete the testing in more than one session and let
> them retrieve test results after a disconnection.
>
> Thanks,
> Allen
>
> Dominique Hazael-Massieux wrote:
>> Hi,
>>
>> Following up on our discussions last week about setting up an
>> experimental test harness that would allow navigating through test
>> cases and recording results, I've set up such a script at:
>> http://www.w3.org/2007/03/mth/harness
>>
>> At this time, only the CSS MP link is functional - the DOM one can't
>> be used due to the way the Javascript displays the result.
>>
>> The harness drives the user through the set of known test cases,
>> recording at each step which test case passes and which doesn't,
>> leading to a results table such as:
>> http://www.w3.org/2007/03/mth/results?ts=cssmp
>>
>> Of course, this is still very drafty and could use quite a few
>> improvements; a few of the ideas that come to my mind:
>> * allow defining more context for individual test cases; at this
>> time, the context for each test case is very crudely defined; I would
>> need to define which headers the content should be sent with (e.g.
>> content type)
>> * attach more metadata to the list of test cases to make the results
>> table more interesting
>> * bind the data to abstract user agents rather than to a unique user
>> agent string (I'm thinking that WURFL should be able to help with
>> this)
>> * allow a given user to start from a given test case, giving hints on
>> which test cases haven't been run on his/her device, or which test
>> cases have received inconsistent results
>> * allow skipping a test case (i.e., going from test n to test n+2)
>> when a given test can't be run
>>
>> Feedback and suggestions are welcome; please do keep in mind that
>> this is entirely experimental at this stage.
>>
>> Dom

--
Till Halbach
Quality Assurance, Opera Software (www.opera.com)
Received on Tuesday, 27 March 2007 14:34:47 UTC