- From: James Graham <james@hoppipolla.co.uk>
- Date: Tue, 22 Apr 2014 17:56:52 +0100
- To: public-test-infra@w3.org
On 22/04/14 17:22, Giuseppe Pascale wrote:
> In no particular order, here is a set of questions I've heard from various
> people, plus some comments from me. Can you help address them? I would
> like to invite other IG participants to chime in if I forgot something:
>
> 1. The first question was about "where to find information on the W3C
> testing setup and material". Bryan tried to answer with the mail below.
> In short it seems to me that the starting point is
> http://testthewebforward.org/docs/. Please chime in if anything needs to
> be added.

Yes, that's the right site. The idea is to centralise all the useful
information there. Since it is documenting an ongoing software development
process, I have no doubt that the documentation could be improved. One
slight problem with our current setup is that the TestTWF docs are in a
separate repo, so it's easy to forget to update those docs when making
changes to e.g. testharness.js.

> 2. The TTWF website points to this post
> (http://testthewebforward.org/blog/2013/02/20/testing-the-open-web-platform.html)
> under "Want to learn more about our plans?". Is that post still 100%
> accurate? If not, it would be good to get an updated post/page about what
> TTWF is and what the plans around testing are.

I agree an update would be useful. We have come a long way in the last
year, and achieved many of the goals that Tobie set out.

> 3. One other question raised was about process: how you submit tests, how
> they get reviewed, approved, etc. AFAICS this is answered here:
> http://testthewebforward.org/docs/review-process.html. One thing that is
> not clear though is timing information and who does what. If an
> organization/company were to submit 100s of tests, who would be reviewing
> them? Is there any guarantee those would be reviewed at all? Or would you
> recommend that whoever intends to contribute tests also contribute
> reviews?

The situation is basically that no one is paid specifically to review
tests. However some people have jobs that allow them to spend some time
doing test review, and other people have been doing reviews in their spare
time. This can make it hard to cope with large influxes of review items,
particularly after TestTWF events. The data at [1] (look at the red area)
shows we were making headway in reducing the backlog until the recent
events. Obviously we would like to do better here, but the main problem is
a lack of people who are both qualified and inclined to do the work. For
companies looking to contribute to web-platform-tests, putting effort into
review is just as valuable as putting effort into writing tests.

If people are going to submit hundreds of tests, it is worth knowing that
there's a rule that internal review in a public location may be carried
forward to the test repository. For example, if a Mozilla developer makes
a patch that includes some code changes and adds some web-platform-tests,
an r+ in Bugzilla for the code+test changes would be enough to land the
test changes in web-platform-tests without requiring a second round of
review. Obviously if this doesn't work well for some entities (i.e. if
people start landing low-quality tests on the basis of such "review") we
will start a blacklist. That hasn't been a problem to date.

> 4. Let's assume some organizations/companies decide to contribute to the
> W3C effort. What are the plans when it comes to maintaining the tests
> that get submitted? Are there processes in place to make sure that if the
> spec changes, tests are "invalidated"? In other words, how can I know, at
> any given time, if a test suite for a given spec is still valid? And who
> is in charge of checking that tests are still valid when a spec gets
> updated? Also, are there ways to "challenge" a test, i.e. to say that a
> given (approved) test is in fact invalid?

It's very hard to automatically invalidate tests when the spec changes.
Even if we had lots of metadata linking tests to spec sections (which we
don't), it is quite common for a test to depend on many things other than
that which it claims to be testing. And requiring a lot of metadata adds
an unacceptable overhead to the test authoring process (I have seen cases
where people have had testsuites, but have refused to submit them to
common testsuites due to metadata overheads). In practice the way we
expect to deal with these things is to have implementations actually run
the tests and see what breaks when they are updated to match the new spec.

> 5. IIRC not all WGs are using the process/tools from TTWF. Is this
> documented somewhere? Will these other groups continue with their tools
> for the time being or is there any plan to merge the various efforts at
> some point?

CSS, at least, currently use different repositories. I think there is a
considerable advantage to everyone sharing the same infrastructure, but at
the moment there are no concrete plans to merge the repositories.

> 6. Do the tests include metadata that easily allows (at the very least)
> extracting the relevant tests for a given spec? Are these
> mandatory/checked/maintained?

The directory structure reflects the structure of the specs; each spec has
its own top-level directory and subdirectories within that correspond to
sections of the spec. Some tests include more metadata, but this is not
required. Where it has been added I expect it is often wrong. I am much
more interested in finding ways to automatically associate tests and parts
of specs, e.g. by instrumenting browsers to report which APIs are called
by each test, or by looking at code coverage.
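To make that last idea a little more concrete, here is a rough sketch of
the sort of instrumentation I mean. This is not an existing tool; the
wrapped APIs and the label strings are purely illustrative, and a real
harness would report back to the test runner rather than to the console:

// Wrap a few entry points before the test runs and record which APIs the
// page actually touches, so the results can later be matched against
// spec sections.
(function() {
  var called = [];

  function instrument(obj, name, label) {
    var original = obj[name];
    if (typeof original !== "function") {
      return;
    }
    obj[name] = function() {
      if (called.indexOf(label) === -1) {
        called.push(label);
      }
      return original.apply(this, arguments);
    };
  }

  // A couple of examples; a real harness would cover far more surface.
  instrument(Document.prototype, "querySelector", "dom-document-queryselector");
  instrument(XMLHttpRequest.prototype, "open", "xhr-open");

  // Report the collected API names when the test page unloads.
  window.addEventListener("unload", function() {
    console.log("APIs used: " + called.join(", "));
  });
})();

The point is that the test-to-spec mapping would then come from what the
tests actually exercise, rather than from hand-written metadata.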
> 7. Some people expressed an interest in organizing a TTWF event (or
> something similar) dedicated to TV, to better discuss and understand the
> needs of the TV industry. Do you think this would be doable? Is there
> already a calendar of next events and/or open dates? How would you
> recommend we go about this? (And do you think it would be useful in the
> first place?)

I am not the best person to answer this, but typically TestTWF events have
been about actually writing tests rather than discussions around test
writing. Having said that, some kind of event sounds like it could be
sensible. Previously the events have often been organised around some
existing W3C meeting, so that there is already a high concentration of
people in the area who can act as "experts". However it sounds like you
might want a slightly different kind of event, so this might not be the
right arrangement.

Feel free to chat on irc if you have any more questions.

[1] http://testthewebforward.org/dashboard/?#all