- From: Ms2ger <ms2ger@gmail.com>
- Date: Wed, 07 Aug 2013 11:22:51 +0200
- To: Dirk Pranke <dpranke@chromium.org>
- CC: public-test-infra <public-test-infra@w3.org>
On 08/07/2013 02:43 AM, Dirk Pranke wrote:
> Hi all,
>
> I am a relative newcomer to this group but I have been working off and on recently (quite a bit just now) getting the tests running as part of the automated tests for Blink and WebKit.
>
> I believe I'm probably missing quite a bit of context or history that makes it difficult for me to understand some of the design decisions and processes around getting tests written and submitted and run.
>
> So forgive me if this sounds like a brash question, but it's honestly one coming from ignorance and not meant to be snarky...:
>
> Who actually currently runs these tests, and how?
>
> As far as I know, no one in Blink (or WebKit) regularly runs any of these tests, even manually, with a few exceptions where we have manually imported some suites into our existing repos. It may also be the case that sometimes individual developers or spec editors have run some of the tests. From my limited conversations w/ Fantasai, I believe the situation is similar for Mozilla. I do not know about efforts inside Microsoft or at Opera, or at any other browser vendor or third party.
>
> Are there groups that actually do attempt to run the tests somehow on the different browsers? Does that somehow happen in Shepherd in a way I don't know about (or understand)?
>
> I would like to be able to usefully contribute to threads like "consolidating css-wg and web-platform-tests repositories" and talk about the pain points I'm hitting as I try to get the tests running, but it's hard for me to say useful things w/o knowing more about how others are using all of this. So, I'm looking to become educated?

Mozilla imports a subset of the web-platform-tests repository into its main repository (based on the MANIFEST files you might have noticed). The importing is completely scripted, including writing the annotations for failing tests. The code for that is available [1].
Those tests are run both in automation and by developers locally from there.

I understood Ryosuke Niwa was working on importing tests into WebKit before the fork; have you talked to him?

There are various tools to run tests semi-automatically [2-4].

The Shepherd tool [5] is a Test Suite Manager used by the CSS WG for reviewing; there are some review comments there, but in my personal experience those are largely ignored. This may be because there is no value in getting tests reviewed: the CSS WG only cares about tests as a way to get specifications to CR, and for that purpose, unreviewed (and probably incorrect) tests are used.

The group working on the web-platform-tests repository seems to primarily have another goal: improving interoperability of the web platform (between web browsers in particular), with little interest in the W3C process. The fact that most of those specifications are primarily developed in the WHATWG, and that a lot of contributors work in browser QA, may be factors in that. To achieve that goal, we have imposed a review-then-commit policy, which creates a stronger incentive for review: if tests are not reviewed, they're not in the main repository, and they're harder to run / import / ...

I hope this helps to sketch the current playing field in web standards testing; do not hesitate to get in touch if there's anything else I can help with, on this mailing list, or on IRC (irc.w3.org/testing or irc.freenode.net/whatwg).

Ms2ger

[1] http://mxr.mozilla.org/mozilla-central/source/dom/imptests/
[2] https://bitbucket.org/ms2ger/test-runner
[3] http://www.w3c-test.org/framework
[4] https://test.csswg.org/harness/
[5] http://test.csswg.org/shepherd/
Received on Wednesday, 7 August 2013 09:23:21 UTC