- From: Tobie Langel <tobie@w3.org>
- Date: Sun, 18 Aug 2013 18:20:10 +0200
- To: Peter Linss <peter.linss@hp.com>
- Cc: Ms2ger <ms2ger@gmail.com>, public-test-infra@w3.org
On Wednesday, August 7, 2013 at 6:44 PM, Peter Linss wrote:

> On Aug 7, 2013, at 9:07 AM, Ms2ger wrote:
>
> > On 08/07/2013 02:20 AM, Peter Linss wrote:
> >
> > > My fundamental point here is that if the test suite can't get a spec
> > > out of CR, then it has little utility to the WG developing the spec,
> > > who, at the end of the day, needs the specs to advance if they want
> > > their charter to get renewed or be able to work on "the next cool
> > > thing". Building an entirely new testing infrastructure that can't
> > > get a spec out of CR is, IMO, a big waste of time, and not something
> > > I'm signing on to help with. Don't get me wrong, I see the value of
> > > testing regardless, but if getting specs past CR isn't a primary
> > > focus of *this* effort, then we have a serious problem.
> >
> > FWIW, it's very much not a focus for me, and I don't see how that's a
> > problem. It may not be of a lot of utility to some WGs, but the
> > important part is (IMO) the utility to the interoperability of the
> > platform.
>
> It's not a problem for me if some individuals and even some WGs don't
> have getting to REC as a primary focus of their testing effort.
>
> My problem is that if the goals and focus of this work don't _allow_
> the CSSWG to use these tests and infrastructure to get our specs to
> REC, because some of our fundamental needs aren't being addressed,
> then we're going to be forced to withdraw from this effort and
> continue to develop our own tools that do. I would consider that
> outcome an epic fail for this project.

While the main goal of this new testing effort is NOT to help WGs move specs along the REC track, but to improve the quality and interoperability of user agents, I don't see why these two goals should be mutually exclusive. Quite the contrary, actually. More focus on testing, and better testing tools, documentation, and processes, should help WGs move specs along the REC track faster. Similarly, a process that's friendly to the requirements of working groups and to the need to move specs along the REC track should incentivize WGs to produce more and better tests. Everybody wins.

To the best of my knowledge, most working groups--with the notable exception of the CSSWG--haven't so far developed a dedicated testing toolchain to help them move specs along the REC track. (As a sidenote, that hasn't prevented anyone from shipping specs so far, so let's not get carried away: nothing here will prevent groups from moving specs along the REC track.) These groups, with little if any existing testing infrastructure, will benefit most from this new testing effort. They are not burdened by technical debt and can easily embrace a new, Git/GitHub-based solution.

For the CSSWG, and for other groups with an already strong investment in a pre-existing solution, the story is unfortunately a bit different, and the transition might prove more challenging. Here's why: the requirements to move specs to CR are the same for all WGs (they are set by the W3C process and the Director), but up to now the **methods** used to meet these requirements were group-specific. Since part of this testing effort's goal is to consolidate testing practices across WGs, adjusting some of these methods will undoubtedly prove necessary.

Note that while we've been able to considerably streamline the contribution/review workflow, we haven't yet tackled the specific needs of moving specs along the REC track. Those of you who have an interest in this area are welcome to brainstorm and experiment.
IMHO, the best solution will be lightweight, will stay close to the regular Git/GitHub workflow, and will share common infrastructure and processes with the rest of the testing project (e.g. piggy-backing on the WebDriver/Saucelabs test runner to gather data for implementation reports; a rough sketch of what that could look like is appended below) and across WGs. The HTML WG has experimented with using a dedicated branch for this. This is certainly a move in the right direction, but I'm sure we can go deeper than that.

That said, WGs are free to continue building their own tools and processes, and even--as you threatened--to withdraw from this effort altogether. But the desire of a particular WG to continue working with its group-specific solutions rather than adopting common ones should be measured against two things:

1) The inherent cost for the group of building a dedicated testing solution. WGs are meant to produce new technology, not test harnesses and processes.

2) The cost to the overall testing project (e.g. in added complexity) and, beyond that, to the interoperability of user agents and to the overall quality and appeal of the OWP.

Walking separate ways would be remarkably silly. Let's not do that. Let's also try to avoid bringing it up as a nuclear option; that doesn't help us make progress and just deepens the divide. Similarly, I think it would be beneficial for those of us whose main goal is interoperability to bear in mind the requirements of the REC track process and the benefit of the IP protection it provides. There's enough overlap for this to be mutually beneficial.

In order to move forward, I suggest the CSSWG collect a comprehensive list of what it considers requirements for this testing effort. Once this list exists, we can go through it and agree on which of those requirements are shared across WGs, which are specific to the CSSWG, and which are actually methods rather than requirements. From there we'll be able to define which sets of tools and processes can be shared, which need to continue being built by the CSSWG for its exclusive use, and which need to be modified for the greater good and the long-term benefit of all involved parties.

Thanks,

--tobie
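P.S. To make the "piggy-back on the WebDriver/Saucelabs test runner" idea a little more concrete, here is a rough, hypothetical sketch of what gathering implementation-report data through a remote WebDriver endpoint could look like. To be clear, this is not existing infrastructure: the endpoint, the test URL, and the assumption that we can scrape testharness.js's rendered results table are all placeholders for illustration.

    # Hypothetical sketch: run a testharness.js page in a remote browser
    # (e.g. one hosted by Sauce Labs) and collect per-subtest results
    # suitable for aggregation into an implementation report.
    import time
    from selenium import webdriver

    # Placeholder endpoint and test URL -- not real values.
    REMOTE = "http://USERNAME:ACCESS_KEY@ondemand.saucelabs.com:80/wd/hub"
    TEST_URL = "http://w3c-test.org/examples/some-test.html"

    driver = webdriver.Remote(
        command_executor=REMOTE,
        desired_capabilities={"browserName": "firefox", "platform": "Linux"})

    try:
        driver.get(TEST_URL)
        # testharness.js renders a results table once the run completes
        # (assuming its default HTML output is enabled); poll until it
        # shows up or we give up.
        rows = []
        for _ in range(60):
            rows = driver.find_elements_by_css_selector("#results tr")
            if rows:
                break
            time.sleep(1)
        # Skip the header row; remaining rows hold (status, name, ...).
        report = []
        for row in rows[1:]:
            cells = row.find_elements_by_tag_name("td")
            if len(cells) >= 2:
                report.append({"status": cells[0].text,
                               "name": cells[1].text})
        print(report)  # one record per subtest, per browser
    finally:
        driver.quit()

The point is simply that a thin script on top of a hosted WebDriver service would give every WG per-subtest pass/fail data without any group-specific machinery; a real runner would of course also need to handle reftests, timeouts, multiple browsers, etc.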
Received on Sunday, 18 August 2013 16:20:21 UTC