Re: Followup to "Supergroups" message to AC Forum

> We've been discussing the Testing issue forever and it's time to
> acknowledge the fact Working Groups and WG Chairs have no control
> on the CR->PR phase since it mostly depends on the availability of
> a Test Suite and on implementation experience, contributions fully
> in the hands of the Membership.

I mostly agree. Chairs can't make tests happen. But chairs do have a say in
making sure that their WG is receptive to tests, and that the test
writing/submission/review process involves as little friction and as much
encouragement as possible. I don't mean this as a criticism of any chair
past or present, just that this is a topic that requires continuous
attention, and that from tools to processes to social interactions, keeping
friction low is just as important for tests as it is for specs (if not more
so, given the ongoing problem).

>> Among many other things, there's also probably an issue of social
>> prestige difference. If you do spec work, you get to attend all these
>> nice(?) meetings and fly around the world. If you do test work, the
>> action's over here on GitHub. If you lead spec work, you get to put your
>> name at the top of a fancy document the whole world will look at. For
>> tests, authorship can be found in commit logs... I am certainly not
>> claiming that prestige is the main reason people join a working group or
>> do spec work, but I would not be surprised if it did play a role in
>> tilting the balance.
> 
> I'm not sure this is the real problem behind Testing.

I don't think this is *the* problem. I think it is a factor among many, and
you listed a bunch of others (which I cut from this answer not because I
disagree, but because I am talking about something else). Like the other
problems, I think this one should be addressed. Here are a few random ideas
to give not prestige, but at least visibility:

  * Just like every spec has an editor in charge of the spec, every spec
    should have someone explicitly in charge of the test suite. It can be
    the same person as the editor, but it certainly doesn't have to be.

  * Every spec should have, at the very top, a prominent link to the
    corresponding test suite, even if it is empty.

  * Chairs could make it clear that "I am not sure how to test ***" or
    "a test was submitted for review and I am not quite sure what to make
    of it and want second opinions" are just as valid telecon / F2F topics
    as the equivalent spec topics.

I think framing tests as a high-profile deliverable, for which someone needs
to step up and lead it to completion, will create a different dynamic than
treating tests as one of those overhead things for which we need to
designate volunteers. Again, not a silver bullet, but I think it would help.

> One thing remains and Florian is right there: writing tests is less
> fun than writing a spec or coding and Product Managers tend to allocate
> time for internal tests but rarely for W3C tests.
> 
> I think we made two mistakes here:
> 
> 1. when we collapsed LCWD and CR into one single step, we should have
>   kept WD and not R. The words "Candidate Recommendation" give a
>   signal the document is approaching completion, which is clearly not
>   the case if nobody's working on a Test Suite.

I don't know if this would make much of a difference. Whether you like them
or not, "Living Standards" have shown that many people are perfectly willing
to implement things that have nothing near the word "Recommendation" on
them.

> 2. we are extremely bad at regressing documents on the REC track.
>   There should be a six months limit to CR on the REC track and any
>   document reaching six months + one day should automatically go back
>   to WD if the review to PR is not already started. All documents,
>   automatically, no exception.

Maybe. I think there is much to do in terms of carrots before we need to resort to sticks.

 - Florian
