Bottlenecks in the Rec track

The policy that would be the most difficult to change in our operations
is the Patent Policy; I propose that for the time being we consider it
as an unalterable constraint.

There is a window of opportunity to update the process document
(although I assume the updates will be constrained by what the Patent
Policy assumes of the process); and we have pretty much complete freedom
to modify internal rules (promulgated by the Team) and WG conventions.

Starting from that, the main schedule constraint for our work is that
the Patent Policy imposes an irreducible first window of 150 days
between FPWD and Recommendation; each Last Call after these 150 days
creates a new 60-day window.

So within the current Patent Policy, we could move from FPWD to Rec in
150 days; given how far we are from that kind of schedule, I think
it's safe to say that this is not a constraint we need to look at in the
short term.

The 60-day window after Last Calls is one that we hit more frequently,
in particular due to post-CR Last Calls (which is being discussed also
on the unfortunately member-only Chairs mailing list). There might be
room for improvement in that space.
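To make the schedule arithmetic above concrete, here is a minimal
sketch of how the two windows combine (the `earliest_rec` helper and
the dates are purely illustrative, not part of any W3C tooling):

```python
from datetime import date, timedelta

def earliest_rec(fpwd, last_calls=()):
    """Earliest possible Recommendation date under the two windows
    described above: an irreducible 150-day window after FPWD, plus
    a 60-day window after each Last Call."""
    earliest = fpwd + timedelta(days=150)
    for lc in last_calls:
        earliest = max(earliest, lc + timedelta(days=60))
    return earliest

# With no additional Last Calls, FPWD to Rec can be as short as 150 days:
print(earliest_rec(date(2012, 1, 1)))                      # 2012-05-30
# A Last Call late in the cycle pushes the earliest date out by its
# own 60-day window:
print(earliest_rec(date(2012, 1, 1), [date(2012, 5, 1)]))  # 2012-06-30
```

As the second call shows, it is the post-CR Last Calls (arriving well
after the initial 150 days have elapsed) that actually move the date.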

But looking at the overall standardization process, the stages that
create bottlenecks most often are:
* Last Call, due to a flood of comments
* CR, due to the need to build test suites

I think reducing the scope of our specs (as I alluded to in a previous
message) would go a long way toward reducing the time required for these
steps. But I have a couple of other suggestions that might also help.

Review flood
-------------
Every WG that wants or needs to review another group's work waits until
Last Call, because that's the only signal we know how to use: we know
the spec is stable, and we know we need to pay attention to Last Calls.

A much better process for reviews would be:
* at FPWD time, the producing Working Group contacts the Working Groups
that it wants (or needs) reviews from, and requests that they identify
which aspects of the spec they'll want to review

* the said WGs can take a high-level view of the spec and identify the
general aspects that are likely to be relevant to them; at the same
time, they should probably give rough guidance or requirements for the
producing WG to follow

* as soon as work on a given aspect stabilizes (i.e. the features it
relates to have reached the equivalent of "Last Call" stability), the
producing WG asks the relevant WGs for their reviews

Using such a process, in many cases a WG could reach Last Call with far
fewer reviews still outstanding (and, as a result, likely far fewer
comments).

Testing at CR
-------------
Most groups start working on their test suite at Last Call (if not CR),
because the spec is too unstable before that, and the test cases would
need to be updated too frequently.

But in practice, there are a number of useful tasks that can be
conducted well before the spec stabilizes and that reduce the cost of
starting the test case development work:
* define a test harness and the test development/contribution process
* work with implementors that are prototyping the spec, and ensure the
tests they might develop in the process can be integrated in the said
harness
* start working on test cases for each feature as soon as the said
feature seems to have reached stability

Now, that last point may mean that some work on test cases will have
been partially wasted (e.g. if a purportedly stable feature is removed
or changed entirely); but unless that work slowed down the work on
getting to CR, this remains at worst a zero loss in the schedule (and in
most cases, a net gain since the experience gathered in building the
said cases would probably have had a positive impact on the spec
itself).

I think the test harness definition and the identification of candidate
implementers can be done as soon as FPWD (probably sooner), since by
then you have a reasonably good idea of what the tech will consist of.

In my experience with testing, a lot of the difficulty lies in finding
contributors, and much of that difficulty lies in getting people to
understand how to build good test cases, and which test cases are
needed. These difficulties should be addressed well before the actual
work of test development starts.

WebApps is experimenting with the role of test coordinator, for which
these missions would be a natural fit.

Dom

Received on Tuesday, 20 March 2012 14:21:56 UTC