Features vs specs

(this is very long; I think it is important too, so I hope my readers
will excuse the length)

Hi,

To improve our process (and by that, I guess we mean our
standardization process), it seems reasonable to assert that we first
need to understand its goal.

After some thinking, it seems to me that the goal of the overall W3C
standardization process is “to get new features interoperably deployed
on the Web on a Royalty-Free basis”.

If (possibly a big "if") that goal is a fair characterization, then our
process and day-to-day operations are focused on specs, while our goal
is focused on implemented features. In other words, we focus on the
means to our end (the specs) instead of the end itself (implemented
features). I believe this chasm is the root of a large number of
difficulties in our process (incl. delays, review difficulties, etc.).

I'll give a few examples of how this hurts us.

Normative dependencies delays
-----------------------------
We have a policy that a spec can't move past PR until all its normative
dependencies (presumably in W3C) are at least at CR.

The reasoning behind that policy is that if something you depend on is
not stable, the interpretation of your spec is not stable either.

In practice, very few specs depend on the entirety of another spec. In
the cases that have hurt the most recently, a large number of specs
depend on HTML5 and WebIDL; but in many (if not all) cases, the
dependency on HTML5 is on a well-defined sub-part of the spec; likewise,
most specs depend on a subset of WebIDL rather than its entirety.

By tracking dependencies at the granularity of whole specs, we create a
huge barrier to progress in cases where in practice there is already
likely a very high level of interoperability; instead of focusing
efforts on the specific pieces of the dependency where we need to look
for potential interoperability bugs (i.e. well-defined subsets of the
specs in question), we insist that everything in those specs progress
through the standardization process.

The recent policy that loosens the requirement on referencing HTML5 is a
step in the right direction, but I believe we would benefit greatly from
making it the default policy rather than the exception.

Review process
--------------
Most groups who review other groups' specs (e.g. I18N or WAI) wait until
Last Call to make their reviews, since they don't want to review stuff
that is likely to change a lot.

But in practice, many of these reviews will focus on a subset of the
spec rather than its entirety: I18N will focus on features that have an
impact on internationalization, WAI on the ones that have an impact on
accessibility.

Some of these features reach a stable design much earlier than the
rest of the spec; but they won't be reviewed until the entirety of the
spec stabilizes.

Testing
-------
Similarly, most groups wait until CR (in the best cases they'll start at
LC) to develop their test suites; but again, many of the relevant
features have been stable for a very long time by the time that work
starts. In other words, it could have started much earlier if we had
identified that stability.

Process for handling changes
----------------------------
The way the process is currently applied implies that each time a
post-LC spec is substantially changed (which I think is interpreted as
affecting how an implementation would work), it needs to go back to LC.

In most cases, the change only affects a couple of features of the spec;
but the whole spec goes back to LC, and the whole spec can get new
comments that need handling/reviewing, etc.

Also, some of these changes are deemed substantive, but aren't in
practice: I think a better test of whether a change is substantive
would be whether it impacts *interoperability* rather than conformance.
In other words, if a spec changes to reflect what most implementations
are already doing (and if the group agrees this is a reasonable thing to
do), that given change doesn't reduce but instead increases
interoperability, and as such shouldn't be "punished" by going
back to LC.

Stability enforcement
---------------------
One of the strongest policies we enforce on our specs is the stability
of their dated versions; the reasoning for it is to facilitate reviews,
implementation schedules, etc.

In practice, these freezes could be a lot less drastic if we approached
them at the feature level: implementers/reviewers could ask for a
specific feature to be frozen (while they implement/review it), without
requiring the whole document in which that feature is written up to be
frozen.

RF commitments
--------------
Because our process focuses on specs that bundle features at very
different levels of deployment, some very widely deployed features don't
get RF protection until long after their deployment. This creates big
risks (for implementers, but also for developers).

What's a feature
----------------
Now the $1B question is: what's a feature? I believe the answer should
again be driven by our goal: a feature is something that several
implementers are working on releasing (or have released).

Obviously that's not a 100% watertight definition, but in many cases
it's probably a good guide for a group to determine what "features" it
is working on. And in most cases, a spec defines a lot of features
(which vendors will in most cases deploy at different paces).

Should we align specs and features?
-----------------------------------
In some cases, we might; I believe the good progress of the Web
Performance WG is tied to the fact that they have been able to do that.

But it's probably not always possible; also, the current overhead of
publishing a spec probably doesn't make it a very tempting option (for
now, at least).

This mail is long enough; I'll put in a later message (maybe tomorrow)
some of the ideas that I have to help bridge that chasm.

Dom

Received on Monday, 19 March 2012 16:31:37 UTC