Re: Features vs specs

Hi Dom,

I am glad you chimed in, because you have a very good grasp of the process.

And I really like your points.

My proposal for experimental specs at the last AC meeting, which I guess in the end led to the establishment of this group, pretty much rests on what you explain to be a feature based focus.

That is exactly what I meant, but you said it better.
There are always portions of a spec that are stable enough.

I had suggested perhaps simply labeling these "features" as stable such that implementers can move on with a higher degree of certainty that that portion will not change anymore.
Will there be a 100% guarantee of stability?
Of course not, but expecting that would be unreasonable.
You expanded that into the review and testing portion, which is absolutely correct.

In that sense, getting to your question of what a feature is, I propose applying that label to any given section of a spec that the WG views as a feature and may declare stable of its own volition.  They would know best and could decide best.

Kai



> -----Original Message-----
> From: Dominique Hazael-Massieux [mailto:dom@w3.org]
> Sent: Monday, 19 March 2012 17:31
> To: public-w3process@w3.org
> Subject: Features vs specs
> 
> (this is very long; I think it is important too, so I hope my readers
> will excuse the length)
> 
> Hi,
> 
> To improve our process, (and by that, I guess we mean our
> standardization process), it is probably reasonable to assert that we
> need to understand its goal.
> 
> After some thinking, it seems to me that the goal of the overall W3C
> standardization process is “to get new features interoperably deployed
> on the Web on a Royalty-Free basis”.
> 
> If (possibly a big "if") that goal is a fair characterization, our
> process and day-to-day operations are focused on specs, while our goal
> is focused on implemented features. In other words, we focus on the
> mean
> to our end (the specs), instead of the end itself (implemented
> features). I believe this chasm is the root of a large number of
> difficulties in our process (incl. delays, reviews difficulties, etc).
> 
> I'll take a few examples on how this hurts us.
> 
> Normative dependencies delays
> ---------
> We have a policy that a spec can't move past PR until all its normative
> dependencies (presumably those in W3C) are at least at CR.
> 
> The reasoning behind that policy is that if something you depend on is
> not stable, the interpretation of your spec is not stable either.
> 
> In practice, very few specs depend on the entirety of another spec. In
> the cases that have hurt the most recently, a large number of specs have
> a dependency on HTML5 and WebIDL; but in many (if not all) cases, the
> dependency on HTML5 is on a well-defined sub-part of the spec; likewise,
> most specs depend on a subset of WebIDL rather than its entirety.
> 
> With dependency granularity considered at the spec level, we create a
> huge barrier to progress in cases where, in practice, there is already
> likely a very high level of interoperability; instead of focusing
> efforts on the specific pieces of the dependency where we need to look
> at potential interoperability bugs (i.e. well-defined subsets of the
> said specs), we insist that everything in the said spec progress
> through the standardization process.
> 
> The recent policy that loosens the requirement on referencing HTML5 is a
> step in the right direction, but I believe we would benefit greatly by
> making it the default policy rather than the exception.
> 
> Review process
> --------------
> Most groups who review other groups' specs (e.g. I18N or WAI) wait until
> Last Call to make their reviews, since they don't want to review stuff
> that is likely to change a lot.
> 
> But in practice, many of these reviews will focus on a subset of the
> spec rather than its entirety: I18N will focus on features that have an
> impact on internationalization, WAI on the ones that have an impact on
> accessibility.
> 
> Some of these features have a stabilized design much earlier than the
> rest of the spec; but they won't be reviewed until the entirety of the
> spec stabilizes.
> 
> Testing
> -------
> Similarly, most groups wait until CR (in the best cases they'll start at
> LC) to develop their test suites; but again, many of the relevant
> features have been stable for a very long time by the time that work
> starts. In other words, it could have started much earlier if we had
> identified that stability.
> 
> Process for handling changes
> ---------
> The way the process is currently applied implies that each time a
> post-LC spec is substantially changed (which I think is interpreted as
> affecting how an implementation would work), it needs to go back to LC.
> 
> In most cases, the change only affects a couple of features of the spec;
> but the whole spec goes back to LC, and the whole spec can get new
> comments that need handling/reviewing, etc.
> 
> Also, some of these changes are deemed substantive, but aren't in
> practice: I think a better definition of a substantive change would
> relate to whether it impacts *interoperability* rather than conformance.
> In other words, if a spec changes to reflect what most implementations
> are already doing (and if the group agrees this is a reasonable thing to
> do), that given change doesn't reduce but instead increases
> interoperability, and as such shouldn't be "punished" by going back to
> LC.
> 
> Stability enforcement
> -------
> One of the strongest policies we enforce on our specs is the stability
> of their dated versions; the reasoning for it is to facilitate reviews,
> implementation schedules, etc.
> 
> In practice, these freezes could be a lot less drastic if we approached
> them at the feature level: implementers/reviewers could ask for a
> specific feature to be under freeze (while they implement/review),
> without requiring the whole document where that feature is written up to
> be frozen.
> 
> RF commitments
> --------------
> Because our process focuses on specs that bundle features at very
> different levels of deployment, some very well-deployed features don't
> get RF protection until long after their deployment. This creates big
> risks (for implementers, but also for developers).
> 
> What's a feature
> -----------------
> Now the $1B question is: what's a feature? I believe the answer is again
> to be driven by our goal: a feature is something that several
> implementers are working on releasing (or have released).
> 
> Obviously that's not a 100% watertight definition, but it's probably in
> many cases a good guide for a group to determine what "features" it is
> working on. And in most cases, a spec defines a lot of features (that
> vendors will in most cases deploy at different paces).
> 
> Should we align specs and features?
> ---------------
> In some cases, we might; I believe the good progress of the Web
> Performance WG owes much to the fact that they have been able to do
> that.
> 
> But it's probably not always possible to do it; also, the current
> overhead of publishing a spec probably doesn't make it a very tempting
> option (for now at least).
> 
> This mail is long enough; I'll put in a later message (maybe tomorrow)
> some of the ideas that I have to help bridge that chasm.
> 
> Dom
> 
> 

Received on Tuesday, 20 March 2012 08:16:05 UTC