Re: Spec organizations and prioritization

On Thursday, 22 March 2012 at 12:51 +0000, Marcos Caceres wrote:
> I agree with Anne here. You are assuming that modularisation solves
> the problem in the way CSS is doing it. It is also possible to
> "modularise" sections of a spec for stability (as the WHATWG  HTML
> spec does), without making separate specs. Some specs, like HTML5,
> make sense as a monolithic spec: breaking it up into lots of "modules"
> would just be "make work".

I think this characterization of "make work" only holds if the work
gets us nothing; if it gets us something (e.g. stronger RF protection),
then it's not make work.

Modularization for its own sake is make work; modularization built
around fast-tracking features that are already deployed and
interoperable seems like a good investment (even if a costly one).

> As a point of comparison: look at the interop in HTML5 feature sets
> and the size of the HTML5 spec relative to other "modules" and that
> might give you an indication of speed and progress… HTML5, for its
> size and scope, has moved at a pretty amazing speed by anyone's
> measure (and retained extremely high quality).

HTML5 is pretty amazing by many measures; but actual interop among the
various features that HTML5 defines varies a lot: it's great for some,
passable for others, and nonexistent for still others. The
modularization that I think would make sense for HTML5 is one that
focuses on the first set.

> > > Enforcing more rules on limited resources is a sure way to make them  
> > > go away. (Unless you lead by example and demonstrate the effectiveness,
> >  
> > I was suggesting that the CSS group was providing the example. Surely  
> > you don't want me to create a new group just for the purpose of setting  
> > an example.
> 
> I don't know if they are or not. Work in CSS still progresses at a
> generally normal pace (standardisation takes around 5 years on
> average, no?). W3C actually has all that data (how long it takes
> from FPWD to Rec… it would be good to get proper stats about how
> long on average the process takes and compare across groups).

I started to look into this the other day, and can share the CSV data
that I extracted in that early work; but I stopped when it became
obvious that I had no specific theory to test against the existing data
set, nor the right data for the theories I wanted to test.
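For what it's worth, the per-group averages Marcos asks about are easy
to compute once the transition dates are collected. A minimal sketch in
Python, assuming a hypothetical transitions.csv with one row per spec
and made-up column names (group, spec, fpwd, rec, dates in
YYYY-MM-DD form):

    # Sketch only: the file name and columns below are assumptions,
    # not an actual W3C data export.
    import csv
    from collections import defaultdict
    from datetime import date

    durations = defaultdict(list)  # group -> FPWD-to-Rec spans, in days
    with open("transitions.csv", newline="") as f:
        for row in csv.DictReader(f):
            fpwd = date.fromisoformat(row["fpwd"])
            rec = date.fromisoformat(row["rec"])
            durations[row["group"]].append((rec - fpwd).days)

    for group, days in sorted(durations.items()):
        avg_years = sum(days) / len(days) / 365.25
        print("%s: %.1f years on average (%d specs)"
              % (group, avg_years, len(days)))

The harder problem is the one noted above: nobody records the
characteristic I actually want to correlate against.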

My theory is: specs that stick close to implementation deployment can
fly through the Rec track. But I don't think that anyone has been
tracking that particular characteristic.

Dom

Received on Thursday, 22 March 2012 13:31:55 UTC