
Re: Rechartering HTTPbis

From: Poul-Henning Kamp <phk@phk.freebsd.dk>
Date: Sat, 28 Jan 2012 00:37:35 +0000
To: Mark Nottingham <mnot@mnot.net>
cc: HTTP Working Group <ietf-http-wg@w3.org>
Message-ID: <7305.1327711055@critter.freebsd.dk>
In message <61A10D4D-53CE-473C-AD2A-DC4C0A508B94@mnot.net>, Mark Nottingham writes:

>>    2.  That ID SHALL be 29 pages or less.

>We're actually not too far from this, discounting the size requirement. 

Please understand that the size-requirement is crucial.

The current trend of "more text is better than less text" in
RFC-writing is killing internet standardization as a concept, because
implementing protocols becomes an increasingly onerous job.

In general, adding prose rather than formal specifications makes
standards more prone to misinterpretation, not less.

>The current plan is to gather proposals and evaluate in a few months; if 
>we can't get consensus, the WG will stop HTTP/2.0 work.

Ok, that was nowhere to be found in your original proposed schedule,
which just magically had a first draft in 4 months' time.

>BTW, Mike's current proto-draft of SPDY weighs in at 44 pages.

Well, he'll have to edit down then.

(SPDY is an absolutely worthwhile prototype, but as Fred P. Brooks
cautioned:  "Always throw the prototype away, you will anyway.")

>"Do it my way, or it's not worth our time?" Really?

Not my way:  Jon Postel's, Einstein's & Antoine de Saint-Exupéry's way.

I hate to be nasty about this, but the fact that HTTPbis doubled
RFC2616's page-count is a major fiasco in my eyes, and I do
not really consider the result an improvement over RFC2616 in any
significant way.

Yes, things have been clarified, and explained, but the fact
of the matter is that HTTP/1.1 sucks as protocol engineering, and
no amount of lipstick or explanations can change that.

We know HTTP/1.X is doing it wrong, and given compatibility
constraints, there's nothing we can do about it, so all the
time spent clarifying it is just wasted, IMO.

But HTTP/1.1 is also an incredibly entrenched protocol, and
that raises the bar significantly for HTTP/2.0.

If any protocol is going to be gold-plated as "HTTP/2.0", it had
damn well better be simpler to understand, simpler to implement,
have fewer weird corner cases AND be faster than HTTP/1.1 is.

Otherwise it will just be IPv4 vs. IPv6 all over again.

So let's see some lean and simple proposals which get the job done.

If we find one or more of them worthwhile, we'll go over it in the
WG, and by the time we're done waving our flags and airing our
rocking-horses, it will have exploded by a factor of three.

If we start out at 29 pages, that means we'll end up just shy of
100 pages, which is a bit over the top, but livable.

Poul-Henning

PS: Feel free to call me a grumpy old man.  Having been on the long
ride from ITU-T X.xxx recommendations for OSI protocols through
SNMPv1/v2/v3 and IPv6, not to mention various other standardisation
debacles, I feel I have earned my opinions.

PPS: And don't forget that IT-"journalists" have already started
wetting their pants about "The Future of the INTERNET: HTTP/2.0"
and similar rubbish.  Just wait until the WG misses its first
deadline:  "INTERNET FUTURE HANGS IN THE BALANCE" etc.

-- 
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe    
Never attribute to malice what can adequately be explained by incompetence.
Received on Saturday, 28 January 2012 00:38:09 GMT
