
Re: [Fwd: I-D ACTION:draft-dusseault-http-patch-09.txt]

From: Travis Snoozy <ai2097@users.sourceforge.net>
Date: Fri, 7 Sep 2007 12:19:09 -0700
To: Jamie Lokier <jamie@shareable.org>
Cc: Yaron Goland <yarong@microsoft.com>, Henrik Nordstrom <henrik@henriknordstrom.net>, James M Snell <jasnell@gmail.com>, HTTP Working Group <ietf-http-wg@w3.org>
Message-ID: <20070907121909.624d4373@localhost>

On Fri, 7 Sep 2007 19:35:48 +0100, Jamie Lokier <jamie@shareable.org>
wrote:
<snip>
> > whether it implements the semantics of the new conditions, and if it
> > supports them correctly. It's entirely possible for a server with
> > crummy pipelining to implement new If-* header support either with
> > buginess matching their pipelining implementation, or (depending on
> > the If-*) correctly for non-pipelined requests, but without changing
> > the pipelining code to do the "right thing" in chained scenarios
> > (out-of-order responses, anyone?).
> 
> Out of order responses are an improvement, but they are just another
> level of hack.  
<snip>

I prefer to call OOO responses bugs. The implication was not supposed to
be "hey, I chained these, so you can build a dep-tree and process OOO,"
it was supposed to be "Hey, I chained these, and the stupid server went
and processed them in the wrong order, totally ignoring the deps, and
it screwed everything up" -- like servers in the wild -already- do.
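To make the failure mode concrete, here's a toy model (not a real HTTP implementation; the operation names and store are hypothetical) of a pipelined chain where a PATCH depends on the PUT before it. A server that processes the chain in order succeeds; one that reorders the requests violates the dependency:

```python
def apply_in_order(ops, store):
    """Toy server: applies pipelined operations in the order given.
    A PATCH depends on the resource already existing -- the 'dep'."""
    for op, key, value in ops:
        if op == "PUT":
            store[key] = value
        elif op == "PATCH":
            if key not in store:
                # The dependency was ignored: PATCH arrived at a
                # resource its predecessor hadn't created yet.
                raise KeyError("PATCH before PUT: " + key)
            store[key] += value
    return store

pipeline = [("PUT", "/doc", "v1"), ("PATCH", "/doc", "+delta")]

# Well-behaved server: honors request order, chain works.
assert apply_in_order(pipeline, {})["/doc"] == "v1+delta"

# Buggy server: processes the pipelined requests out of order,
# "totally ignoring the deps," and the chain blows up.
try:
    apply_in_order(list(reversed(pipeline)), {})
    reordered_ok = True
except KeyError:
    reordered_ok = False
assert not reordered_ok
```

The point of the sketch: nothing in the wire format stops a sloppy server from doing the reversed run, which is exactly the in-the-wild bug being described.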

> I agree totally with your point, that _merely_ adding new headers
> isn't enough to make reliable pipelining of any kind from general
> clients to existing servers in the wild.

Indeed --

> That's no reason to assume it's impossible to do.  Only that it would
> have to be more complicated than simply adding a header, if it's
> possible at all.
<snip>

Part of the complete, broader solution would have to involve fixing the
whole RFC so it's not only readable, but actually implementable (it's
neither now). I argue that a certification test suite (at bare minimum)
and a free-to-steal reference implementation are critical as well, to
make any such rewrite successful. There's absolutely no good QA process
for HTTP clients/servers right now, and the "robustness principle" has
given license for all kinds of trash implementations.


> If we have the high level interest, we can look at ways to implement
> them on the wire and try to find something which is backward
> compatible, as a real world version even if it has to be ugly, and an
> ideal version, perhaps to be deployed in 2022 by phasing out the
> hacks.

Variations and backward-compatible hacks are part of what's turned HTTP
into the crufty monster that it is today. I really don't like the
notion of layering more hacks upon cruft upon incompatibilities upon
heuristics. Yeah, we can make it fit if we bring out a hacksaw and
make the round hole square -- but adding more complexity just means
there are more things to misunderstand, mis-implement, and otherwise go
wrong.


-- 
Travis
Received on Friday, 7 September 2007 19:19:18 GMT
