
Re: Rechartering HTTPbis

From: Willy Tarreau <w@1wt.eu>
Date: Thu, 26 Jan 2012 11:30:13 +0100
To: Poul-Henning Kamp <phk@phk.freebsd.dk>
Cc: Amos Jeffries <squid3@treenet.co.nz>, ietf-http-wg@w3.org
Message-ID: <20120126103013.GC8887@1wt.eu>
On Thu, Jan 26, 2012 at 09:48:06AM +0000, Poul-Henning Kamp wrote:
> In message <20120126093557.GA8887@1wt.eu>, Willy Tarreau writes:
> >On Thu, Jan 26, 2012 at 08:39:00AM +0000, Poul-Henning Kamp wrote:
> >I find it pretty cumbersome to force everyone to support zlib, especially
> >in environments where it provides no benefit (small requests/responses)
> Actually if you look at it, you can simulate ZLIB in null-mode trivially,
> so that is not really a valid concern.

That doesn't change the fact that recipients must decompress what they
receive in order to understand it whenever the sender uses compression
in a non-null mode.
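
To make the "null mode" point concrete, here is a sketch using zlib's
level-0 (stored-block) mode. The data names are illustrative, not from
any real traffic; the point is that even uncompressed deflate output is
not the original byte stream, so the recipient still pays for a
decompression pass:

```python
import zlib

data = b"GET /index.html HTTP/1.1\r\nHost: example.org\r\n\r\n"

# "Null mode": deflate at level 0 emits stored (uncompressed) blocks,
# so a sender can satisfy a mandatory-zlib rule without compressing.
co = zlib.compressobj(level=0)
wire = co.compress(data) + co.flush()

# The bytes on the wire are NOT the original bytes: they carry the
# zlib header and stored-block framing, so the recipient still has
# to run a decompressor just to read them.
assert wire != data
assert zlib.decompress(wire) == data
```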

> It's an idea, I'm not sure if it is a really good idea, but the current
> handling of compression is worse.
> Compressing cookies would save a lot of bytes, but those are content-metadata
> so maybe there are less draconian means.

Not everywhere. From what I observe at various places, what takes a lot
of space in requests is:
  - user agent: no need to compress it, just specify a new, non-abusive format
  - cookies: the largest ones are those carrying an encrypted context, so you
    won't compress them

And yes, there are also a number of small requests which will not benefit
from compression.
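
The cookie point is easy to check: deflate cannot shrink
high-entropy (encrypted) payloads, while redundant ASCII headers shrink
a lot. A sketch, with made-up stand-ins for the two header kinds:

```python
import os
import zlib

# Hypothetical cookie carrying an encrypted context: the payload is
# effectively random bytes, so deflate cannot find redundancy in it.
encrypted_cookie = os.urandom(512)

# A verbose but highly redundant User-Agent-style string, by contrast,
# compresses very well.
user_agent = b"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.7 " * 8

# Incompressible input actually grows slightly (framing overhead).
assert len(zlib.compress(encrypted_cookie)) >= len(encrypted_cookie)
# Redundant text shrinks to a fraction of its size.
assert len(zlib.compress(user_agent)) < len(user_agent) // 2
```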

> One benefit of compressing the entire connection is that it offers
> "privacy-light", the simple malware which just snoops packets and
> searches for "password:" etc, would be out of the picture.

I disagree; it will only make debugging harder. Malware is already much
smarter than humans when it comes to processing data, and if most malware
can see and manipulate HTTPS contents, it is precisely because it is
installed where the hard work has already been done (ie: in the browser).

> >Making trailers mandatory will cause a lot of pain to static servers
> >relying on sendfile() and equivalent mechanisms.
> Nobody forces them to send any trailers...
> I'm perfectly happy with announcing the intention to send trailers
> in the headers.  (Sort of like postscripts "atend")
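
For reference, HTTP/1.1 already defines exactly this announcement: the
Trailer header names fields that will follow the last chunk. A minimal
sketch of such a response (the helper name and body are illustrative):

```python
import base64
import hashlib

def chunked_with_trailer(body: bytes) -> bytes:
    """Announce a trailer up front, send it after the last chunk --
    the "atend" pattern described above."""
    head = (b"HTTP/1.1 200 OK\r\n"
            b"Transfer-Encoding: chunked\r\n"
            b"Trailer: Content-MD5\r\n"  # intention announced in headers
            b"\r\n")
    chunk = b"%x\r\n%s\r\n" % (len(body), body)
    # The trailer field itself only appears after the zero-size chunk.
    md5 = base64.b64encode(hashlib.md5(body).digest())
    trailer = b"0\r\nContent-MD5: %s\r\n\r\n" % md5
    return head + chunk + trailer

msg = chunked_with_trailer(b"hello")
assert b"Trailer: Content-MD5" in msg
```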


> >Really, I think that
> >current chunking already offers provisions for reporting issues, and
> >that trying to improve the minority of unrecoverable error situations
> >will cost a lot for many components in the normal case.
> No, chunking offers no way to report anything, all you can do is
> close the connection.

You can report information on chunk boundaries. In the middle of a
chunk, I agree you need to close the connection, but that is not specific
to chunking; it is general to any variable-length protocol.
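
A toy decoder (hypothetical, not any real implementation) makes the
difference visible: at a chunk boundary the zero-size last-chunk gives a
clean in-band end marker, whereas truncation mid-chunk is only
detectable as missing bytes when the connection closes:

```python
def parse_chunked(buf: bytes) -> bytes:
    """Minimal HTTP/1.1 chunked-body decoder (sketch, no trailers)."""
    out, pos = b"", 0
    while True:
        eol = buf.index(b"\r\n", pos)
        size = int(buf[pos:eol].split(b";")[0], 16)  # chunk-size[;ext]
        pos = eol + 2
        if size == 0:          # last-chunk: a clean, in-band end marker
            return out
        chunk = buf[pos:pos + size]
        if len(chunk) < size:  # mid-chunk truncation: nothing to report,
            raise ValueError("connection closed inside a chunk")
        out += chunk
        pos += size + 2        # skip chunk data and its trailing CRLF

assert parse_chunked(b"4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n") == b"Wikipedia"
try:
    parse_chunked(b"4\r\nWiki\r\n5\r\npe")  # cut in the middle of a chunk
except ValueError:
    pass
```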

> >There's no one-size-fits-all. Having small chunks (eg: 256 bytes) will
> >cause a huge overhead for very small exchanges (eg: auto-completion),
> >while at the same time significantly reduce on-wire efficiency and CPU
> >efficiency. Chunking works remarkably well for all sizes nowadays, let's
> >not reinvent something which works well.
> If you made the chunked header look like:
> 	\nXXXXXX\n
> Instead of the current
> 	\r\nX*\r\n

Ah OK, I thought you were talking about the size of each chunk. Yes,
a fixed length for the advertised chunk size would be much better. It
annoys me too to have to accept infinitely long series of leading zeroes.

> Then the difference in efficiency is utterly marginal for transmitters
> but very big for receivers.

100% agree, especially if we get rid of the \r !
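
The parsing difference can be sketched as follows, assuming PHK's
hypothetical fixed eight-byte header "\nXXXXXX\n" (the function names
are mine, for illustration): the fixed form is a bounded slice, while
the current form forces the receiver to scan for CRLF and to tolerate
arbitrarily many leading zeroes.

```python
def read_fixed_chunk_size(buf: bytes, pos: int) -> tuple:
    # Proposed fixed-width header: one bounded slice, no scanning.
    assert buf[pos:pos + 1] == b"\n"
    return int(buf[pos + 1:pos + 7], 16), pos + 8

def read_current_chunk_size(buf: bytes, pos: int) -> tuple:
    # Current format: scan for CRLF; "4", "0004", "000...0004" are all
    # legal encodings of the same size, so the receiver must bound the
    # scan itself to avoid unbounded header lengths.
    end = buf.index(b"\r\n", pos)
    return int(buf[pos:end], 16), end + 2

assert read_fixed_chunk_size(b"\n000004\nWiki", 0) == (4, 8)
assert read_current_chunk_size(b"0004\r\nWiki", 0) == (4, 6)
```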

Received on Thursday, 26 January 2012 10:31:07 UTC
