
Re: Performance implications of Bundling and Minification on HTTP/1.1

From: Willy Tarreau <w@1wt.eu>
Date: Sat, 23 Jun 2012 10:01:47 +0200
To: Mark Nottingham <mnot@mnot.net>
Cc: Mike Belshe <mike@belshe.com>, "Roy T. Fielding" <fielding@gbiv.com>, Henrik Frystyk Nielsen <henrikn@microsoft.com>, HTTP Working Group <ietf-http-wg@w3.org>, Howard Dierking <howard@microsoft.com>
Message-ID: <20120623080147.GC18996@1wt.eu>
On Sat, Jun 23, 2012 at 11:35:09AM +1000, Mark Nottingham wrote:
> > The point is that the #1 performance impact in this test was compression, which is optional in HTTP, and that optional features are not used as widely as they could be.
> It's actually very widely deployed among browsers.
> The issue isn't that it's optional, the issue is that because it's negotiated, some intermediaries disable the negotiation to make application of policy easier. 
> We can have a discussion about whether or not we want to prioritise performance over these use cases, but let's not kid ourselves that it's as simple as "optional = bad." Sometimes there are good reasons to make something optional.

Indeed! I'd prefer having my traffic pass through a non-compressing proxy
than through a proxy which has trouble compressing chunked-encoded data.
And I still regularly observe some of those lying around. It's not just a
matter of applying a policy more easily, it's also a matter of getting the
code to do the right thing, reliably.
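For what it's worth, the chunked case only requires a streaming decompressor, which is precisely what those broken proxies get wrong. Here is a minimal Python sketch (the payload and the 100-byte "chunk" size are mine, purely for illustration) of feeding a gzip body to zlib piece by piece, without ever buffering the whole message:

```python
import gzip
import zlib

# A proxy sees a gzip body arrive in arbitrary chunk-sized pieces.
# zlib's streaming decompressor handles that; wbits = 16 + MAX_WBITS
# selects the gzip container format.
original = b"some response body " * 1000
compressed = gzip.compress(original)

decomp = zlib.decompressobj(16 + zlib.MAX_WBITS)
output = b""
for i in range(0, len(compressed), 100):  # feed 100-byte "chunks"
    output += decomp.decompress(compressed[i:i + 100])
output += decomp.flush()

assert output == original
```

Nothing here depends on knowing the body length up front, which is the whole point of chunked encoding.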

The more features we make mandatory, the less reliable components will become.

Also, while I'm not fundamentally against having servers compress data by
default, I'm against having them do it *systematically* and against using
gzip *only*. Gzip is old now, and newer algorithms are much more powerful
at the expense of CPU power on the compressing side. Besides, caching
compressed objects sometimes allows a server to prepare contents that are
delivered much faster to capable consumers. That's why large files are
commonly delivered as tar.xz archives nowadays instead of tar.gz.
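The size difference is easy to observe. A rough Python comparison (the synthetic payload below is my own; real-world ratios will vary) on structured, redundant data larger than gzip's 32 KB window, where xz/lzma's larger dictionary can pay off:

```python
import gzip
import lzma

# Structured, redundant payload well past gzip's 32 KB window, so
# lzma's larger dictionary has room to shine.
data = "\n".join("record %d value %d" % (i, i * i)
                 for i in range(50000)).encode()

gz = gzip.compress(data, compresslevel=9)
xz = lzma.compress(data)

# xz typically beats gzip on this kind of input, at a CPU cost.
assert len(xz) < len(gz) < len(data)
```

The flip side, as noted above, is CPU: xz compression is far more expensive, which is exactly why pre-compressing and caching the result makes sense for static content.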

I think that the main reason for servers not always compressing contents
right now is that the underlying HTTP/1 protocol doesn't make this easy.
In my opinion, we should design HTTP/2 so that sending compressed contents
is no longer tricky; that way we'll encourage servers to deliver compressed
contents (whether they do it on the fly or select a pre-compressed
alternate file).
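The "pre-compressed alternate file" case can be sketched in a few lines. This is a hypothetical helper of mine, not anything a server does today: it parses the client's Accept-Encoding header (ignoring q-values for brevity) and looks for a sibling file on disk, trying "xz" first even though that is not a registered HTTP content-coding, just to illustrate negotiating a stronger algorithm:

```python
import os

# Preference order: stronger coding first. "xz" is hypothetical as an
# HTTP content-coding token; "gzip" is the real, registered one.
PRECOMPRESSED = {"xz": ".xz", "gzip": ".gz"}

def pick_variant(path, accept_encoding):
    """Return (file to serve, content coding) for a request."""
    offered = {tok.split(";")[0].strip()
               for tok in accept_encoding.split(",")}
    for enc, ext in PRECOMPRESSED.items():  # dicts keep order (3.7+)
        if enc in offered and os.path.exists(path + ext):
            return path + ext, enc
    return path, "identity"
```

With a `page.html.gz` sitting next to `page.html`, a request with `Accept-Encoding: gzip, deflate` gets the `.gz` variant tagged `gzip`, and a client offering nothing usable falls back to the identity file.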

Received on Saturday, 23 June 2012 08:02:22 UTC

This archive was generated by hypermail 2.3.1 : Tuesday, 1 March 2016 11:11:02 UTC