Re: Performance implications of Bundling and Minification on HTTP/1.1

In message <op.wgbxldx3iw9drz@manganese.bredbandsbolaget.se>, "Martin Nilsson" 
writes:

>Also, some HTTP requests are rewritten by proxies and anti-virus  
>applications to disable compression, so compression will be used even less.

... and they have a good reason to disable gzip:  These devices sit at the
"choke-points" in the network and see some of the highest, if not the
highest, HTTP-traffic densities of any device in the HTTP domain.
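
To make that rewrite concrete, here is a minimal sketch (hypothetical
C, not lifted from any shipping middlebox) of what such a box does:
splice the Accept-Encoding header out of the raw request before
forwarding it, so the origin never offers gzip:

    #include <stdio.h>
    #include <string.h>

    /*
     * Hypothetical sketch: delete Accept-Encoding from a raw request
     * buffer before forwarding it upstream.
     */
    static void
    strip_accept_encoding(char *req)
    {
        char *p, *eol;

        p = strstr(req, "\r\nAccept-Encoding:");
        if (p == NULL)
            return;
        eol = strstr(p + 2, "\r\n");
        if (eol == NULL)
            return;
        memmove(p, eol, strlen(eol) + 1);   /* splice it out, incl. NUL */
    }

    int
    main(void)
    {
        char req[] =
            "GET / HTTP/1.1\r\n"
            "Host: example.com\r\n"
            "Accept-Encoding: gzip, deflate\r\n"
            "\r\n";

        strip_accept_encoding(req);
        fputs(req, stdout);         /* server now sees no gzip offer */
        return (0);
    }

(A real box would match header names case-insensitively and operate
on a parsed request, not a raw buffer.)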

There are two subcases, and they are quite different:

"Incoming"
----------

Typically a load-balancer which only needs to inspect the "Host:"
header and/or the URI in the request, and the status code of the
response.

These are the devices I call "HTTP routers", and they are where
all the traffic bottlenecks when the entire world tries to find
out what happened in Dallas.

HTTP/2.0 should serialize (at least) these crucial fields without
gzip and preferably in a way that makes it very easy and cheap to
find them.
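
To illustrate why uncompressed fields are so cheap for such a box,
here is a sketch (hypothetical C, the names are mine) of an HTTP
router picking out the Host: field with a single linear scan and no
per-connection decompression state.  With gzip'ed headers it would
need a full inflate context per connection before it could even see
these bytes:

    #include <stdio.h>
    #include <string.h>

    /*
     * Hypothetical sketch: locate the Host: field value with one
     * linear scan.  Returns a pointer into the buffer, or NULL.
     */
    static const char *
    find_host(const char *req, size_t *lenp)
    {
        const char *p, *e;

        p = strstr(req, "\r\nHost:");
        if (p == NULL)
            return (NULL);
        p += 7;                     /* skip "\r\nHost:" */
        while (*p == ' ')
            p++;
        e = strstr(p, "\r\n");
        if (e == NULL)
            return (NULL);
        *lenp = (size_t)(e - p);
        return (p);
    }

    int
    main(void)
    {
        const char req[] =
            "GET /x HTTP/1.1\r\nHost: example.com\r\n\r\n";
        const char *h;
        size_t len;

        h = find_host(req, &len);
        if (h != NULL)
            printf("route on: %.*s\n", (int)len, h);
        return (0);
    }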


"Outgoing"
----------

Almost always content-scanning, and since there are legitimate
use cases (prison inmates, for instance) we have to accept this
role as legitimate[1].

There is a legitimate argument that censors should pay the cost
of censorship.  If we accept that, these boxes should not be
able to force clients/servers to forgo compression.
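
What "paying the cost" could look like in practice: the box leaves
Accept-Encoding alone and inflates the body itself before scanning
it.  A sketch assuming zlib, with error handling trimmed to the bone
(the function names are mine; compile with -lz):

    #include <stdio.h>
    #include <string.h>
    #include <zlib.h>

    /*
     * Hypothetical sketch: inflate a compressed body and scan the
     * clear text, instead of forcing identity encoding end-to-end.
     */
    static int
    scan_compressed(const unsigned char *body, size_t len)
    {
        z_stream zs;
        unsigned char buf[4096];
        int ret;

        memset(&zs, 0, sizeof zs);
        if (inflateInit2(&zs, 15 + 32) != Z_OK)  /* gzip or zlib */
            return (-1);
        zs.next_in = (Bytef *)body;
        zs.avail_in = (uInt)len;
        do {
            zs.next_out = buf;
            zs.avail_out = sizeof buf;
            ret = inflate(&zs, Z_NO_FLUSH);
            if (ret != Z_OK && ret != Z_STREAM_END)
                break;
            /* ... scan buf[0 .. sizeof buf - zs.avail_out) here ... */
        } while (ret != Z_STREAM_END);
        inflateEnd(&zs);
        return (ret == Z_STREAM_END ? 0 : -1);
    }

    int
    main(void)
    {
        const char clear[] = "text the censor wants to look at";
        unsigned char comp[256];
        uLongf clen = sizeof comp;

        if (compress2(comp, &clen, (const Bytef *)clear,
            sizeof clear, 6) != Z_OK)
            return (1);
        return (scan_compressed(comp, clen) == 0 ? 0 : 1);
    }

The point is that the decompression cycles land on the censoring
box, not on every client and server it sits in front of.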


-- 
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe    
Never attribute to malice what can adequately be explained by incompetence.
