
RE: Support for compression in XHR?

From: Dominique Hazael-Massieux <dom@w3.org>
Date: Tue, 07 Oct 2008 15:58:42 +0200
To: "Sullivan, Bryan" <BS3131@att.com>
Cc: David Storey <dstorey@opera.com>, public-bpwg <public-bpwg@w3.org>
Message-Id: <1223387922.6712.124.camel@localhost>

On Monday, 22 September 2008 at 22:32 -0700, Sullivan, Bryan wrote:
> One more point on this thread. We in most cases do see an advantage in
> compressing HTML and XHTML web pages using GZIP/deflate in our network
> proxies, and since the compression is done on a per HTTP packet basis,
> the browser does not have to wait to get the whole page before
> uncompressing (the browser has to uncompress each packet individually
> anyway, since they are compressed as discrete transfer units).
> Only if the web server compressed the content itself, as a whole
> document, and then sent it over multiple HTTP CONTINUATION packets,
> would the browser need to get the whole page before uncompressing.
> But that is not normal behavior of web servers that we see in our
> network.

I'm still unclear whether you're saying that most Web servers (e.g.
Apache) send compressed pages in packets small enough that they won't
prevent progressive rendering, or whether that's "only" a feature of
the specific network proxy deployed on your network.
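
For illustration, here is a minimal sketch (in Python, using zlib; the
function and names are mine, not anything taken from an actual browser)
of the incremental decompression that makes progressive rendering
possible: each compressed chunk is decompressed as soon as it arrives,
so content can be handed to the parser before the whole document has
been downloaded.

import zlib

def stream_decompress(chunks):
    # Decompress a gzip/deflate response body incrementally.
    # wbits=47 lets zlib auto-detect a gzip or zlib header.
    decompressor = zlib.decompressobj(wbits=47)
    for chunk in chunks:
        data = decompressor.decompress(chunk)
        if data:
            yield data          # hand partial markup to the parser
    yield decompressor.flush()  # whatever is still buffered

# e.g.: for fragment in stream_decompress(network_chunks):
#           parser.feed(fragment)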

Looking at Apache (as an example), it seems the default size of the
fragment compressed by mod_deflate (its DeflateBufferSize directive)
is 8KB, which is probably sensible for enabling progressive rendering,
but probably doesn't take into account the MTU of a packet on a mobile
network.
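
To make the effect of that buffer size concrete, here is a minimal
sketch (again in Python with zlib, only as a stand-in for what
mod_deflate does; the function and the 8KB default are illustrative) of
compressing a document in fixed-size fragments and flushing after each
one, so that the client can decode and render everything sent so far:

import zlib

def compress_in_fragments(html_bytes, buffer_size=8 * 1024):
    # Compress in fixed-size fragments, flushing after each one.
    # Z_SYNC_FLUSH emits a complete deflate block, so the client can
    # decompress everything received up to that point.
    compressor = zlib.compressobj()
    for start in range(0, len(html_bytes), buffer_size):
        fragment = html_bytes[start:start + buffer_size]
        yield compressor.compress(fragment) + compressor.flush(zlib.Z_SYNC_FLUSH)
    yield compressor.flush()  # Z_FINISH: terminate the stream

A mobile-oriented setup might pick a buffer closer to the link MTU (say
~1400 bytes) instead of 8KB, at some cost in compression ratio; that is
exactly the kind of trade-off we would need measurements for.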

I wonder again whether this specific value is something we could give
advice on; but then, I guess we can only do so if we have some way to
take measurements and make estimates on this whole question...

Received on Tuesday, 7 October 2008 14:00:05 UTC
