
RE: Performance implications of Bundling and Minification on HTTP/1.1

From: Henrik Frystyk Nielsen <henrikn@microsoft.com>
Date: Fri, 22 Jun 2012 21:55:25 +0000
To: Poul-Henning Kamp <phk@phk.freebsd.dk>
CC: William Chan (Dz) <willchan@chromium.org>, HTTP Working Group <ietf-http-wg@w3.org>, Howard Dierking <howard@microsoft.com>
Message-ID: <3605BA99C081B54EA9B65B3E33316AF7346D2896@CH1PRD0310MB392.namprd03.prod.outlook.com>
These are all great questions. This is exactly why we tried to get some numbers, so that we at least have something to agree or disagree over and can start to formulate goals for what "faster" means. It is also clear that a performance gain for one individual cannot come at the expense of the performance of other individuals -- we all have to play nice with a shared resource. One of the points was indeed that you *can* get higher performance with HTTP/1.1 while using significantly fewer resources in terms of the bytes and connections needed to render a page.

I am not sure what you are referring to with "default pipeline/parallelism windows of 6" -- I used two persistent TCP connections, which I think is on the low end of what most browsers do today.
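For concreteness, the two-connection setup can be sketched as a simple round-robin of page resources over a fixed pool of persistent connections. This is a hypothetical illustration of the idea, not the actual test harness; the paths are made up:

```python
# Hypothetical sketch: spreading a page's resources across a small pool
# of persistent (keep-alive) HTTP/1.1 connections, rather than opening
# one short-lived connection per resource. Paths are illustrative only.

def schedule(paths, n_conns=2):
    # Assign request i to connection i mod n_conns; each connection then
    # issues its requests back to back over the same reused socket.
    pools = [[] for _ in range(n_conns)]
    for i, path in enumerate(paths):
        pools[i % n_conns].append(path)
    return pools

PATHS = ["/", "/site.css", "/site.js", "/a.png", "/b.png", "/c.png"]
# With n_conns=2, each connection carries three requests in sequence;
# with n_conns=6, every resource gets its own parallel connection.
```

The point of the two-connection variant is that connection reuse, combined with bundling and minification, keeps both the byte count and the connection count low while still rendering the page quickly.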


-----Original Message-----
From: Poul-Henning Kamp [mailto:phk@phk.freebsd.dk] 
Sent: Friday, June 22, 2012 2:45 PM
To: Henrik Frystyk Nielsen
Cc: William Chan (Dz); HTTP Working Group; Howard Dierking
Subject: Re: Performance implications of Bundling and Minification on HTTP/1.1

In message <3605BA99C081B54EA9B65B3E33316AF7346C9CBD@CH1PRD0310MB392.namprd03.prod.outlook.com>, Henrik Frystyk Nielsen writes:

Henrik, thanks for some very interesting data.

> 1)      It seems safe to assert that whatever we come up with in
> HTTP/2.0 should be significantly faster than what can be achieved with 
> an optimized HTTP/1.1

I expect we can all agree on this; the fight will break out at 9pm, when we try to define the meaning of "faster", "significantly", etc. :-)

"faster" is not as unambiguous as most people think.

We have all seen CNN's exponential traffic plots from that day[1], and if anything we must expect that such "event-driven peaks" will be even more pronounced in the future.

In the generalized "#neilwebfail" scenario, "faster" for the individual client means "when does something appear on my screen?", whereas "faster" for the server means "how soon can I get something to appear on many/most screens?"

Addressing this in HTTP/2.0 with "tough luck, buy more bandwidth & servers" is not good enough.

Content-level design has a lot to do with this, as CNN showed with their static HTML.

If I understand your data right, what you have done is server-neutral, but HTTP/2.0 ideas like "default pipeline/parallelism windows of 6" will not be server-neutral, and are almost guaranteed to make any load problem at least six times worse for servers.
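The "six times worse" arithmetic can be made explicit with a back-of-the-envelope sketch. The client count here is hypothetical, not from the cited data:

```python
# Back-of-the-envelope sketch of the server-side concern: a default
# parallelism window multiplies the concurrent-connection load each
# client places on the origin. Numbers are illustrative only.

def concurrent_connections(clients, window):
    # Each client holds `window` connections open simultaneously.
    return clients * window

peak_clients = 100_000  # hypothetical event-driven peak
baseline = concurrent_connections(peak_clients, 1)  # one connection each
windowed = concurrent_connections(peak_clients, 6)  # default window of 6
# The same flash crowd now costs the server six times as many sockets,
# buffers, and accept-queue slots -- bandwidth aside.
```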

So, yes, I agree: HTTP/2.0 must be faster than HTTP/1.1, but we need to talk about "faster for whom?" and "faster how?"


[1] Deliberate "avoid using T-word" obfuscation.

Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe    
Never attribute to malice what can adequately be explained by incompetence.

Received on Friday, 22 June 2012 21:57:10 UTC
