Re: Performance implications of Bundling and Minification on HTTP/1.1

In message <3605BA99C081B54EA9B65B3E33316AF7346C9CBD@CH1PRD0310MB392.namprd03.prod.outlook.com>, Henrik Frystyk Nielsen writes:

Henrik, thanks for some very interesting data.

> 1)      It seems safe to assert that whatever we come up with in
> HTTP/2.0 should be significantly faster than what can be achieved
> with an optimized HTTP/1.1

I expect we can all agree on this; the fight will break out at 9pm,
when we try to define the meaning of "faster", "significantly", etc. :-)

"faster" is not as unambiguous as most people think.

We have all seen the CNN exponential traffic plots from that day[1],
and if anything we must expect that such "event-driven peaks" will
be even more pronounced in the future.

In the generalized "#neilwebfail" scenario, "faster" for the
individual client means "when does something appear on my screen?"
whereas "faster" for the server means "how soon can I get something
to appear on many/most screens?"

Addressing this in HTTP/2.0 with "Tough luck, buy more bandwidth &
servers", is not good enough.

Content-level design has a lot to do with this, as CNN showed with
their static HTML.

If I understand your data right, what you have done is server-neutral,
but HTTP/2.0 ideas like "default pipeline/parallelism windows of 6"
will not be server-neutral, and are almost guaranteed to make any
load problem at least six times worse for servers.
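As a back-of-the-envelope sketch of that "six times worse" claim (the function name and client count below are my own illustration, not numbers from Henrik's data): if every client is granted a default window of W concurrent requests instead of issuing them one at a time, the worst-case number of requests the server must hold in flight at any instant scales linearly with W.

```python
# Toy worst-case load model: each client may keep `window` requests
# in flight simultaneously, so the server's peak concurrent request
# count is simply clients * window.

def concurrent_requests(clients: int, window: int) -> int:
    """Worst-case number of requests in flight at the server."""
    return clients * window

# Hypothetical event-driven peak of 100,000 simultaneous clients.
serial   = concurrent_requests(100_000, 1)  # one request at a time
windowed = concurrent_requests(100_000, 6)  # default window of 6

print(windowed // serial)  # prints 6: the multiplier on server load
```

The point being that a protocol default chosen to make the individual client faster directly multiplies the server's concurrency burden during exactly the peaks where servers are already hurting.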

So, yes, I agree: HTTP/2.0 must be faster than HTTP/1.1, but we need
to talk about "faster for whom?" and "faster how?"

Poul-Henning

[1] Deliberate "avoid using T-word" obfuscation.

-- 
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe    
Never attribute to malice what can adequately be explained by incompetence.

Received on Friday, 22 June 2012 21:45:49 UTC