RE: Performance implications of Bundling and Minification on HTTP/1.1

Thanks! Actually, bundling, minification (and moving those links to the top of the HTML), together with compression, are fairly straightforward mechanisms with a relatively big bang for the buck. I don't claim that they solve all problems – they clearly don't – but for the simple case of downloading content to render a web page they can have a big impact.
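To make the mechanism concrete, here is a minimal sketch (the file names and the naive comment/whitespace stripping are made up for illustration, not the tooling behind our measurements) of bundling a few JS files into one response body, crudely minifying it, and gzipping the result:

    # Hypothetical sketch: bundle several JS files, strip comments and
    # whitespace naively, then gzip, to compare payload size and request count.
    import gzip
    import re

    def bundle(paths):
        # One concatenated payload means one request instead of len(paths) requests.
        return "\n".join(open(p, encoding="utf-8").read() for p in paths)

    def crude_minify(js):
        # Illustrative only: a real minifier parses the source; this just drops
        # block comments, whole-line // comments, and surrounding whitespace.
        js = re.sub(r"/\*.*?\*/", "", js, flags=re.S)
        js = re.sub(r"^\s*//.*$", "", js, flags=re.M)
        return "\n".join(line.strip() for line in js.splitlines() if line.strip())

    files = ["jquery.js", "site.js", "widgets.js"]   # hypothetical file names
    raw = bundle(files)
    mini = crude_minify(raw)
    packed = gzip.compress(mini.encode("utf-8"))
    print(f"{len(files)} requests -> 1 request")
    print(f"bundled: {len(raw)} bytes, minified: {len(mini)} bytes,"
          f" minified+gzip: {len(packed)} bytes")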

Saying that you can magically fix performance in the underlying protocol without taking the content into account is, I think, a hard argument to make. For example, if the links to CSS or JS are several kilobytes down in the HTML and are required to render the page, then it simply takes time to get to that data. It would be interesting to see exactly what the impact would be using interleaving or multiplexing so that we can compare data.
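As a rough way to quantify the "several kilobytes down in the HTML" point, a small sketch like the following (the URL is a placeholder) reports how many bytes of HTML a client must receive before it even sees the first stylesheet or script reference:

    # Sketch: byte offset of the first external CSS/JS reference in a page.
    # The URL is a placeholder and the regex is only an approximation of a parser.
    import re
    import urllib.request

    def first_reference_offset(url):
        html = urllib.request.urlopen(url).read()
        pattern = re.compile(rb'<link[^>]+rel=["\']stylesheet|<script[^>]+src=', re.I)
        match = pattern.search(html)
        return match.start() if match else None

    print("first CSS/JS reference at byte offset:",
          first_reference_offset("http://example.com/"))   # placeholder URL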

SPDY's trick to get around intercepting proxies is to use SSL, but that can be done with HTTP just as well. The real question to ask is why these intercepting proxies are there in the first place. My sense (although I don't claim to know all the nuances) is that when you are on a public site you simply can't get set up unless you use HTTP, and it can't be SSL. Public internet providers really want you to sign up and click some buttons, and the only way to do that today is to sniff the traffic. That is a real problem that I think needs solving, but it has little to do with the application protocol.

I don't have data for SSL but that is something we could do.

Henrik

From: Roberto Peon [mailto:grmocg@gmail.com]
Sent: Friday, June 22, 2012 1:09 PM
To: Henrik Frystyk Nielsen
Cc: William Chan (陈智昌); HTTP Working Group; Howard Dierking
Subject: Re: Performance implications of Bundling and Minification on HTTP/1.1

Nice data!
On Fri, Jun 22, 2012 at 12:05 PM, Henrik Frystyk Nielsen <henrikn@microsoft.com> wrote:
The main point is to establish a baseline for how well an optimized HTTP/1.1 implementation with optimized content can perform. Given such a baseline it is much easier to compare HTTP/2.0 ideas and evaluate the positives and/or negatives of alternative solutions. As such I am not making any normative claims per se but there indeed are two points that I think are important to observe:


1) It seems safe to assert that whatever we come up with in HTTP/2.0 should be significantly faster than what can be achieved with an optimized HTTP/1.1 implementation. I don't think speed is the only reason for considering HTTP/2.0, but it is an important one, and so we have to be able to compare numbers.


I'd argue another point.
The amount of work necessary to optimize site performance for HTTP/1.1 today is large. The amount of knowledge necessary to do it properly is also large.
This is not the way it should be!

The protocol should make it easier to do things right, and it should help in the (extremely frequent and likely) case that the site designer gets it wrong in little ways.



2) Without taking a broader view of performance that includes the content as an integral part, you simply cannot meaningfully expect to get a fast system. In other words, there is no way that an application protocol can compensate for badly composed content. For example, if you put your links to CSS and JS deep down in your HTML, or rely on 100 requests completing before a page can render, then no amount of protocol optimizations can effectively help you. Mechanisms such as bundling, minification, and compression can play a significant role here.
"...no amount of protocol optimizations can effectively help you."
Note that this isn't strictly true :)
Feeding back into the statement I made above, as an example: response reordering/interleaving, etc. can go pretty far in mitigating the damage done by suboptimal link ordering. It can't help in some pathological cases, true, but I'd guess that most sites don't present the pathological worst case...
A protocol which allows for bundling without requiring the site developer to deal with it increases the chance that a site's latency comes closer to optimal.
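As a toy illustration of that point (the response sizes and frame size below are made-up numbers, not measurements), compare when a small render-critical response completes if two responses are serialized versus interleaved in fixed-size frames on one connection:

    # Toy model: bytes on the wire at which each response completes when
    # responses are sent strictly in order vs. interleaved frame by frame.
    FRAME = 1024                                   # made-up frame size
    responses = [("big-image", 200 * 1024),        # started first, 200 KB
                 ("critical-css", 8 * 1024)]       # render-critical, 8 KB

    def completion_points(order, interleave):
        done, sent = {}, 0
        if not interleave:                         # strict FIFO: one after the other
            for name, size in order:
                sent += size
                done[name] = sent
            return done
        remaining = dict(order)
        while remaining:                           # round-robin, one frame at a time
            for name in [n for n, _ in order if n in remaining]:
                chunk = min(FRAME, remaining[name])
                remaining[name] -= chunk
                sent += chunk
                if remaining[name] == 0:
                    done[name] = sent
                    del remaining[name]
        return done

    print("serialized :", completion_points(responses, interleave=False))
    print("interleaved:", completion_points(responses, interleave=True))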


As to pipelining, the reality is that being able to send requests and responses without delay is a must in any environment with noticeable RTTs. Whether it happens using HTTP/1.1 pipelining or via some other mechanism can be discussed, but there is no way we can get better performance without it. I do understand that there are limitations in how well it is deployed, but I am dubious that just deploying something different will inherently solve that unless we know the root cause.
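For concreteness, this is roughly all that pipelining amounts to on the wire; a minimal sketch (placeholder host and paths, no error handling, and it assumes a path free of the intermediaries that break pipelined requests today):

    # Minimal HTTP/1.1 pipelining sketch: write several GETs back-to-back on
    # one connection, then read the responses, which come back in request order.
    import socket

    HOST = "example.com"                           # placeholder host
    PATHS = ["/", "/site.css", "/site.js"]         # placeholder resources

    def request(path, last=False):
        conn = "close" if last else "keep-alive"
        return (f"GET {path} HTTP/1.1\r\nHost: {HOST}\r\n"
                f"Connection: {conn}\r\n\r\n").encode("ascii")

    sock = socket.create_connection((HOST, 80))
    # Pipelining: every request goes out before any response is read back.
    sock.sendall(b"".join(request(p, last=(p == PATHS[-1])) for p in PATHS))

    data = b""
    while chunk := sock.recv(4096):                # read until the server closes
        data += chunk
    sock.close()
    print(f"received {len(data)} bytes for {len(PATHS)} pipelined requests")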

But don't we already understand the root cause? Isn't it intercepting proxies?
Basically, that a number of proxies considered parts of the spec optional, and took shortcuts, and are deployed in such a fashion as to intercept and disrupt communications between parties that would otherwise be able to communicate without problem?

Deploying over a channel that does not have these intercepting proxies solves this problem. We know that we don't have this problem to any degree that matters for SPDY, for instance.
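At the wire level, "deploying over a channel without these intercepting proxies" just means wrapping the same connection in TLS before talking HTTP; a minimal sketch (placeholder host again):

    # Same exchange, but tunneled through TLS so intermediaries only see ciphertext.
    import socket
    import ssl

    HOST = "example.com"                           # placeholder host
    ctx = ssl.create_default_context()
    sock = ctx.wrap_socket(socket.create_connection((HOST, 443)),
                           server_hostname=HOST)
    sock.sendall(f"GET / HTTP/1.1\r\nHost: {HOST}\r\n"
                 f"Connection: close\r\n\r\n".encode("ascii"))
    data = b""
    while chunk := sock.recv(4096):
        data += chunk
    sock.close()
    print(data.split(b"\r\n", 1)[0].decode())      # status line of the response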
Is it worth generating a separate table for doing the various optimizations over HTTPS?
-=R



Thanks,

Henrik

From: willchan@google.com [mailto:willchan@google.com] On Behalf Of William Chan (陈智昌)
Sent: Friday, June 22, 2012 11:30 AM
To: Henrik Frystyk Nielsen
Cc: HTTP Working Group; Howard Dierking
Subject: Re: Performance implications of Bundling and Minification on HTTP/1.1

Thanks for posting data here! Very much appreciated. I'm curious if you have any normative claims to make about how this should impact HTTP/2.0 proposals. I can see arguments for how some of these techniques rightly belong in the application layer, whereas others are working around issues in HTTP that we may want to address in HTTP/2.0. Oh, and I'm also curious about your thoughts with regard to pipelining, since you brought it up in this post and have noted that it has practical deployment issues.

On Fri, Jun 22, 2012 at 10:18 AM, Henrik Frystyk Nielsen <henrikn@microsoft.com> wrote:
We just published a blog post [1] analyzing the implications of content optimizations such as bundling and minification for the performance of web pages. The data shows that by applying bundling and minification along with compression and pipelining, it is possible to get significant gains both in the time it takes to get the content necessary to render a page and in the overall time it takes to download the data.

Not only does optimizing the content save bytes, it also reduces the number of requests and responses that need to be processed and leads to faster render times, because the HTML, CSS, and JS can be retrieved up front. In the test evaluated, the time went from 638 ms (uncompressed, unbundled, unminified, and not pipelined) down to 146 ms for the equivalent compressed, bundled, minified, and pipelined content. However, looking only at the data necessary to lay out the page (HTML, CSS, and JS but not images), the time went from 631 ms to 126 ms, with the images finishing within the remaining span from 126 to 146 ms.

The hope is that this data can contribute to a baseline for evaluating HTTP/2.0 proposals against how an efficient HTTP/1.x implementation can perform while leveraging optimizations throughout the stack to provide a better user experience.

Comments welcome!

Thanks,

Henrik

[1] http://blogs.msdn.com/b/henrikn/archive/2012/06/17/performance-implications-of-bundling-and-minification-on-http.aspx

Received on Friday, 22 June 2012 22:02:17 UTC