
Re: Performance implications of Bundling and Minification on HTTP/1.1

From: Roberto Peon <grmocg@gmail.com>
Date: Fri, 22 Jun 2012 13:08:45 -0700
Message-ID: <CAP+FsNfghvr8MVSw4MTo10gHhvV5ZiTwvnCCzPrCzX_g4Zkwng@mail.gmail.com>
To: Henrik Frystyk Nielsen <henrikn@microsoft.com>
Cc: William Chan (陈智昌) <willchan@chromium.org>, HTTP Working Group <ietf-http-wg@w3.org>, Howard Dierking <howard@microsoft.com>
Nice data!

On Fri, Jun 22, 2012 at 12:05 PM, Henrik Frystyk Nielsen <
henrikn@microsoft.com> wrote:

>  The main point is to establish a baseline for how well an optimized
> HTTP/1.1 implementation with optimized content can perform. Given such a
> baseline it is much easier to compare HTTP/2.0 ideas and evaluate the
> positives and/or negatives of alternative solutions. As such I am not
> making any normative claims per se but there indeed are two points that I
> think are important to observe:
>
> 1) It seems safe to assert that whatever we come up with in
> HTTP/2.0 should be significantly faster than what can be achieved with an
> optimized HTTP/1.1 implementation. I dont think speed is the only reason
> for considering HTTP/2.0 but it is an important one and so we have to be
> able to compare numbers.****
>
> **
>
I'd argue another point.
The amount of work necessary to optimize site performance for HTTP/1.1
today is large. The amount of knowledge necessary to do it properly is also
large.
This is not the way it should be!

The protocol should make it easier to do things right, and it should help
in the (extremely frequent and likely) case that the site designer gets it
wrong in little ways.



> 2) Without taking a broader view of performance that includes
> the content as an integral part you simply cannot meaningfully expect to
> get a fast system. In other words, there is no way that an application
> protocol can compensate for badly composed content. For example, if you put
> your links to CSS and JS deep down in your HTML or rely on 100 requests to
> be complete in order to render a page then no amount of protocol
> optimizations can effectively help you. Mechanisms such as bundling,
> minification, and compression can play a significant role here.
>
"...no amount of protocol optimizations can effectively help you."
Note that this isn't strictly true :)
Feeding back into the statement I made above, as an example: response
reordering/interleaving, etc. can go pretty far in mitigating the damage
done by suboptimal link ordering. It can't help in some pathological cases,
true, but I'd guess that most sites don't present the pathological worst
case...
A protocol which allows for bundling without requiring the site developer
to deal with it increases the chance that a site's latency comes closer
to optimal.
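To make the reordering/interleaving idea concrete, here is a toy sketch. The priority classes, frame granularity, and names are invented for illustration; this is not any deployed protocol's scheduler. The point is only that a server which multiplexes frames by priority can push layout-critical CSS/JS ahead of images even when the page requested them in a poor order:

```python
import heapq

# Hypothetical priority classes: lower number = served sooner.
# Layout-critical resources outrank images regardless of request order.
PRIORITY = {"html": 0, "css": 0, "js": 0, "image": 2}

def interleave(responses):
    """responses: list of (name, kind, [frames]) in request order.
    Returns the order in which frames actually hit the wire."""
    seq = 0
    heap = []
    for name, kind, frames in responses:
        heapq.heappush(heap, (PRIORITY[kind], seq, name, list(frames)))
        seq += 1
    wire = []
    while heap:
        prio, _, name, frames = heapq.heappop(heap)
        wire.append((name, frames.pop(0)))
        if frames:
            # Re-enqueue with a fresh sequence number so responses at the
            # same priority round-robin instead of draining one at a time.
            seq += 1
            heapq.heappush(heap, (prio, seq, name, frames))
    return wire
```

Even if the image was requested first, its frames are emitted after the CSS and JS, which is exactly the "mitigating the damage done by suboptimal link ordering" above.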


>
> As to pipelining, the reality is that being able to send requests and
> responses without delay is a must in any environment with noticeable RTTs.
> Whether it happens using HTTP/1.1 pipelining or via some other mechanism
> can be discussed but there is no way that we can get better performance
> without it. I do understand that there are limitations in how well it is
> deployed but I am dubious that just deploying something different
> inherently will solve that unless we know the root cause.
>
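For reference, the pipelining being discussed just means writing several requests on one connection before reading any response. A minimal illustration of the bytes involved (hypothetical host and paths):

```python
def pipelined_requests(host, paths):
    """Build one byte stream of back-to-back GET requests, the way an
    HTTP/1.1 pipelining client writes them before reading any response."""
    reqs = []
    for path in paths:
        reqs.append(
            "GET {} HTTP/1.1\r\n"
            "Host: {}\r\n"
            "Connection: keep-alive\r\n"
            "\r\n".format(path, host)
        )
    return "".join(reqs).encode("ascii")

# In a real client these bytes would be written to a single TCP connection
# (e.g. sock.sendall(...)) and the responses read back in the same order —
# which is also why one slow response head-of-line blocks the rest.
wire = pipelined_requests("example.com", ["/style.css", "/app.js", "/logo.png"])
```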

But don't we already understand the root cause? Isn't it intercepting
proxies?
Basically, a number of proxies considered parts of the spec optional, took
shortcuts, and are deployed in such a fashion as to intercept and disrupt
communications between parties that would otherwise be able to communicate
without problems?

Deploying over a channel that does not have these intercepting proxies
solves this problem. We know that we don't have this problem to any degree
that matters for SPDY, for instance.
Is it worth generating a separate table for doing the various optimizations
over HTTPS?
-=R



> Thanks,
>
> Henrik
>
> *From:* willchan@google.com [mailto:willchan@google.com] *On Behalf Of *William
> Chan (陈智昌)
> *Sent:* Friday, June 22, 2012 11:30 AM
> *To:* Henrik Frystyk Nielsen
> *Cc:* HTTP Working Group; Howard Dierking
> *Subject:* Re: Performance implications of Bundling and Minification on
> HTTP/1.1
>
> Thanks for posting data here! Very much appreciated. I'm curious if you
> have any normative claims to make about how this should impact HTTP/2.0
> proposals. I can see arguments for how some of these techniques rightly
> belong in the application layer, whereas some are working around issues in
> HTTP which we may want to address in HTTP/2.0. Oh, and I'm also curious
> about your thoughts with regard to pipelining, since you brought it up in
> this post and have noted that it has practical deployment issues.
>
> On Fri, Jun 22, 2012 at 10:18 AM, Henrik Frystyk Nielsen <
> henrikn@microsoft.com> wrote:
>
> We just published a blog [1] analyzing the performance implications of
> content optimizations such as bundling and minification on the performance
> of web pages. The data shows that by applying bundling and minification
> along with compression and pipelining it is possible to get significant
> gains in the time it takes to get the content necessary to render a page as
> well as the overall time it takes to download the data.
>
> Not only does optimizing the content save bytes, it also reduces the
> number of requests and responses that need to be processed and speeds up
> render times by making the HTML, CSS, and JS available up front. In the
> test evaluated, the speedup was from 638 ms (uncompressed, unbundled,
> unminified, and not pipelined) down to 146 ms for the equivalent
> compressed, bundled, minified, and pipelined content. Looking only at the
> data necessary to lay out the page (HTML, CSS, and JS but not images),
> the time went from 631 ms to 126 ms, with the images being finalized
> within the remaining timespan from 126 to 146 ms.
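[As a deliberately crude illustration of the three techniques being measured — bundling, minification, and compression — the sketch below concatenates several scripts into one response body, strips comments and collapses whitespace, and gzips the result. Real minifiers are far smarter and safer than this regex pass (which would mangle, e.g., `//` inside string literals); the function names are invented for illustration.]

```python
import gzip
import re

def minify_js(src):
    """Toy minifier: drop // line comments, collapse whitespace runs.
    A real minifier also renames identifiers and respects string literals."""
    src = re.sub(r"//[^\n]*", "", src)  # drop line comments
    src = re.sub(r"\s+", " ", src)      # collapse runs of whitespace
    return src.strip()

def bundle(files):
    """Concatenate several JS sources into one body (one request instead
    of many), minify each, then gzip the combined result."""
    bundled = ";".join(minify_js(s) for s in files)
    return gzip.compress(bundled.encode("utf-8"))
```

Fewer requests means fewer round trips; minification and gzip shrink the bytes on the wire; together they attack both of the costs the measurements above separate out.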
>
> It is the hope that this data can contribute to providing a baseline for
> evaluating HTTP/2.0 proposals compared to how an efficient HTTP/1.x
> implementation can perform while leveraging optimizations throughout the
> stack to provide a better user experience.
>
> Comments welcome!
>
> Thanks,
>
> Henrik
>
> [1]
> http://blogs.msdn.com/b/henrikn/archive/2012/06/17/performance-implications-of-bundling-and-minification-on-http.aspx
>
Received on Friday, 22 June 2012 20:09:17 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Friday, 22 June 2012 20:09:30 GMT