Re: HTTP 2.0 and a Faster, more Mobile-friendly web

Henrik,

Current HTTP pipelining works fine when it works, but there are a lot of
servers that don't support it. As soon as you have more than one
outstanding request on a connection you run into issues ranging from
ignoring subsequent requests (forcing a reissue of the queued requests),
to closing the connection as soon as a second request is received
(truncating the first response), to scrambling the response (i.e. multiple
responses written to the same socket at the same time). Because of these
real-world complexities and the amount of heuristics needed to work around
them, browser support has been low.
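
As a rough illustration of what a pipelined exchange looks like on the
wire (the host, paths and Python sketch below are placeholders, not taken
from any measurement):

    # Rough sketch of HTTP/1.1 pipelining: both requests are written
    # back-to-back before any response has been read.
    import socket

    sock = socket.create_connection(("example.com", 80), timeout=10)
    sock.sendall(
        b"GET /a.css HTTP/1.1\r\nHost: example.com\r\n\r\n"
        b"GET /b.js HTTP/1.1\r\nHost: example.com\r\n"
        b"Connection: close\r\n\r\n"
    )
    # A compliant server answers both requests, in order, on this one
    # connection. A broken one may ignore the second request, close the
    # connection before the first response is complete, or interleave
    # bytes from both responses.
    data = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        data += chunk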

All three HTTP/2 proposals address this by explicitly mandating
pipelining, and improve on it with out-of-order responses so that one
slow resource does not stall the rest. Compared with HTTP/1.1 as deployed
today it would likely make little difference, since resources with similar
response times are already grouped onto different hosts.

The biggest increase in header size is, to the best of my knowledge,
ironically on the mobile side, where the use of UA-prof-diff headers has
added several kilobytes of extra data per request on some devices. This is
again addressed in all the proposals: deflate for SPDY and a bucket-based
delta compression for Network-Friendly. We use a straight delta
compression (only send the headers that changed from the previous request)
in all of Opera's own HTTP replacements. Compared against HTTP/1.1 this
should give a good improvement in radio time allocation in a
low-bandwidth, high-latency environment.
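
A minimal sketch of that delta idea, assuming both ends remember the
header set from the previous request (this only illustrates the concept,
it is not any proposal's actual wire format):

    def delta_headers(previous, current):
        # Return only the headers that differ from the previous request;
        # unchanged headers are implied by the shared state.
        return {
            name: value
            for name, value in current.items()
            if previous.get(name) != value
        }

    prev = {"user-agent": "Opera/12.00", "host": "example.com",
            "accept": "text/html"}
    curr = {"user-agent": "Opera/12.00", "host": "example.com",
            "accept": "image/png"}

    print(delta_headers(prev, curr))   # only {'accept': 'image/png'} is sent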

Video, specifically rate-limited video, adds the requirement of
multiplexing the data streams, as an HTTP/1.1 pipeline would stall as soon
as the video starts being transferred. All three proposals should give a
much better user experience than HTTP/1.1 when a page consists of both
very large and very small resources (e.g. a large background and small
inline images). In terms of raw numbers, multiplexing increases the amount
of data, but only insignificantly.
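
As a toy illustration of why multiplexing helps here (the frame format
and Python below are invented for the example, not taken from any of the
proposals):

    def frames(stream_id, payload, chunk=16 * 1024):
        # Cut one response into frames tagged with its stream id.
        for i in range(0, len(payload), chunk):
            yield (stream_id, payload[i:i + chunk])

    def interleave(*streams):
        # Round-robin frames from several streams onto one connection,
        # so a small resource is never stuck behind a large one.
        iters = [iter(s) for s in streams]
        while iters:
            for it in list(iters):
                try:
                    yield next(it)
                except StopIteration:
                    iters.remove(it)

    video = frames(1, b"\x00" * (1 << 20))    # large, rate-limited resource
    icon = frames(3, b"small inline image")   # small resource
    for stream_id, frame in interleave(video, icon):
        pass   # the small image completes long before the video does

With a plain HTTP/1.1 pipeline the icon could not be delivered until the
whole video response had been written out.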

As for minification, it only adds confusion. You can take a site, minify
the resources and pack it all together into a 7zip archive, and no one
will be surprised that downloading that is faster than loading the
unprocessed site. But it tells you nothing about the protocols used to
perform the downloads.

/Martin Nilsson

On Mon, 30 Jul 2012 08:32:07 +0200, Henrik Frystyk Nielsen  
<henrikn@microsoft.com> wrote:

> Martin,
>
> What scenarios do you have in mind where pipelining does not work?
> Obviously it only works for GET, so for web sites using lots of non-GET
> requests it won't work, but GET is the predominant method on most sites.
>
> We took some pretty normal sites so I do think they represent reasonable
> data. We'd be happy to expand the sites, but the whole point was to show
> differences in relatively common sites. It is correct that we didn't use
> cookies. Cookie and UA-header sizes can indeed vary a lot, but that is a
> much more isolated problem that can be solved in any number of ways.
>
> As for video (assuming you use TCP) the bottleneck is likely going to be  
> set by TCP throughput. The overhead of HTTP headers will be negligible  
> due to the size of the payload. We'd be happy to do some tests but all  
> things being equal I can say based on experience that adding a  
> credit-based session layer such as SPDY will not perform as well as  
> running straight over TCP for large payloads. The reason is that the  
> credit scheme by its very purpose acts as a throttle so that one session  
> doesn't take the entire bandwidth.
>
> As for minification, the intention is not to do those unbeknownst to the  
> website designer -- it is something that is an integral part of the  
> design of a web site so there is no reason why this isn't a valid  
> optimization.
>
> Thanks,
>
> Henrik
>


-- 
Using Opera's revolutionary email client: http://www.opera.com/mail/
