RE: HTTP 2.0 and a Faster, more Mobile-friendly web

Sorry for the delay -- been traveling from Denmark to Seattle and will go to Vancouver tomorrow morning -- hope to see many of you there!

To be clear, I am not claiming that HTTP pipelining is the end-all-be-all -- it clearly has limitations -- but it does allow us to compare, on a more apples-to-apples basis, what actually improves performance, what doesn't, and what is in the noise. I think we need to know that in order to evaluate whether we are headed in the right direction. I hope we can build on the data presented here and get a better understanding of where we are.

I hope we get some time to discuss some of the issues such as cancellation etc. -- lots of interesting (and tricky) issues!

Henrik

-----Original Message-----
From: patrick mcmanus [mailto:pmcmanus@mozilla.com] 
Sent: Monday, July 30, 2012 08:37
To: ietf-http-wg@w3.org
Subject: Re: HTTP 2.0 and a Faster, more Mobile-friendly web

On 7/29/2012 11:32 PM, Henrik Frystyk Nielsen wrote:
> Martin,
>
> What scenarios do you have in mind where pipelining does not work? Obviously it only works for GET, so for web sites using lots of non-GET requests it won't work, but GET is the predominant method on most sites.

I'm a believer that we can get more benefit from pipelines than we currently do, and I remain committed to seeing that happen on today's http/1 web. But the challenges go beyond non-GET and mangling intermediaries(*). So, in a pro-pipeline but eyes-wide-open spirit:

1] You still need parallelism because of priority head-of-line blocking issues. But now some of the parallel flows are filling the pipe and buffers very well, and it's hard to manage the prioritization effectively. Using a priority mux instead of pipeline+parallelism makes that work a bit better (buffering in the network remains a problem - though with fewer connections the buffers should at least not be filled as deeply).
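
As a toy sketch of what I mean -- the sizes, priorities, and bandwidth below are invented for illustration, not measurements:

BANDWIDTH = 100_000  # bytes delivered per "tick" on one connection (assumed)

# (name, size_bytes, priority) -- lower priority number = more important
responses = [
    ("big-image.jpg", 500_000, 3),
    ("tracking.js",   200_000, 3),
    ("critical.css",   20_000, 1),  # requested last, needed first
]

def completion_times(order):
    """Finish time of each response when sent back-to-back in `order`."""
    t, finished = 0, {}
    for name, size, _prio in order:
        t += size / BANDWIDTH
        finished[name] = t
    return finished

fifo = completion_times(responses)                              # pipeline: FIFO
mux  = completion_times(sorted(responses, key=lambda r: r[2]))  # mux: by priority

print("critical.css via FIFO pipeline:", fifo["critical.css"], "ticks")
print("critical.css via priority mux: ", mux["critical.css"], "ticks")

With the FIFO pipeline the late, high-priority stylesheet finishes behind everything queued ahead of it; a priority mux can deliver it almost immediately.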

2] Even on GET, blocking responses are common and they screw up the pipeline. Database operations, web service callouts, etc. can all create variability in response times, and that creates time-to-utilization gaps.
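
Same flavor of toy sketch for the blocking case -- the think times and transfer times are invented:

# (name, server_think_time, transfer_time) for three pipelined GETs
responses = [
    ("report.html", 2.0, 0.1),  # slow database-backed page (assumed)
    ("style.css",   0.0, 0.1),  # ready immediately
    ("logo.png",    0.0, 0.1),  # ready immediately
]

t, idle = 0.0, 0.0
for name, think, xfer in responses:
    ready = think              # assume all requests arrive at t=0
    if ready > t:              # in-order delivery: wait for the blocker
        idle += ready - t
        t = ready
    t += xfer
    print(f"{name:12s} done at t={t:.1f}")

print(f"connection idle for {idle:.1f} time units waiting on the blocker")

The two small responses were ready at t=0 but sit behind the slow page, so the connection idles for the whole think time.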

3] HTTP/1 has no cancellation semantics other than closing the connection, and pipelines make this worse. This comes up all the time in the form of links clicked on partially rendered pages (think "next" on a webmail inbox). To process the click you need to decide between waiting for the old pipeline to drain, adding more to it, or making a new connection. This problem always existed without pipelines, but the amount of data in the queue has now gone up.
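
A minimal sketch of that choice in code (hypothetical host and paths, plain sockets, no error handling):

import socket

HOST = "webmail.example.com"   # hypothetical host, for illustration only

def open_pipeline(paths):
    """Open one connection and pipeline several GETs on it."""
    s = socket.create_connection((HOST, 80))
    for p in paths:
        s.sendall(f"GET {p} HTTP/1.1\r\nHost: {HOST}\r\n\r\n".encode())
    return s

# The page load pipelines a batch of inbox resources...
conn = open_pipeline(["/inbox?page=1", "/style.css", "/avatar/1.png"])

# ...then the user clicks "next". HTTP/1.1 has no per-request cancel,
# so the choices are: drain the old responses, append to the existing
# pipeline, or do this -- drop the whole queue and reconnect:
conn.close()
conn = open_pipeline(["/inbox?page=2"])

Closing throws away the responses already in flight plus whatever the server had computed, which is exactly the cost a real per-request cancel would avoid.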

[*] In my experience the most significant intermediary challenges these days come from AV and firewall software; actual network devices and servers fare OK, with a couple of notable exceptions. I don't know that this makes it any easier :)

Pipelines are useful things - but I think we can do better than just working around their warts.

>
> We took some pretty normal sites, so I do think they represent reasonable data. We'd be happy to expand the set of sites, but the whole point was to show differences on relatively common sites. It is correct that we didn't use cookies. Cookie and UA-header sizes can indeed vary a lot, but that is a much more isolated problem that can be solved in any number of ways.

Cookies and pipeline performance are pretty tightly tied. Pipeline depth is driven by client cwnd, and on a cold TCP session the achievable pipeline depth drops from about 10 to about 2 as cookie sizes go from 0 to 2KB. An HTTP/2 approach that removes header redundancy (whether by a manual table approach like network-friendly, or an automagic gzip approach like SPDY) is critical to addressing the fundamental round-trip problem.
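
For a rough back-of-the-envelope on those depth numbers -- the initial cwnd, MSS, and base header size here are my assumptions, not measured values:

INIT_CWND_SEGMENTS = 4   # typical client initial cwnd in 2012 (assumption)
MSS = 1460               # bytes of payload per TCP segment (assumption)
BASE_REQUEST = 550       # request line + UA + other headers, no cookie (assumption)

first_flight = INIT_CWND_SEGMENTS * MSS   # bytes sendable before any ACK returns

for cookie_bytes in (0, 500, 1000, 2000):
    per_request = BASE_REQUEST + cookie_bytes
    depth = first_flight // per_request
    print(f"cookie={cookie_bytes:4d}B  request={per_request:4d}B  "
          f"first-RTT pipeline depth ~{depth}")

With those assumptions the first-round-trip depth comes out right around 10 with no cookies and about 2 with 2KB of cookies.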

cheers,
-Patrick

Received on Tuesday, 31 July 2012 03:50:18 UTC