Re: Review: http://www.ietf.org/id/draft-mbelshe-httpbis-spdy-00.txt

On Wed, Feb 29, 2012 at 11:18 PM, Willy Tarreau <w@1wt.eu> wrote:
> On Wed, Feb 29, 2012 at 03:16:22PM -0800, Mike Belshe wrote:

>> The problem with upgrade is that it costs a round trip of latency.
>
> Not for the first request since the server responds to this request.
> And since in HTTP you need the first request anyway to fetch the page
> to discover the objects you'll have to request next, it's not an issue
> for the first request of the keep-alive connection.
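(For context, the in-band Upgrade exchange being discussed looks roughly like the sketch below; the token name "HTTP/2.0" is illustrative only, since the actual token hadn't been settled at this point. The server answers the first request in the new protocol, so that request pays no extra round trip:)

```
GET /index.html HTTP/1.1
Host: example.com
Connection: Upgrade
Upgrade: HTTP/2.0

HTTP/1.1 101 Switching Protocols
Connection: Upgrade
Upgrade: HTTP/2.0

[server continues in the upgraded protocol and answers the GET]
```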

I think you're making the assumption that the page and the objects it
requires are served from the same FQDN, right?  In my experience, it's
common for a page served from example.com to reference static objects
served from example.org.  Thinking about the various reasons people
serve resources from different hostnames today:

- Domain sharding: not needed if HTTP/2.0 allows parallel requests on
one TCP connection
- Cookie-free domains: not needed if HTTP/2.0 provides header compression
- Resources served by third parties: still needed under HTTP/2.0
- Resources served by a CDN: still needed under HTTP/2.0 (even though
parallelization of requests will reduce the number of round trips
needed to fetch n resources, there's still value in reducing the
length of each round trip; and people will continue to use CDNs for
scalable/elastic traffic handling)

Thus I think it's essential for HTTP/2.0 to handle the following use
case efficiently:
- The client has a list of n resources it knows it needs
- Those n resources are all available under the same scheme:host:port
- The resources are independent of each other and can be fetched in any order
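To make the use case concrete, here's a minimal sketch of the client side. fetch() and fetch_all() are hypothetical stand-ins, not any real API; a thread pool simulates what a multiplexed HTTP/2.0 client would do as parallel streams on a single TCP connection, so the example is self-contained:

```python
# Hypothetical sketch: n known, independent resources on the same
# scheme:host:port, fetched in any order.
from concurrent.futures import ThreadPoolExecutor

def fetch(path):
    # Placeholder for a real request.  A multiplexed client would issue
    # all of these concurrently over one connection rather than one
    # connection (or one thread) per request.
    return (path, "body-of-" + path)

def fetch_all(paths):
    # Completion order doesn't matter; collect results keyed by path.
    with ThreadPoolExecutor(max_workers=len(paths)) as pool:
        return dict(pool.map(fetch, paths))

resources = ["/style.css", "/app.js", "/logo.png"]
results = fetch_all(resources)
```

The point is that nothing in the client's logic depends on response ordering, which is exactly what lets the protocol interleave the n responses freely.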

Brian

Received on Thursday, 1 March 2012 16:40:12 UTC