- From: Peter Kasting <pkasting@google.com>
- Date: Wed, 30 Jul 2008 01:27:47 -0700
On Tue, Jul 29, 2008 at 5:10 PM, Russell Leggett <russell.leggett@gmail.com> wrote:

>> That is a performance killer.
>
> I don't think it is as much of a performance killer as you say it is.
> Correct me if I'm wrong, but the standard connection limit is two.

The standard connection limit is 6, not 2, as of IE 8 and Fx 3. I would be very surprised if this came back down, or if it were not adopted by all the other browser makers over the next year or two.

Furthermore, the connection limit applies only to resources off one host. Sites have for years gotten around this by sharding across hosts (img1.foo.com, img2.foo.com, ...); a sketch of the markup is appended below.

There are many reasons resources can cause slowdown on the web, but I don't view this "archive" proposal as useful in solving them compared to existing tactics:

- Server sharding and higher connection limits solve the problem of artificially low connection limits.
- JS script references block further parsing in most browsers; the correct solution to this, as Ian said, seems like some variant of Safari's optimistic parser.
- Referencing large numbers of tiny images incurs excessive image header bytes plus TCP connection overhead, which can be reduced or eliminated with CSS spriting (also sketched below).

The only things archives get you, IMO, are difficulty with caching algorithms, annoyances rewriting URLs, potentially blocked parsing, and possibly inefficient use of network bandwidth due to reduced parallelization. Archives remove the flexibility of a network stack to optimize parallelization levels for the user's current connection type (not that I think today's browsers actually do such a thing, at least not well; but it is an area with potential gains).

PK
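Sharding sketch (illustrative only; img1.example.com and img2.example.com are assumed names standing in for any two hostnames that resolve to the same image server):

    <!-- Requests spread across two hostnames; the browser applies its
         per-host connection limit to each hostname separately, so more
         images can download in parallel. -->
    <img src="http://img1.example.com/photos/a.jpg" alt="photo a">
    <img src="http://img2.example.com/photos/b.jpg" alt="photo b">
    <img src="http://img1.example.com/photos/c.jpg" alt="photo c">
    <img src="http://img2.example.com/photos/d.jpg" alt="photo d">

Because the per-host limit is applied to each hostname independently, the browser fetches from both in parallel even though the requests land on the same machine.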
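CSS spriting sketch (illustrative only; assumes a hypothetical sprites.png containing two 16x16 icons stacked vertically):

    <style>
      /* One request fetches the whole sprite sheet; each icon is shown
         by clipping to a 16x16 box and offsetting the background. */
      .icon      { display: inline-block; width: 16px; height: 16px;
                   background-image: url(sprites.png); }
      .icon-home { background-position: 0 0; }     /* first icon */
      .icon-mail { background-position: 0 -16px; } /* second icon, 16px down */
    </style>
    <span class="icon icon-home"></span>
    <span class="icon icon-mail"></span>

Dozens of tiny image fetches collapse into a single request for the sprite sheet, eliminating the per-image header bytes and connection setup mentioned above.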