
[whatwg] Application deployment

From: Russell Leggett <russell.leggett@gmail.com>
Date: Wed, 30 Jul 2008 10:25:25 -0400
Message-ID: <680cacd10807300725g50fe1a4fy7514ea95e3d3b582@mail.gmail.com>
>
> The only thing archives get you IMO is difficulty with caching algorithms,
> annoyances rewriting URLs, potentially blocked parsing, and possibly
> inefficient use of network bandwidth due to reduced parallelization.
>

I don't see any reason that parsing would need to be blocked any more than
it already is. No rewriting of URLs would be necessary at all, and I have
already provided suggestions for simple solutions that would prevent
unnecessary blocking.

> Server sharding and higher connection limits solve the problem of
> artificially low connection limits.  JS script references block further
> parsing in most browsers; the correct solution to this, as Ian said, seems
> like some variant of Safari's optimistic parser.  Referencing large numbers
> of tiny images causes excessive image header bytes + TCP connection overhead
> that can be reduced or eliminated with CSS spriting.


Server sharding and CSS sprites are both artificial solutions used to work
around limitations of the existing deployment model. If you are worried about
fragility, look no further than CSS sprites. They have to be background
images, and they require precise measurement of size and location. This
creates extremely tight coupling between the CSS code and the image file
itself, to say nothing of maintaining the sprite images themselves. Clearly
we are already dealing with the problems of resource loading and how to make
it efficient. Our existing solutions are widely varied and complex, but all
of them result in changes to our HTML/CSS/JS code that would not be there if
we did not have that limitation.
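To make that coupling concrete, here is a minimal sketch of a typical sprite
rule; the file name, icon size, and pixel offsets are all hypothetical:

```html
<!-- sprites.png, the 16px icon size, and each offset below are
     hypothetical; every value must match the image layout exactly,
     so changing the sprite image means re-measuring all of them. -->
<style>
  .icon        { width: 16px; height: 16px;
                 background-image: url("sprites.png"); }
  .icon-home   { background-position: 0 0; }      /* first icon */
  .icon-search { background-position: -16px 0; }  /* second icon */
  .icon-mail   { background-position: -32px 0; }  /* third icon */
</style>
```

Adding or reordering one icon in the image invalidates every offset after
it, which is exactly the fragility described above.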

It seems to me that many of the additions to the HTML spec are there because
they provide a standard way to do something we are already doing with a hack
or by more complicated means. CSS sprites are clearly a hack. Concatenating
JS files is clearly a hack. Serving from multiple sub-domains to beat the
connection limit is also a workaround. My proposal is intended to approach
the deployment issue directly, because I think it is a limitation in the
HTML spec itself, and therefore I think the HTML spec should provide its own
solution. My proposal may not be the best way, but assuming the issue will
eventually be dealt with by some other party through some other means does
not seem right either.
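For comparison, the sub-domain workaround mentioned above usually looks
something like this in markup; the shard host names follow the
img1.foo.com/img2.foo.com pattern cited later in this thread and are
otherwise hypothetical:

```html
<!-- img1.foo.com and img2.foo.com are hypothetical shard hosts that
     serve the same content; spreading references across them sidesteps
     the browser's per-host connection limit, at the cost of baking the
     sharding scheme into the markup itself. -->
<img src="http://img1.foo.com/photos/a.jpg" alt="">
<img src="http://img2.foo.com/photos/b.jpg" alt="">
<script src="http://js1.foo.com/app.js"></script>
```

Note that the deployment concern (how many hosts to spread load across)
leaks directly into the HTML, which is the kind of coupling the proposal
is meant to avoid.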

-Russ


On Wed, Jul 30, 2008 at 4:27 AM, Peter Kasting <pkasting at google.com> wrote:

> On Tue, Jul 29, 2008 at 5:10 PM, Russell Leggett <
> russell.leggett at gmail.com> wrote:
>
>>  That is a performance killer.
>>
>>
>> I don't think it is as much of a performance killer as you say it is.
>> Correct me if I'm wrong, but the standard connection limit is two.
>>
>
> The standard connection limit is 6, not 2, as of IE 8 and Fx 3.  I would be
> very surprised if this came back down or was not adopted by all other
> browser makers over the next year or two.
>
> Furthermore, the connection limit applies only to resources off one host.
>  Sites have for years gotten around this by sharding across hosts (
> img1.foo.com, img2.foo.com, ...).
>
> There are many reasons resources can cause slowdown on the web, but I don't
> view this "archive" proposal as useful in solving them compared to existing
> tactics.  Server sharding and higher connection limits solve the problem of
> artificially low connection limits.  JS script references block further
> parsing in most browsers; the correct solution to this, as Ian said, seems
> like some variant of Safari's optimistic parser.  Referencing large numbers
> of tiny images causes excessive image header bytes + TCP connection overhead
> that can be reduced or eliminated with CSS spriting.
>
> The only thing archives get you IMO is difficulty with caching algorithms,
> annoyances rewriting URLs, potentially blocked parsing, and possibly
> inefficient use of network bandwidth due to reduced parallelization.
>  Archives remove the flexibility of a network stack to optimize
> parallelization levels for the user's current connection type (not that I
> think today's browsers actually do such a thing, at least not well; but it
> is an area with potential gains).
>
> PK
>
Received on Wednesday, 30 July 2008 07:25:25 UTC