
Re: "Packing on the Web" -- performance use cases / implications

From: Ilya Grigorik <igrigorik@google.com>
Date: Wed, 21 Jan 2015 10:28:45 -0800
Message-ID: <CADXXVKoB5VUAVYELiRo7kid9PYNXU3Ug7mZDmP5FyA_jrs4bdQ@mail.gmail.com>
To: Yves Lafon <ylafon@w3.org>
Cc: Alex Russell <slightlyoff@google.com>, Travis Leithead <travis.leithead@microsoft.com>, Mark Nottingham <mnotting@akamai.com>, Yoav Weiss <yoav@yoav.ws>, public-web-perf <public-web-perf@w3.org>, "www-tag@w3.org List" <www-tag@w3.org>, Jeni Tennison <jeni@jenitennison.com>
On Tue, Jan 20, 2015 at 3:56 PM, Martin Thomson <martin.thomson@gmail.com>
wrote:

> On 20 January 2015 at 14:15, Ilya Grigorik <igrigorik@google.com> wrote:
> > On Tue, Jan 20, 2015 at 10:46 AM, Martin Thomson
> > <martin.thomson@gmail.com> wrote:
> > Martin, are you commenting on the original or the new proposal that
> > removes payloads from the package? FWIW, I think the new proposal (just
> > the URLs of resources, no payloads) removes the performance concerns and
> > defers them to the transport layer (where they belong)... which leaves us
> > with just usability - e.g. a single URL for sharing/distribution of some
> > bundle of files.
> I refer to the new suggestion.  This new proposal is an incomplete
> replacement for the incumbent proposal.

Can you elaborate on what the missing components are?

> From discussions at Mozilla, the primary advantage of packaging was
> the usability issue.  In fact, there seems to be a moderate amount of
> antipathy toward addressing performance problems using bundling.  For
> one, bundling encourages patterns we've been actively discouraging.

Yes, exactly the same concerns here. Bundling response bodies introduces far
too many issues and performance pitfalls.

On Wed, Jan 21, 2015 at 2:07 AM, Yves Lafon <ylafon@w3.org> wrote:

> On Thu, 15 Jan 2015, Ilya Grigorik wrote:
>> A bit of handwaving on pros/cons of a ~manifest-like approach:
>> + Single URL to represent a bundle of resources (sharing, embedding, etc.)
>> + Fetching is uncoupled from the manifest: granular caching, revalidation,
>> updates, prioritization... all of my earlier issues are addressed.
>> + You can make integrity assertions about the manifest and each
>> subresource within it (via SRI)
>> + No complications or competition with HTTP/2: you get the best of both
>> worlds
>> + Can be enhanced with HTTP/2 push, where the request for the manifest
>> becomes the parent stream against which (same-origin) subresources are
>> pushed
> Well, HTTP/2 uses a dependency graph now; how about making this manifest a
> serialized version of it? It could help an HTTP/1.1 client talking to an
> HTTP/1.1->HTTP/2 gateway/cache do prioritization.
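To make that concrete, here's a rough sketch of what a URLs-only manifest
carrying a serialized dependency graph might look like. The format and field
names (`dependsOn`, `weight`, loosely mirroring HTTP/2 stream dependencies and
weights) are invented for discussion, not a spec, and the integrity values are
placeholders:

```javascript
// Hypothetical manifest format -- invented for discussion, not a spec.
// Each entry lists a URL, a placeholder SRI digest, the URL it depends
// on (mirroring an HTTP/2 stream dependency), and a priority weight.
const manifest = {
  scope: "https://example.com/app/",
  resources: [
    { url: "index.html", integrity: "sha256-AAAA...", dependsOn: null,         weight: 256 },
    { url: "app.css",    integrity: "sha256-BBBB...", dependsOn: "index.html", weight: 220 },
    { url: "app.js",     integrity: "sha256-CCCC...", dependsOn: "index.html", weight: 180 },
    { url: "data.json",  integrity: "sha256-DDDD...", dependsOn: "app.js",     weight: 100 }
  ]
};

// Serialize the dependency graph into a dispatch order: parents before
// children, heavier siblings first -- the prioritization hint an
// HTTP/1.1->HTTP/2 gateway or cache could act on.
function fetchOrder(m) {
  const byParent = new Map();
  for (const r of m.resources) {
    const key = r.dependsOn;
    if (!byParent.has(key)) byParent.set(key, []);
    byParent.get(key).push(r);
  }
  const order = [];
  const visit = (parent) => {
    const children = (byParent.get(parent) || []).sort((a, b) => b.weight - a.weight);
    for (const c of children) { order.push(c.url); visit(c.url); }
  };
  visit(null);
  return order;
}

console.log(fetchOrder(manifest));
// -> ["index.html", "app.css", "app.js", "data.json"]
```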

This is a slight tangent, but I believe we need to (a) teach the fetch() API
to communicate dependencies to the network stack, and (b) surface a
fetch-settings attribute (or some such) on elements to allow the same in
declarative markup. With (a) and (b) in place, we can "serialize" the
dependency graph via vanilla <link> and fetch() calls.
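A minimal sketch of what (a) could feel like. To be clear, the dependsOn
option and the fetch-settings attribute below are hypothetical -- nothing
like them exists in the Fetch API or HTML today; this just simulates a
front-end that dispatches queued requests parents-first, where a real design
would hand the hint to the network stack as an HTTP/2 stream dependency:

```javascript
// Sketch of idea (a): a fetch() front-end that accepts a dependency hint.
// "dependsOn" is an invented option, not part of the Fetch API.
const queue = [];

function fetchWithDeps(url, opts = {}) {
  queue.push({ url, dependsOn: opts.dependsOn || null });
}

// Dispatch queued requests so every parent is issued before its children.
function flush(doFetch) {
  const done = new Set();
  let progressed = true;
  while (queue.length && progressed) {
    progressed = false;
    for (let i = 0; i < queue.length; i++) {
      const r = queue[i];
      if (r.dependsOn === null || done.has(r.dependsOn)) {
        doFetch(r.url);
        done.add(r.url);
        queue.splice(i, 1);
        progressed = true;
        break;
      }
    }
  }
}

// Idea (b), the declarative twin, might look like (hypothetical markup):
//   <link rel="preload" href="/app.js" fetch-settings="depends-on: /index.html">
fetchWithDeps("/data.json", { dependsOn: "/app.js" });
fetchWithDeps("/index.html");
fetchWithDeps("/app.js", { dependsOn: "/index.html" });

const issued = [];
flush((u) => issued.push(u));
console.log(issued);  // -> ["/index.html", "/app.js", "/data.json"]
```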

With that in mind... *Alex/Jeni:* can you elaborate on why <link
rel=import>, as I outlined above [1], is or is not sufficient to express a
"package"? It seems like it provides all the necessary pieces already, plus

> But one use case of the package format was to be able to send the whole
> package instead of the first URL. In your proposal you still have to
> generate all requests.

Yes, and I strongly believe that's the right behavior if the consumer of
that package is a browser and/or any tool that can initiate fetches
programmatically -- doing so allows it to perform granular fetching,
caching, revalidation, and prioritization, and to resolve duplicate
sub-dependencies between subresources across different packages, and so on.
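A toy illustration of that duplicate-resolution point, assuming two
hypothetical URLs-only packages that share a subresource: a consumer that
drives its own fetches can collapse the duplicate, whereas a bundled-payload
package would ship the shared bytes twice.

```javascript
// Two hypothetical URLs-only packages that share a subresource.
const packageA = ["/vendor/jquery.js", "/a/app.js", "/a/app.css"];
const packageB = ["/vendor/jquery.js", "/b/widget.js"];

// A fetch-aware consumer dedupes across packages; "seen" stands in for
// the HTTP cache / in-flight request table.
function fetchPackages(packages, doFetch) {
  const seen = new Set();
  for (const pkg of packages) {
    for (const url of pkg) {
      if (seen.has(url)) continue;  // already cached or in flight
      seen.add(url);
      doFetch(url);
    }
  }
  return seen.size;
}

const fetched = [];
fetchPackages([packageA, packageB], (u) => fetched.push(u));
console.log(fetched.length);  // 4 -- jquery.js is fetched once, not twice
```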


[1] http://lists.w3.org/Archives/Public/public-web-perf/2015Jan/0041.html
Received on Wednesday, 21 January 2015 18:29:53 UTC
