RE: [resource-hints] first spec draft

This part about prefetch vs. caching needs to be handled very carefully if anything changes.

<snip>

  *   For prefetch & prerender, use the cache instructions (no grace period or changes)
This would cripple prefetch and prerender because most dynamic content is marked as non-cacheable. Think of prerender as opening a background tab (or middle click, if you prefer), except that the tab is invisible and is then instantly swapped in on navigation as long as it hasn't expired (insert reasonable TTL here... Chrome uses 300 seconds).
</snip>

These methods cannot change cache behavior for assets without some way for the origin to control it.

This statement needs to be honored:
I think this is especially important for prefetch. If a web dev wants it cached, they should specify an explicit cache instruction (e.g. Cache-Control: private).
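To make the suggestion above concrete, here is a sketch of how an origin could opt in (the hint markup follows the draft; the URL is a placeholder, and the exact interplay between prefetch and cache directives is what's under discussion):

```html
<!-- Hint the fetch of a likely next navigation: -->
<link rel="prefetch" href="https://example.com/next-page.html">
```

The response to that prefetch would then carry a directive such as `Cache-Control: private, max-age=300`, letting the origin — rather than the UA — decide whether the prefetched response may be retained and for how long.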




From: Podjarny, Guy [mailto:gpodjarn@akamai.com]
Sent: Thursday, July 10, 2014 3:52 PM
To: Ilya Grigorik
Cc: public-web-perf
Subject: Re: [resource-hints] first spec draft

Good responses, and good new sections.

Some further comments inline below.

Guy Podjarny | CTO, Web BU | @guypod | www.guypo.com

  *   For some domains (e.g. your CDN domain), it’ll actually be helpful to open multiple connections, not just one (assuming no SPDY/HTTP2). Specifying a number sounds wrong, but does it make sense to put a weight factor on the preconnect? Maybe a “primary” vs. “secondary” domain? Could be getting into diminishing-returns territory.
I agree that it is helpful to open multiple connections for many domains, but I don't see a problem with specifying a number. The draft argues that the UA "is in the best position to determine the optimal number" of connections per domain, but this is not always the case. If the server were able to receive and leverage feedback from browsers ("past request patterns" in the draft), then it could know more about the capabilities of various domains. For instance, we see some servers allow a large number of concurrent connections while others enforce strict low limits. I think it makes sense to include a suggested number of connections in the preconnect hint. The UA is free to ignore that suggestion.

I understand the motivation, but I still think this exposes knobs that should be left to the user agent. The number of connections will vary by the user's connection type, time of day, protocol, and so on, all of which are dynamic. Browsers already track this kind of information and adapt their logic to take it into account — e.g. chrome://dns/ (see "Expected Connects" column). With HTTP/2 this is also unnecessary (and I say that with awareness of your recent thread on http-wg on the subject :)).

I agree with Ilya that the exact number should be a UA decision, which is the reason I suggested a more coarse-grained "primary" and "secondary": they act as hints and are less specific. I think it can have an impact.
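For reference, a minimal sketch of what this might look like — the first line is the plain preconnect hint from the draft; the second shows the coarse weight proposed above using a hypothetical attribute that is NOT part of the draft (the hostnames are placeholders):

```html
<!-- Open a connection (DNS + TCP + TLS) ahead of the first request: -->
<link rel="preconnect" href="https://cdn.example.com">

<!-- Hypothetical "secondary" weight, as proposed above; not in the draft: -->
<link rel="preconnect" href="https://cdn2.example.com" pr="secondary">
```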



Section 2.2 (preload):

  *   With today’s implementations, double-downloading of preloaded resources is a major issue. It would be good to make some explicit definitions about how to handle a resource that has already been requested as a preload resource and is now seen on the page. An obvious rule should be not to double-download, but others may be more complex (e.g. what if we communicated a low priority via SPDY/HTTP2?)
- Matching retained responses with requests: https://igrigorik.github.io/resource-hints/#matching-request


Looks good.
I find the magic number 300 to be out of place in a spec focused on hints to the browser. I suggest you just say the UA should make an attempt to retain it for a reasonable time.



- (Re)prioritization: https://github.com/igrigorik/resource-hints/issues/1

Does SPDY or HTTP2 support such renegotiation?



  *   Content type as text sounds a bit error-prone. Would “text/javascript” cover “x-application/javascript” too? Is there a way to normalize content types?
Jake proposed using "context" instead, which I really like, but need to do some more digging on:
https://github.com/igrigorik/resource-hints/issues/6


Sounds like a better path.
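A rough sketch of how a "context" annotation might read, going by the linked issue — the attribute syntax here is purely hypothetical and not in the draft, and the file path is a placeholder:

```html
<!-- Hypothetical "context" annotation instead of a content type: -->
<link rel="preload" href="/assets/app.js" context="script">
```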



  *   Should preload resources delay unload? (my vote is no)
Preload hints are for the *current* page. As a result, they are cancelled as part of onunload. If you need the request to span across navigations, you should be using prefetch, which is used to load resources for the next navigation.
Damn auto-correct... I meant: should preload block onload (the load event of the current page)? Seems less obvious to me, but my vote is still no, unless the resource was discovered as a regular resource further down the page.
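To illustrate the distinction drawn above (URLs are placeholders):

```html
<!-- Preload: fetched for the *current* page; cancelled on unload: -->
<link rel="preload" href="/assets/critical.js">

<!-- Prefetch: fetched for the *next* navigation; spans across navigations: -->
<link rel="prefetch" href="/next-article.html">
```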

  *   For prefetch & prerender, use the cache instructions (no grace period or changes)
This would cripple prefetch and prerender because most dynamic content is marked as non-cacheable. Think of prerender as opening a background tab (or middle click, if you prefer), except that the tab is invisible and is then instantly swapped in on navigation as long as it hasn't expired (insert reasonable TTL here... Chrome uses 300 seconds).
I think you're referring to unilateral prerenders done by the browser; I think it's different when the page explicitly asks for it.

I think this is especially important for prefetch. If a web dev wants it cached, they should specify an explicit cache instruction (e.g. Cache-Control: private).

  *   What about srcset and the picture element (e.g. Native conditional loading mechanisms)?
I don't see any concerns here. If you have conditional loading, then you must evaluate those conditions. With native <picture> those conditions will be executed by the preparser (yay) if the main doc parser is blocked. Yes, you may not be able to stick a Link header hint or put a <link> hint in the head of the doc, but such is the cost of conditional fetches. On the other hand, if you *know* you need a specific file regardless, feel free to hint it.
Isn't srcset short enough to consider here at least?
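For reference, the kind of conditional fetch at issue — the UA can only pick a candidate once it evaluates the descriptors, so a hint author can't know which file to hint (file names are placeholders):

```html
<!-- The chosen candidate depends on display density, known only to the UA: -->
<img src="photo-small.jpg"
     srcset="photo-small.jpg 1x, photo-large.jpg 2x"
     alt="photo">
```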

Received on Thursday, 10 July 2014 23:07:21 UTC