- From: <bizzbyster@gmail.com>
- Date: Fri, 29 Aug 2014 09:59:53 -0400
- To: Yoav Weiss <yoav@yoav.ws>
- Cc: "whatwg@whatwg.org" <whatwg@whatwg.org>, Ian Hickson <ian@hixie.ch>
We need to bite the bullet and add a priority attribute and an expected-size attribute.

Priority: The web server will often have information that allows it to know better than the UA about the priority of objects. Basing this on type is not super useful when we have only a few types and lots of objects. The UA can ignore it, but a priority field allows the web server to give the UA as much information as it has about how to download the objects to optimize load time. Why not just make it like probability and allow the web server to specify a value between 0.0 and 1.0, with 1.0 being a top-priority object?

Expected-size: I've argued for this previously (https://github.com/igrigorik/resource-hints/issues/12) and Ilya agrees it's a nice-to-have. Along with the probability attribute that is in Ilya's latest draft, this provides a simple way to threshold which objects to prefetch at the UA. Sorry to re-raise expected-size, but I think it's relevant again in the context of priority. A small device, or a UA on a bandwidth-challenged link, could use a simple scheme such as only preloading resources above priority X whose probability*expected-size is smaller than a certain value, when expected-size is available. Regardless of the details of the logic, the UA needs all three fields (probability, priority, and expected-size) to make a good decision. We know this from 15 years' experience doing prefetching in the network for satellite.

Thanks,

Peter
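To make the idea concrete, here is a minimal sketch of the kind of hint and UA-side thresholding Peter describes. The attribute names, values, and the exact formula are illustrative only and not part of any draft:

    <!-- a hint carrying all three fields: probability of use,
         server-assigned priority (0.0-1.0), and expected size in bytes -->
    <link rel="prefetch" href="/js/maps.js"
          probability="0.8" priority="0.9" expected-size="150000">

    // one possible policy for a small device or bandwidth-challenged link
    function shouldPrefetch(hint, minPriority, maxExpectedCost) {
      if (hint.priority < minPriority) return false;
      if (hint.expectedSize == null) return true; // expected-size is optional
      // expected cost in bytes = probability of use * expected size
      return hint.probability * hint.expectedSize < maxExpectedCost;
    }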
On Aug 28, 2014, at 5:31 PM, Yoav Weiss <yoav@yoav.ws> wrote:

> On Sat, Aug 23, 2014 at 2:44 AM, Ian Hickson <ian@hixie.ch> wrote:
> [snip]
>
>> On Wed, 4 Sep 2013, William Chan (陈智昌) wrote:
>>>
>>> * Given current browser heuristics for resource prioritization based on
>>> resource type, all <script> resources will have the same priority.
>>> Within HTTP/1.X, that means you'll get some amount of parallelization
>>> based on the connection-per-host limit and what origins the script
>>> resources are hosted on, and then get FIFO. New additions like lazyload
>>> attributes (and perhaps leveraging the defer attribute) may affect this.
>>> With HTTP/2, there is a very high (effectively infinite) parallelization
>>> limit. With prioritization, there's no contention across priority
>>> levels. But since script resources today generally all have the same
>>> priority, they will all contend, and most naive servers are going to
>>> round-robin the response bytes, which is the worst thing you could do
>>> with script resources, since current JS VMs do not incrementally process
>>> script resources, but process them as a whole. So round-robining all the
>>> response bytes will just push out the start time of JS processing for
>>> all scripts, which is rather terrible.
>>
>> I'm not sure what to do about this exactly.
>
> Wouldn't that be something that is best handled as part of HTTP? e.g.
> sending a flag with the request indicating whether the resource can be
> progressively decoded or not?
>
>>> * Obviously, given what I've said above, some level of hinting of
>>> prioritization/dependency amongst scripts/resources within the web
>>> platform would be useful to the networking layer, since the networking
>>> layer can much more effectively prioritize resources and thus mitigate
>>> network contention. If finer-grained priority/dependency information
>>> isn't provided in the web platform, my browser's networking stack is
>>> likely going to have to, even with HTTP/2, do HTTP/1.X-style contention
>>> mitigation by restricting parallelization within a priority level. Which
>>> is a shame, since web developers probably think that with HTTP/2, they
>>> can have as many fine-grained resources as they want.
>>
>> It's hard to come up with a super fine-grained model that works well with
>> multiple competing scripts, but we can do better than what we have now,
>> certainly. It seems we can at least split things into the following
>> categories, in order of highest priority to lowest:
>>
>> 1. resources that are needed and are causing something to block
>>    e.g. <script src="foo.js"></script>
>> 2. resources that are needed and are neither blocking anything nor
>>    explicitly deferred
>>    e.g. <img src="foo.png" ...>
>> 3. resources that are needed but are deferred
>>    e.g. <script src="foo.js" defer></script>
>> 4. resources that the browser wants
>>    e.g. <link rel=icon>, <html manifest>
>> 5. resources that are not yet needed but which the author wants
>>    precached when possible, and which have not been marked deferred
>>    e.g. <link rel=subresource href=...>
>> 6. other resources
>>
>> Is that fine-grained enough?
>
> Wouldn't the "needs" attribute enable the browser to create a dependency
> tree that would allow for finer-grained priorities?
> e.g.
> 1. Needed resources with no dependencies, that block initial render
> 2. Needed resources that blocking resources need (e.g. the jquery script
>    in <script src="foo.js" needs="jquery"></script>). (We can have
>    multiple levels of priorities here, if the dependency tree is deep.)
> 3. Needed blocking resources
> 4. Needed non-blocking resources
> etc.
>
> If I understand correctly, that would provide the fine-grained priorities
> that Will is after, and that will enable the network layer to be smarter
> about which resource is needed next.
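A minimal sketch of the dependency chain Yoav describes, assuming the proposed needs="" attribute (IDs, file names, and the derived ordering are illustrative): the UA could infer that jquery.js must arrive before plugin.js, which must arrive before app.js, and prioritize the network accordingly.

    <script id="jquery" src="jquery.js"></script>
    <script id="plugin" src="plugin.js" needs="jquery"></script>
    <script src="app.js" needs="plugin"></script>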
>
> [snip]
>
>> Pulling all of the above together, here's the tentative proposal:
>>
>> These "loadable" elements:
>>
>>   <script>, <link>, <style>, <video>, <img>, <object>, <iframe>, <audio>
>>
>> ...get the following new attributes:
>>
>>   needs=""        Gives a list of IDs of other elements that this one
>>                   needs, known as The Dependencies. Each dependency
>>                   is added to this element's [[Dependencies]] in the
>>                   ES6 loader.
>>
>>   load-policy=""  The load policy. Consists of a space-separated
>>                   set of keywords, of which one may be from the
>>                   following list: block, async, optimistic,
>>                   when-needed, late-run, declare. The other
>>                   allowed keywords are precache, low-priority,
>>                   and force. (Maybe we disallow "block" and
>>                   "force" since they're for legacy only.)
>>                   Different elements have different defaults.
>>                   "precache" isn't allowed if the keywords
>>                   "block" or "async" are specified, since those
>>                   always load immediately.
>
> Can you perhaps expand on what each of these would mean?
>
> [snip]
>
>>> [Use-case P:] download dynamic page components (e.g. maps) only on
>>> larger devices.
>>
>> Long term, we could add a media="" attribute to <script> to make this
>> easier. Short term, you can do it with scripts by checking the width of
>> the device and calling load() on the script if you want it.
>
> Wouldn't that still download the resource, and just avoid the
> parsing/execution part?
>
> I think we agree regarding the long-term solution here. I'm fine with the
> short-term one being "use a script loader for this case".
>
>> If you're a browser vendor who wants to implement <script media>, please
>> comment on this bug:
>>
>>   https://www.w3.org/Bugs/Public/show_bug.cgi?id=23509
>
> I'm interested in discussing how <script media> would work. I'll continue
> that discussion in the bug.
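For reference, a rough sketch of the two approaches discussed for use-case P. The media attribute on <script> is hypothetical (see the bug above), and the breakpoint and file name are made up; the short-term variant only injects the script element when the media query matches, so smaller devices never fetch it:

    <!-- long term (not yet specified): -->
    <script src="maps.js" media="(min-width: 768px)"></script>

    <!-- short term, with a small script loader: -->
    <script>
      if (window.matchMedia("(min-width: 768px)").matches) {
        var s = document.createElement("script");
        s.src = "maps.js";
        document.head.appendChild(s);
      }
    </script>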
Received on Friday, 29 August 2014 14:00:31 UTC