Re: [resource-hints] first spec draft

Comments inline.

On Jul 16, 2014, at 1:54 PM, Ben Maurer <ben.maurer@gmail.com> wrote:

> On Wed, Jul 16, 2014 at 6:56 AM, <bizzbyster@gmail.com> wrote:
> 
> But it doesn't matter: there are many scenarios where knowing the expected size of a hinted resource helps the UA make smarter decisions and so load the page faster. Flow control could mitigate the impact, yes, but it is suboptimal because it often kicks in too late: by then we have already spent some period of time filling the pipe with data we don't need.
> 
> Would TCP_NOTSENT_LOWAT (which should help the sender avoid buffering resources too early) address this issue?

It could mitigate the issue a little, to the extent that the server would have less unnecessary data sitting in socket queues when it receives the abort from the UA.
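
For concreteness, here is a minimal sketch of what setting that option on a Linux server socket might look like; the 16 KB threshold is an assumption on my part, not a recommendation:

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Cap how much unsent data the kernel buffers per connection
     * (Linux >= 3.12). With a low threshold, an abort from the UA
     * wastes at most roughly this many already-queued bytes. */
    static int limit_unsent(int fd)
    {
        int lowat = 16 * 1024;  /* assumed value; tune per deployment */
        return setsockopt(fd, IPPROTO_TCP, TCP_NOTSENT_LOWAT,
                          &lowat, sizeof(lowat));
    }

But bytes already on the wire, or already queued below that threshold, are still wasted by an abort, which is why knowing the cost up front still matters.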

> 
> Prefetching objects based on resource hints is speculative, meaning there is always a risk that the UA will fetch bytes unnecessarily. For that reason we need to give the UA information that allows it to make an intelligent decision about whether the risk is worth taking. On a bandwidth-constrained link, cost is directly proportional to size, so the UA needs size to make this decision. The point I'm making is that while priority tells the UA the order in which resources should be fetched to load the page for the best user experience, and will often correlate with the probability that an object will be needed, it tells the UA nothing about the cost of issuing what could be an unnecessary request.
> 
> 
> Imagine for a second that we're talking about a website that has all resources on a single origin and is using a single HTTP 2.0 connection. Is there any reason the UA shouldn't send the server every request it knows about? The number of bytes required to send all the requests it might need should be relatively small. The server can make ordering decisions based on a number of heuristics: file size, file type, etc.
> 
> You could imagine cases where the size of a resource is not known precisely at the time the request is generated. For example, a site with a large CDN might not be able to quickly look up the size of every image it might serve. A site might use SDCH to better compress its JS and CSS; the size of a JS/CSS file would then depend on which dictionaries the client had.

For that type of application, allowing the server to make object scheduling decisions might provide a benefit, in both bandwidth utilization and page load time, over the UA making those decisions via request ordering. In my experience, though, the multi-origin web site is the much bigger problem: it leaves bandwidth under-utilized, which is why page load times are not keeping up with increasing bandwidth. I blogged on this topic recently in case you are interested: http://caffeinatetheweb.com/what-makes-the-web-great-also-makes-it-slow/.
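
To make the server-side scheduling idea concrete, here is a rough sketch of the kind of ordering heuristic a server could apply once it has all the hinted requests in hand. The fields and weights are invented for illustration, not taken from the spec draft:

    #include <stdlib.h>

    /* Hypothetical: one way a server might order hinted resources.
     * Field names and weights are illustrative assumptions only. */
    struct resource {
        const char *path;
        long size_bytes;   /* -1 if unknown, e.g. SDCH-compressed output */
        int  type_weight;  /* e.g. render-blocking CSS/JS = 0, images = 1 */
    };

    static int by_schedule_order(const void *a, const void *b)
    {
        const struct resource *ra = a, *rb = b;
        if (ra->type_weight != rb->type_weight)
            return ra->type_weight - rb->type_weight;
        if (ra->size_bytes < 0 || rb->size_bytes < 0)
            return 0;  /* treat unknown sizes as equal */
        /* Within a class, smaller first: an abort wastes fewer bytes. */
        return (ra->size_bytes > rb->size_bytes) -
               (ra->size_bytes < rb->size_bytes);
    }

    /* qsort(resources, n, sizeof(struct resource), by_schedule_order); */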

>  
> Your comment triggers an idea -- it would be great if we could send a boolean flag to indicate that the object will be needed to render the portion of the page within the viewport. Thoughts?
> 
> Keeping with the scenario of a single HTTP 2.0 origin, if we shift the responsibility of prioritization from the UA to the server, why limit the information about priority to a predefined flag? What about allowing the client to pass an opaque priority descriptor to the server?

If you own both sides then this could be useful.
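
For example (purely hypothetical, and only workable when you control both the UA and the server), the descriptor could be as simple as a small record that the server is free to interpret however it likes, which would also cover the viewport flag I suggested above:

    /* Hypothetical opaque descriptor, meaningful only to a server you
     * also control. Every field here is invented for illustration. */
    struct priority_descriptor {
        int   viewport_critical;  /* needed to render above the fold? */
        float need_probability;   /* UA's estimate the object is used */
        long  expected_size;      /* bytes; -1 if unknown */
    };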

> 
> -b

Received on Wednesday, 16 July 2014 19:21:33 UTC