Re: Cross-Origin Restrictions

I'm with Patrick here:

1) inferring whether an object is in the cache can already be done by comparing the object's download time against the RTT (a rough sketch of such a probe follows below this list)
2) if the default is to not send the times, 3rd-party providers will not change their stacks to start sending "Timing-Allow-Origin", so we will only see timings for local objects; given the current growth of 3rd-party content (CDNs, cloud providers, APIs), local objects will be a diminishing case.
3) my suggestion is opt-in by default for HTTP and opt-out by default for HTTPS for timing visibility
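
For what it's worth, here is a rough sketch of the kind of probe I mean in 1). The URL, the RTT estimate and the 0.5*RTT threshold are all made-up illustrations, not anything from a real deployment:

    // Rough sketch: time a third-party image fetch and compare against an RTT
    // estimate. The URL and the 0.5*RTT threshold are illustrative assumptions.
    function probeCacheState(url: string, estimatedRttMs: number): Promise<boolean> {
      return new Promise((resolve) => {
        const start = Date.now();
        const img = new Image();
        // onload/onerror both fire once the request completes; only the elapsed
        // time matters, not whether the resource decoded successfully.
        img.onload = img.onerror = () => {
          const elapsed = Date.now() - start;
          // Heuristic: a cache hit completes in well under one network round trip.
          resolve(elapsed < estimatedRttMs * 0.5);
        };
        img.src = url;
      });
    }

    // e.g. probeCacheState("https://bank.example/logo.png", 80)
    //        .then(hit => console.log(hit ? "probably cached" : "probably not cached"));

The threshold would obviously need tuning per network, but the point is that this kind of information is already obtainable today, without the new API.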

Cheers,

--Ricardo

On Oct 6, 2011, at 8:11 AM, Sigbjørn Vik wrote:

> On Thu, 06 Oct 2011 14:26:24 +0200, Patrick Meenan <pmeenan@webpagetest.org> wrote:
> 
>> Are the concerns with providing the specific component times for 3rd-party resources or to provide the timing information at all (and does the opt-in http header even address the concerns)?
> 
> This is for including detailed 3rd-party information.
> 
>> Unlike CSS :visited, to make any use of the timing information you have to actively probe resources on the network and you can already get overall timing information from javascript so the component times would be the only new information being offered (and possibly a higher accuracy for the overall time).  Do the component times offer information that couldn't be gleaned through active probing today?
> 
> Yes, it offers more detailed information in an easier and more reliable manner. A shift from e.g. a 50% chance of a good estimation to a 100% chance of an accurate result is quite significant.
> 
>> If I want to know if you have been to a banking site I can just request a static resource from the bank and time it - then compare the time to the expected RTT (different cacheability of resources will give me different levels of information).  Granted, it's one step more complicated than just referencing the resources and getting the timings but it's not offering up new information that you couldn't probe for before.
> 
> The breakdown of where the time went cannot be found without the new API. Over a network which varies a lot, it is currently impossible to determine such sub-times, but that will be possible with more detail available. Making it easier and more detailed just makes black hats' lives easier.
> 
> One might as well ask the opposite question: if this new API didn't expose anything new, what would the point of it be? JavaScript libraries could add in timing information, and web developers could use that instead - most likely they will end up using the new APIs through some such libraries anyhow.
> 
>> Ultimately the benefit for users will come from improved page performance as site owners (and 3rd-party widget providers) get information about the performance of all aspects of their pages from the field.  The component times would help diagnose issues faster (is it a back-end problem, a networking problem or a GSLB routing issue) but those can actively be investigated through other means.
> 
> True, there are possible indirect gains, but not gains which depend on the individual user's information, nor gains which matter to the user during that page load. If I as a user visit some shady site, I really don't care about possible gains for that site some time in the future, but I do care about my own privacy. (If I do care about that site's future, and trust the site, I might be convinced by the site to enable some site-specific pref to give it more information, though.)
> 
> -- 
> Sigbjørn Vik
> Core Quality Services
> Opera Software
> 

Received on Thursday, 6 October 2011 23:08:00 UTC