
Re: Resource Timing - What's included

From: Kyle Simpson <getify@gmail.com>
Date: Wed, 23 Mar 2011 14:44:19 -0500
Message-ID: <CEFBF0A095DE4086A48DE89031AB2361@spartacus>
To: "Nic Jansma" <Nic.Jansma@microsoft.com>, <public-web-perf@w3.org>
> The problem with explicitly exposing "cache" vs. "network" is that it 
> precisely exposes privacy information about the page's visitor.

Existing dev-tools in the browser have access to all this information and 
don't expose it to JS in a way that would enable such an attack. So, I guess 
I should step back and ask, is there a way we could make this information 
available to "secure channels" (like in-browser dev tools, add-ons, plugins, 
etc) but not to potentially malicious JavaScript? If so, would that offer a 
possible relief from this attack vector?

> I think we're on the same page -- we both want RT to expose the "observed" 
> behavior of browsers.
> My example below was a simplification of the issue, and meant to point out 
> one optimization that I believe all current modern browsers implement. 
> For *static* elements within the page (e.g. <IMG />), current browsers 
> re-use prior duplicate resource URLs instead of downloading them twice. 
> From my simple HTML example, only one resource request for 1.jpg would 
> occur.  Current browsers don't re-check the cacheability of resource 
> within the *same page* for *static* resources.

Even with static resources (like <img> or <script> tags in the page), the 
presence of multiple/duplicate containers that "request" the same resource 
affects the timing of how the page is assembled/rendered. As I said in the 
previous email, multiple <script> elements for the same script will still 
cause some "blocking," because the browser has to assume each of them may 
contain a document.write(), which affects how the rest of the DOM is 
assembled/interpreted. The timing impact of that should be quite obvious.

But if the RT array shows only one entry -- the initial, network-delayed 
request -- then what's missing from that picture is how that resource being 
re-interpreted multiple times on the page had timing impacts on the page and 
on other resources.
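To illustrate the concern (a hypothetical sketch, not behavior defined by the RT draft -- the function and object names here are mine, invented for illustration): if a browser de-duplicates static requests by URL, the RT array ends up with one entry per unique URL, no matter how many elements referenced it, and the extra references' timing effects are invisible:

```javascript
// Sketch: simulate a browser that de-duplicates static resource
// requests by URL when populating the RT array. "buildRTEntries"
// and "pageElements" are illustrative names, not from the spec.
function buildRTEntries(pageElements) {
  const seen = new Set();
  const rtEntries = [];
  for (const el of pageElements) {
    // Only the first reference to a URL triggers a real request,
    // so only that one would appear in the Resource Timing array.
    if (!seen.has(el.url)) {
      seen.add(el.url);
      rtEntries.push({ name: el.url, initiatorType: el.tag });
    }
  }
  return rtEntries;
}

// A page with three <img> tags, two pointing at the same 1.jpg:
const page = [
  { tag: "img", url: "http://example.com/1.jpg" },
  { tag: "img", url: "http://example.com/1.jpg" },
  { tag: "img", url: "http://example.com/2.jpg" },
];
// Only two RT entries result; the duplicate 1.jpg reference (and
// whatever layout/rendering delay it caused) never shows up.
console.log(buildRTEntries(page).length); // 2
```

The point is that the array's length tracks unique URLs, not the number of times the page actually had to interpret the resource.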

>>Yes, and furthermore, I think (mostly for data filtering purposes) having 
>>the status code actually in the data structure would be important. For 
>>instance, if a
>> tool analyzing this data wants to filter out all 404's, etc.
> There may be some privacy/security aspects about exposing the HTTP 
> response code, especially for cross-origin domains.  For example, the 
> presence of a 301 response from a login page on another domain could 
> indicate that the user is already logged in.

Would it be possible to simply expose "success" or "failure" of a loaded 
item, as opposed to the exact HTTP response code? In other words, 
1xx/2xx/3xx codes are "success", and 4xx/5xx codes are "failure".
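Concretely, the coarse mapping I have in mind is just this (a sketch of my proposal above -- the function name and the returned strings are mine, nothing in the RT draft defines them):

```javascript
// Collapse the exact HTTP status code into a coarse success/failure
// flag, so filtering tools can drop 404s etc. without the RT data
// leaking the precise cross-origin response code.
function responseStatus(httpCode) {
  // 1xx/2xx/3xx -> "success"; 4xx/5xx -> "failure"
  return httpCode < 400 ? "success" : "failure";
}

console.log(responseStatus(200)); // "success"
console.log(responseStatus(301)); // "success" -- a redirect no longer
                                  // reveals e.g. logged-in state
console.log(responseStatus(404)); // "failure"
```

Under this mapping the 301-from-a-login-page case you raised is indistinguishable from a plain 200, which I think addresses that particular leak.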

Also, same question as above: is it possible to devise a system by which 
untrusted JavaScript gets the more filtered/watered-down data (to mitigate 
attacks), but tooling/add-ons have access to the more full-fledged data?


Received on Wednesday, 23 March 2011 19:44:58 UTC
