
RE: Resource Timing - What's included

From: Nic Jansma <Nic.Jansma@microsoft.com>
Date: Fri, 10 Jun 2011 21:03:55 +0000
To: "public-web-perf@w3.org" <public-web-perf@w3.org>, Kyle Simpson <getify@gmail.com>
Message-ID: <F677C405AAD11B45963EEAE5202813BD19E2C7E6@TK5EX14MBXW651.wingroup.windeploy.ntdev.microsoft.com>
Could you expand on the use-cases you're trying to address?  Here are a few that you've mentioned:

1.       Detecting positional blocking behavior, for example, a LINK element after a SCRIPT element.

2.       "It's useful information to analyze simply how many duplicates of a resource are on a page (memory consumption purposes, etc)"

3.       "I think that same array of resources could potentially have new attributes added to each entry, such as their rendering timings (startRender, endRender, etc)"

For #1, while I understand that (today's) browsers have certain behaviors that can block page load (such as SCRIPT/LINK ordering), I'm not sure I see the benefit of putting this information in the ResourceTiming array.

As you mention, it would be very useful to warn developers of this potentially blocking behavior, but that's why I think it's more appropriate to expose this information from a browser's Developer Tools.  The goal of ResourceTiming is to expose data from-the-wild that you can't get today. Document authoring and element ordering problems are something a developer can look for and debug on their own machines (via Dev Toolbars) during development.

Additionally, even though today's browsers block for several scenarios, that behavior may change in the future.  What if the next release of Browser X no longer blocks on this ordering?  The ResourceTiming array doesn't tell you that something blocked, so including placeholder elements in there for today's browser behaviors merely exposes element ordering within the document, not the browser's actual behavior when parsing that element.

Finally, isn't it possible to look for simple ordering issues today by element enumeration?  This won't find complex cases where elements are nodes apart, but you could improve this to catch more cases:
            var elements = document.getElementsByTagName('*');
            for (var i = 1; i < elements.length; i++) {
                if (elements[i].tagName == "LINK" &&
                    elements[i].rel == "stylesheet" &&
                    elements[i - 1].tagName == "SCRIPT") {
                    alert("LINK " + elements[i].href + " follows SCRIPT " + elements[i - 1].src);
                }
            }
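The adjacent-element check above could be extended to flag a stylesheet LINK that appears anywhere after an earlier SCRIPT, not just immediately after one. A DOM-free sketch of that improved scan (the element list here is a plain array standing in for document.getElementsByTagName('*'), and the function name is illustrative):

```javascript
// Scan a flattened element list for stylesheet LINKs that appear
// anywhere after a SCRIPT, not only in the adjacent position.
// Each entry mimics the fields the original loop inspects.
function findBlockedLinks(elements) {
    var blocked = [];
    var lastScript = null;
    for (var i = 0; i < elements.length; i++) {
        var el = elements[i];
        if (el.tagName === "SCRIPT") {
            lastScript = el;
        } else if (el.tagName === "LINK" &&
                   el.rel === "stylesheet" &&
                   lastScript !== null) {
            blocked.push({ link: el.href, script: lastScript.src });
        }
    }
    return blocked;
}
```

This catches the "nodes apart" cases the simple loop misses, since it remembers the most recent SCRIPT rather than only checking the immediately preceding element.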

For #2, I'm not sure that I agree that it's useful to analyze how many elements on the page link to the same resource.  If 30 IMGs all use the same icon (a "Like" icon for example), they shouldn't consume any more memory than the elements normally consume just because they link to the same URL.

For #3, you have a point that if we don't include all duplicated resources (such as multiple IMGs that link to the same .png), then it won't contain the complete list of all IMGs with potential future startRender and endRender attributes.  Though if we're going down that path, all elements that have render time, network or not (such as embedded SVG), should be included in the array as well.

The size of the ResourceTiming array would then expand quite a bit.  Our default "safe" buffer of 150 ResourceTiming elements would not be enough, as the number of elements we put in the array could grow significantly.  One of the goals of ResourceTiming is for the data we collect to be lightweight enough to be always-on, for every page load.  If the ResourceTiming array starts accumulating hundreds of timing events, I don't think we would be able to keep it always-on.  That creates ease-of-use challenges for the analytics scripts and Dev Toolbars consuming this information, as sites would have to update themselves to turn it on.
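To make the size concern concrete, compare how many entries the two models would produce for the 30-IMGs-one-icon page mentioned above (the counting helper below is purely illustrative; it is not part of any proposal):

```javascript
// Count how many ResourceTiming entries each model would produce
// for a list of resource URLs referenced by elements on a page.
function countEntries(urls) {
    var unique = {};
    for (var i = 0; i < urls.length; i++) {
        unique[urls[i]] = true;
    }
    return {
        perUniqueUrl: Object.keys(unique).length,  // current one-entry-per-URL design
        perElement: urls.length                    // duplicates included
    };
}

// 30 IMG elements all pointing at the same "Like" icon:
var urls = [];
for (var n = 0; n < 30; n++) {
    urls.push("http://example.com/like.png");
}
var counts = countEntries(urls);
// counts.perUniqueUrl === 1, counts.perElement === 30
```

Under the per-element model, a single shared icon contributes 30 entries toward the 150-element buffer; under the per-URL model it contributes one.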

- Nic

From: public-web-perf-request@w3.org [mailto:public-web-perf-request@w3.org] On Behalf Of Kyle Simpson
Sent: Thursday, June 09, 2011 4:02 PM
To: public-web-perf@w3.org
Subject: Re: Resource Timing - What's included

In principle, I don't disagree with this approach. However, I think the more important issue is which elements are in the array. The original design of this array is that only one entry per unique resource-URL would show up, and my contention all along is that every single container (which references an external resource) should have an entry in the array, even if that means duplicates.

In your model, that would reduce the "waste" of empty zeros for network timing, because properties wouldn't be exposed automatically; instead, each element's specific type of info could be queried on-demand.

So, are you agreeing that all containers (even duplicates) will appear in the array? If so, I agree with your idea. If not, I think there's still a fundamental "miss" in terms of the use-case I'm trying to address.


From: James Simonsen <simonjam@chromium.org>
Sent: Thursday, June 09, 2011 4:43 PM
To: Kyle Simpson <getify@gmail.com>
Cc: public-web-perf <public-web-perf@w3.org>
Subject: Re: Resource Timing - What's included

On Fri, Jun 3, 2011 at 8:18 AM, Kyle Simpson <getify@gmail.com> wrote:
In the future, I think that same array of resources could potentially have new attributes added to each entry, such as their rendering timings (startRender, endRender, etc). But I'm not asking for that now. I am only asking that they be included in the array, as foundation for future improvements.

I fully agree that we need to have a foundation for exposing further information, but I'd like to do it in a slightly different way.

I'm worried that trying to put all possible information about an element in one place will lead to a lot of empty or unused information. For example, script parsing time is relevant to inline scripts, but network timing is not. So all those fields would be empty but still take up space. Likewise, I don't expect every developer to care about every bit of information on a resource.

Instead, I'd like to solve the problem using composition. I would have one API (Unified Timing) that can access different aspects about the element. In my proposal, you could query PERF_SCRIPT to get an object containing parsing time and PERF_RESOURCE to get an object containing network information. The browser will only provide the ones that are populated. Developers would be free to pick and choose which of those are relevant to them.
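A rough sketch of how that composition might look to a caller, using the PERF_SCRIPT and PERF_RESOURCE names from the proposal (the query function, its signature, the aspect values, and the field names are all assumptions for illustration; nothing here is specified):

```javascript
// Hypothetical Unified Timing lookup: each aspect of an element's
// timing is queried separately, and only populated aspects return
// an object -- unpopulated ones return null instead of empty fields.
var PERF_SCRIPT = "script";
var PERF_RESOURCE = "resource";

function makeTimingStore(aspects) {
    // aspects: map of aspect name -> timing object (absent if N/A)
    return {
        query: function (aspect) {
            return aspects.hasOwnProperty(aspect) ? aspects[aspect] : null;
        }
    };
}

// An inline script has parse timing but no network timing:
var inlineScript = makeTimingStore({
    script: { parseStart: 10, parseEnd: 12 }
});
// inlineScript.query(PERF_SCRIPT)   -> { parseStart: 10, parseEnd: 12 }
// inlineScript.query(PERF_RESOURCE) -> null
```

The point of the shape is the one James describes: a developer asks only for the aspects they care about, and the browser never has to materialize fields that don't apply to a given element.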

Received on Friday, 10 June 2011 21:05:02 UTC

This archive was generated by hypermail 2.4.0 : Friday, 17 January 2020 18:01:08 UTC