- From: Ilya Grigorik <igrigorik@google.com>
- Date: Tue, 2 Sep 2014 15:06:09 -0700
- To: Ben Maurer <ben.maurer@gmail.com>
- Cc: public-web-perf <public-web-perf@w3.org>
- Message-ID: <CADXXVKqmvi1kV-b=Or+Rr2ugAWhuKaFSOmeFZ8TRFDkk9Txi5w@mail.gmail.com>
On Tue, Sep 2, 2014 at 2:44 PM, Ben Maurer <ben.maurer@gmail.com> wrote:
> Yeah, I think that would be pretty helpful. I think you'd really need a #
> of bytes field to make good sense of this (what if a proxy decides to
> rechunk the document?).
>
Right, good point. So, something like:
{
  responseStart: 158681.0310000000000000,
  responseEnd: 159071.0310000000000000,
  transferSize: 12345
}
And with that, once again, we get back to our favorite discussion of
exposing resource size info in RT/NT... :-)
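(For illustration only -- a rough sketch of how a page might walk those
per-chunk records to get a running byte count it can correlate with its own
flush points. The chunks/transferSize fields are just the shape proposed
above, not anything in the spec today:)

var r = performance.getEntriesByName('http://somesite.com/resource')[0];
var bytesSoFar = 0;
// "chunks" and "transferSize" are the hypothetical fields sketched above,
// not part of Resource Timing as it stands.
((r && r.chunks) || []).forEach(function (chunk, i) {
  bytesSoFar += chunk.transferSize;
  // The app can map the running byte count onto its own byte offsets for
  // each section it flushed from the server.
  console.log('chunk ' + i + ' arrived by ' + chunk.responseEnd + 'ms, ' +
              bytesSoFar + ' bytes total');
});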
> One advantage I could see to the declarative performance mark method is
> the ease of use. It takes quite a bit of sophistication to record the
> mapping of # of bytes => part of page. One other thing you could imagine a
> declarative interface doing is exposing more than just the networking time
> ("here's how long it took to execute all the JS before this point", "here's
> how long it took until this point would have been painted on the screen").
>
To me the main benefit would be that we don't have to insert an inline JS
block... and block the parser on it... thus avoiding the gotcha of
perf-monitoring code negatively affecting the performance of the page it's
measuring. Whether that's worth it, though, is a separate discussion. Also,
note that if the mark is parsed by the doc parser, then the (blocking) JS
time would be automatically factored in.
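(Again purely illustrative -- one made-up shape for such a declarative mark;
the element/attribute name here is invented and nothing like it is specified:)

<!-- hypothetical markup, not a real API: a mark the preload scanner could
     record as soon as these bytes arrive, regardless of any CSS/JS still
     pending on the main thread -->
<meta name="x-performance-mark" content="feed_arrived">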
ig
> On Tue, Sep 2, 2014 at 2:29 PM, Ilya Grigorik <igrigorik@google.com>
> wrote:
>
>> Ben, this is a tricky one. It sounds like what you're actually asking for
>> is timing data for "named chunks"...
>>
>> Even if we had a declarative performance mark, it seems odd to record it
>> as part of the preload-scan. Instead, I would expect it to be processed by
>> the doc parser - i.e. this chunk of markup was processed at this point.
>> Then again, while that has some benefits (don't need to inline JS and block
>> parsing on it; wait for stylesheets), it doesn't actually address your
>> specific problem.
>>
>> As a thought experiment: what if we had chunk timings as part of Nav /
>> Resource Timing? For example:
>>
>> var r = performance.getEntriesByName('http://somesite.com/resource');
>> >> {
>>      responseEnd: 160263.62900000095,
>>      ... (snip) ...
>>      startTime: 158183.55500000325,
>>      entryType: "resource",
>>      name: "http://somesite.com/resource",
>>      chunks: [
>>        {
>>          responseStart: 158681.0310000000000000,
>>          responseEnd: 159071.0310000000000000
>>        },
>>        ...
>>        {
>>          responseStart: 159867.9795000000000000,
>>          responseEnd: 160257.9795000000000000
>>        }
>>      ]
>>    }
>>
>> Granted, you can't peek inside the chunk and see its contents (or assign
>> a label to it), but this would nonetheless give you a direct view into how
>> the HTML (or any other resource) was streamed from the server. It seems
>> like a generally useful thing, and I know that YouTube folks have asked for
>> exactly this in the past... Would something like that help answer what
>> you're after?
>>
>> ig
>>
>>
>>
>>
>>
>> On Thu, Aug 28, 2014 at 12:50 PM, Ben Maurer <ben.maurer@gmail.com>
>> wrote:
>>
>>> Hey,
>>>
>>> One problem I've seen people face recently is measuring the arrival
>>> times of an incrementally rendered page. I wanted to get people's thoughts
>>> on (1) whether there's some easy way to do this that I'm missing and (2) if
>>> not, whether it would be worth creating one.
>>>
>>> Imagine the following page:
>>>
>>>
>>> <script>
>>>   window.performance.mark("header_arrived");
>>>   requireStylesheet("header.css");
>>>   renderHeader();
>>>   window.performance.mark("header_loaded");
>>> </script>
>>>
>>> <!-- the server flushes the content up to this point.
>>> it takes 500 ms to generate the following content -->
>>> <script>
>>>   window.performance.mark("feed_arrived");
>>>   requireStylesheet("feed.css");
>>>   renderFeed();
>>>   window.performance.mark("feed_loaded");
>>> </script>
>>>
>>> The {header|feed}_arrived event is intended to measure how long it took
>>> to get the bytes of data to the client. But in reality, it also measures
>>> the time it takes to load CSS stylesheets (because if you add a CSS
>>> stylesheet it blocks the execution of future script tags) and the amount of
>>> time it takes to execute the JavaScript functions renderXXX (because those
>>> also block future script tags).
>>>
>>> I've seen a number of incidents where somebody thought that there was a
>>> regression in getting data to the browser when what was actually happening
>>> is that they were blocked on JS/CSS.
>>>
>>> To solve this, it seems like one would need some kind of declarative
>>> performance mark that was scanned by the preload scanner and was not
>>> subject to being delayed by JavaScript and CSS on the main thread.
>>>
>>> Any thoughts?
>>>
>>
>>
>
Received on Tuesday, 2 September 2014 22:07:16 UTC