Re: Notification for new navigation timing entries

On Thu, Feb 12, 2015 at 2:29 PM, Boris Zbarsky <bzbarsky@mit.edu> wrote:

> On 2/12/15 5:20 PM, Nat Duca wrote:
>
>> The design goal I had in mind for this was to avoid construction of
>> PerformanceEntry objects until you actually needed them. Accessing these
>> entries is quite slow compared to just buffering them.
>>
>
> Here's a question: why?
>
Here's my reasoning; maybe I've gone off the rails. It happens often. :)

I think there are a ton of great use cases for the PerformanceTimeline that
open up once it can hold an order of magnitude more events than it does today.
Right now it is architected around a few hundred events from resource timing.

When we start pushing frames in there, we're looking at 60 events per
second. Or, since there are two events per frame, 120 events a second.

But that's not even really what worries me. At that level, creating objects
is still probably okay.

The worry here is the people who find value in structurally tracing their
app using user timing.

Some of the more advanced sites I've seen add quite a lot more detail than
that. We've marked up YouTube, for instance, with a ton of structural
annotations using http://google.github.io/tracing-framework/.  There you
can get thousands of user timing events in just the three hundred ms it
takes to start a video....
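
To make the volume concrete, here's a rough sketch of what that kind of
per-component markup looks like (hypothetical names, not the actual
tracing-framework annotations); a structured app runs something like this
for every component, every frame:

function updateComponent(component) {
  // One pair of marks plus a measure per component, per frame. With a few
  // hundred components that's already thousands of user timing entries a second.
  performance.mark(component.name + ':update:begin');
  component.update();
  performance.mark(component.name + ':update:end');
  performance.measure(component.name + ':update',
                      component.name + ':update:begin',
                      component.name + ':update:end');
}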

(We already do this kind of tracing in our browsers: the tool behind pictures like
https://hacks.mozilla.org/2015/01/project-silk/ [the name escapes me]
creates pictures like this https://hacks.mozilla.org/files/2015/01/silk.png
which is probably tens of events per *frame*. Chrome tracing has thousands
of events per frame:
http://www.chromium.org/developers/how-tos/trace-event-profiling-tool/recording-tracing-runs
.)

Most of these tools operate in a "record, do something, stop recording and
get events" mode. The idea with those approaches is that you can buffer
cheaply, and only do expensive work during the get events stage.
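
In user timing terms, that pattern looks roughly like the sketch below,
using today's synchronous getters (startScenario(), stopScenario(), and
report() are placeholders for whatever the tool actually does):

// Record: mark/measure calls are cheap; the entries just get buffered.
startScenario();
// ... app runs, issuing performance.mark() / performance.measure() ...
stopScenario();

// Get events: only now do we pay to materialize PerformanceEntry objects.
var entries = performance.getEntriesByType('measure');
entries.forEach(function (e) {
  report(e.name, e.startTime, e.duration);
});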



Issuing hundreds or thousands of events into the user timing API per frame
seems completely plausible from the call-site point of view. They're just
call sites.

But if an observer is registered and we do this wrong, then we can end up in
a situation where we're creating PerformanceEntry objects thousands of times
a frame. My hope is that we can avoid that situation.
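
Concretely, with a naive delivery model, something like the following sketch
(written against the proposed PerformanceObserver shape, so the details are
assumptions) forces the UA to reify a wrapper for every single entry each
time the callback fires, even though the callback only looks at one name:

var observer = new PerformanceObserver(function (list) {
  // If 'list' is a plain array (or has indexed getters the callback walks),
  // every entry has already been turned into a PerformanceEntry object.
  var entries = list.getEntries();
  for (var i = 0; i < entries.length; i++) {
    if (entries[i].name === 'frame:commit') {   // 'frame:commit' is a made-up mark name
      recordCommit(entries[i]);                 // recordCommit() is a placeholder
    }
  }
});
observer.observe({ entryTypes: ['mark', 'measure'] });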


> Note that for cases when the implementation doesn't immediately reify the
> PerformanceEntry objects, it ends up needing to cache the reified versions
> and whatnot, right?

Definitely. Once accessed, they need to exist.

From talking to our V8 folks, we can't take plain array types and lazily
create their elements. I haven't exhaustively poked at how hard it would be
for us to change that, but I was a bit reluctant to tie PerformanceObserver's
fast-ness to it, given how useful it'd be if we just got the feature out there.


>> That is, it was intentional to not have forEach and other things on
>> these arrays.
>>
>
> Given a length and indexed getter, why not?  I mean, the caller can
> clearly just walk along the whole list reifying the PerformanceEntry
> objects anyway...


As I think about it, maybe there shouldn't be any indexed getters etc., and
only an AsList(). E.g.:

interface LazyPerformanceEntryList {
    boolean HasEntryNamed(DOMString name);
    PerformanceEntryList AsList();
};

That way you have to explicitly ask for the entries.
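
With that shape, an observer callback that only cares about a couple of named
entries never forces the whole list to be reified. A sketch of the intended
usage (HasEntryNamed/AsList are just the hypothetical methods above;
'video:firstFrame' is an example mark name and uploadSlowStartDiagnostics()
is a placeholder for app code):

var observer = new PerformanceObserver(function (lazyList) {
  // Cheap check: no PerformanceEntry objects have been created yet.
  if (!lazyList.HasEntryNamed('video:firstFrame'))
    return;

  // Only pay the reification cost when we actually want the data.
  var entries = lazyList.AsList();
  uploadSlowStartDiagnostics(entries);
});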

But such a direction change hinges on you buying my argument that observer
callbacks should have overhead proportional only to the work the callback
actually does, rather than to the size of the event list being delivered. If
you don't buy that, then I'm out on a limb anyway. :)

Received on Friday, 13 February 2015 00:15:42 UTC