Re: Notification for new navigation timing entries

... after discussing it at TPAC, providing a proper API to cover this case
still makes sense. Back to the drawing board ...

I agree with Nic that we need to deliver the actual events as part of the
notification. Otherwise we are not solving the existing problems: (a)
calling getEntries requires that each handler keeps track of events it has
already seen, (b) other handlers can clear the buffer before you can query
for new events, (c) the buffer can be full and we'll drop events. Instead of
worrying about materializing events which the subscriber may not need,
perhaps we just need to provide the ability to filter events as part of the
subscription... same outcome, sans the above problems.
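
For reference, a rough sketch of the kind of stomping we're trying to avoid,
using today's APIs (two independent libraries, both consuming Resource Timing):

// library A
window.addEventListener('load', function() {
  var resources = performance.getEntriesByType('resource');
  // ... beacon the entries somewhere ...
  performance.clearResourceTimings(); // frees the buffer for new entries
});

// library B, registered later on the same page
window.addEventListener('load', function() {
  var resources = performance.getEntriesByType('resource');
  // empty: library A already cleared the shared buffer (problem b)
});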

(A) the simplest option is to change the earlier proposal to deliver an array
of events (no filtering):

performance.addEventListener('entries', function(events) {
  events.forEach(function(event) {
    // process event: event.entry.name, event.entry.entryType, etc...
  });
});

(B) adapt a MutationObserver-like model:

var opts = { navigation: true, resource: true, mark: false,
             measure: true, renderer: false };
window.performance.observe(opts, function(events) {
  events.forEach(function(event) {
    // process event: event.entry.name, event.entry.entryType, etc...
  });
});

The benefit of the latter (or a similar API) is that it can provide options
to listen only to particular entry types (e.g. the above example subscribes
to a subset of them), and we could also define additional flags (now or in
the future) to make things more efficient. A single subscriber can also
handle multiple entry types in one callback; rough sketch below. Thoughts?
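
(Sketch assumes the hypothetical observe() shape from (B); the per-type
handling is just illustrative.)

window.performance.observe({ resource: true, measure: true }, function(events) {
  events.forEach(function(event) {
    switch (event.entry.entryType) {
      case 'resource':
        // e.g. queue the entry for the RUM beacon
        break;
      case 'measure':
        // e.g. aggregate custom user timing data
        break;
    }
  });
});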

ig

On Fri, Oct 24, 2014 at 2:25 PM, Ilya Grigorik <igrigorik@google.com> wrote:

> After thinking about this some more, it occurred to me that you can
> implement most of what we're looking for here with existing APIs:
> - each event type has a separate and configurable buffer
> - each buffer has a "onfull" callback
>
> The application can register "onfull" callbacks for each event type it
> cares about, and even adjust the size of each buffer to optimize recency vs
> batch size - e.g. it can set the buffer size to 1 to get immediate
> notifications (I'm not implying that this is a good idea :)). In other
> words, all the building blocks are there, and arguably this is an even
> nicer model since it allows the application to specify its own requirements
> (sample rates, batch sizes, etc) for each event type...
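>
> For resources specifically, a rough version of this can be assembled from
> the Resource Timing buffer primitives we already have, e.g.:
>
>   performance.setResourceTimingBufferSize(50); // smaller = fresher, smaller batches
>   performance.onresourcetimingbufferfull = function() {
>     var batch = performance.getEntriesByType('resource');
>     // process/beacon the batch, then free the buffer for new entries
>     performance.clearResourceTimings();
>   };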
>
> There is one gotcha with the above model, and that's when you have multiple
> subscribers that can stomp on each other by clearing the underlying
> buffers... Not sure I have any good workarounds for that, but at the same
> time, that's an existing issue -- audit your libraries, eliminate the races.
>
> tl;dr: perhaps we should leave all of this to a client-side library?
>
> ig
>
>
>
>
>
> On Wed, Oct 22, 2014 at 10:28 PM, Nic Jansma <nic@nicj.net> wrote:
>
>>  I'm thinking that passing the entry, or entries, in question seems to
>> be the only definitive way of ensuring that the listener knows exactly
>> which "entries" the callback is firing for.
>>
>
> Agreed. A "something has happened" notification is not particularly
> useful, since anyone who cares about it will, by definition, need to then
> figure out what that thing actually was, which leads to even more work
> than simply providing the event directly. That, plus the additional
> complication of a race condition where listener A can clear the
> buffer before a second listener gets a chance to read it.
>
>
>> Your suggestion to have this be fired per-type seems like a good possible
>> route, as consumers would likely only be interested in the types they know
>> about and want to deal with.
>>
>
> We can certainly scope subscriptions by entry type... But it's not clear to
> me that would be such a huge win.
>
>
>> - Nic
>> http://nicj.net/
>> @NicJ
>>
>> On 10/22/2014 9:35 PM, Nat Duca wrote:
>>
>>  Hopefully we copy the microtasks spec for the model of when the event fires.
>>
>>  But also, thought: Why are we passing the events into the callback?
>> Can't the caller just do getEntries? Remember, if we do that we've actually
>> got to construct the newly added events in a separate array from the global
>> buffer. This could make efficiently implementing getEntries hard: in
>> Chrome, we hold off creating the JS-bound objects for an entry until you
>> call getEntries, which lets us *buffer up* the performance events at high
>> speed. That's important for making this API performance-safe.
>>
>>  Broadly, I think we should opt for the primitives here... an event saying
>> that something was added. If you want to then get the entries, use
>> getEntries or getEntriesByType.
>>
>>  Also, how does this behave when the buffer is full? The buffer sizes
>> seem to be specified per event type, which would imply that the "entry
>> added" event fires per event type? If one buffer is full and another isn't,
>> what happens? Should this event be per event type?
>>
>>  I don't know the right solution but I smell some architectural
>> fragility here. We need to remember this is a performance API... at every
>> step of the way, it should be lean and mean. :)
>>
>> On Wed, Oct 22, 2014 at 4:01 PM, Ilya Grigorik <igrigorik@google.com>
>> wrote:
>>
>>>
>>> On Wed, Oct 22, 2014 at 3:28 PM, Nat Duca <nduca@google.com> wrote:
>>>
>>>> Forgive me for replying late in this thread, but this direction is very
>>>> concerning for non-navigation uses of the timeline. Things like putting
>>>> frames in the timeline, for instance. We're looking at huge numbers of
>>>> these callbacks, proportional to how much we want to put in the timeline.
>>>> This then means that putting things into the timeline is performance
>>>> disturbing!
>>>>
>>>>  Please, please, please, consider the mutation observer model where a
>>>> single event is fired on the microtask unwind.
>>>>
>>>
>>> Hmm.. Josh's original proposal did suggest batching events, but I nudged
>>> it towards individual events:
>>>
>>> https://github.com/w3c/performance-timeline/issues/1#issuecomment-59538916
>>>
>>>  If we revert back to the batched model... how about:
>>>
>>>   performance.addEventListener('entries', function(events) {
>>>     events.forEach(function(event) {
>>>       // process event: event.entry.name, event.entry.entryType, etc...
>>>     });
>>>   });
>>>
>>>  Does that look reasonable? What's the logic for batching these events?
>>>
>>>  ig
>>>
>>>
>>
>>
>
>

Received on Monday, 8 December 2014 17:41:18 UTC