- From: Ilya Grigorik <igrigorik@google.com>
- Date: Thu, 2 Apr 2015 10:38:33 -0700
- To: Nat Duca <nduca@google.com>, Philippe Le Hegaret <plh@w3.org>, Boris Zbarsky <bzbarsky@mit.edu>, Eli Perelman <eperelman@mozilla.com>
- Cc: public-web-perf <public-web-perf@w3.org>
- Message-ID: <CADXXVKrB5z_aWrrn5uTVvNRbvd7p7pTSvJ-gA1OSYO4syowJXA@mail.gmail.com>
With the benefit of a few in-person and email conversations on this... never mind me! :)
Closed the previous pull and opened a new one, which should match what we discussed
previously on this thread: https://github.com/w3c/performance-timeline/pull/10

Would appreciate any comments or feedback. Still very much a work in progress, and I've
left a few questions and TODOs in the PR description.

ig

On Mon, Mar 23, 2015 at 11:16 PM, Nat Duca <nduca@google.com> wrote:

> I commented on the PR, but I would really like to keep this lean and mean.
>
> By allowing each observer to maintain its own buffer, with independent
> clearable queues, we increase the complexity of an efficient browser-side
> implementation and, I think, increase the buffering complexity. Right now,
> we pass a single list into all the observers... it's quite simple to
> implement.
>
> I appreciate the interest in an API with improved ergonomics, but we could
> pretty easily build the proposed API as a polyfill that we can give out to
> people, no?
>
> On Fri, Mar 20, 2015 at 12:45 PM, Ilya Grigorik <igrigorik@google.com> wrote:
>
>> I'm working on trying to define the proposed interface [1] within
>> Performance Timeline; some thoughts as I'm trying to spec it...
>>
>> From a developer's perspective, the proposed API is as follows:
>>
>> ```
>> var observer = new PerformanceObserver(function(events) {
>>   // events is a LazyPerformanceEntryList with getEntries,
>>   // getEntriesByType, etc. methods.
>> });
>>
>> observer.observe({eventTypes: ['render', 'composite', 'resource']});
>> observer.disconnect();
>> ```
>>
>> The events (LazyPerformanceEntryList) object might contain one or more
>> PerformanceEntry objects, and the application can choose to process the
>> list immediately, or defer processing until the "time is right" -- e.g. you
>> probably want to wait until you have some idle time to avoid competing with
>> app-critical processing. In fact, especially with Frame Timing, the latter
>> (deferred) use case is the recommended route... but the API ergonomics for
>> this are not great:
>>
>> (a) As a naive developer I'm likely to just create an array and start
>> pushing the LazyPerformanceEntryLists onto it so that I can process them
>> later. However, now I have an array of "lazy lists", each of which supports
>> the getEntries{ByName, ByType, ...}() methods, but I can't query the array
>> itself with the same methods. Now I have a nested foreach, and this feels
>> awkward...
>>
>> (b) Perhaps we could extend the LazyPerformanceEntryList to be
>> "appendable"? As a developer I get the LazyPerformanceEntryList on the
>> first callback, retain a reference to it, and push the other lists onto it.
>> This lets me construct a single "lazy list" which I can query with the
>> getEntries* methods. This feels a bit better...?
>>
>> (c) What if the UA automates step (b)? One way to approach it: once the
>> PerformanceObserver is registered it starts appending observed entries into
>> a single LazyPerformanceEntryList; the callback fires as before, but
>> instead of a new list within each callback we simply return a reference to
>> the same list owned by that PerformanceObserver... In effect, the
>> PerformanceObserver has a "local lazy timeline" which automatically buffers
>> the events and provides the same access methods (getEntries*) as the global
>> Performance Timeline. This makes it very simple to work with for a
>> developer:
>>
>> + It removes the burden of efficient implementation from the developers;
>>   they can't get it wrong.
>> + It is simple to explain and work with: each observer maintains a local
>>   timeline that's active while the observer is registered.
>> + It allows efficient buffering and minimizes the number of created
>>   objects on both ends.
>> + It works with both buffered and immediate (delta) processing approaches:
>>   * Buffer until you're ready to process, then apply your logic and call
>>     clear() to reset... repeat.
>>   * You can also process in each callback and immediately call clear() to
>>     reset each time.
>>
>> Thoughts, comments? </vigorous handwaving>
>>
>> ig
>>
>> P.S. I have the beginnings of a very rough attempt at (c) here:
>> https://github.com/w3c/performance-timeline/pull/8#issuecomment-83740635
>>
>> [1] https://docs.google.com/document/d/1fXtxtPC1Gg4PeLXI_axj6AvMTznf9X5lrj5HTyR3r3w/edit#
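To make the option (c) ergonomics above concrete, here is a rough usage sketch. It assumes the per-observer "local lazy timeline" exposes the same getEntries* accessors plus the clear() method discussed in the thread; requestIdleCallback stands in for "process when idle" and reportToAnalytics() is an app-defined placeholder -- neither is part of the proposal itself.

```
// Hypothetical usage of the option (c) model: the observer owns a local
// "lazy timeline" that keeps accumulating entries until clear() is called.
var pending = false;

var observer = new PerformanceObserver(function (timeline) {
  if (pending) return;   // processing is already scheduled for this buffer
  pending = true;

  requestIdleCallback(function () {
    // Query the buffered timeline with the familiar accessors.
    timeline.getEntriesByType('resource').forEach(function (entry) {
      reportToAnalytics(entry.name, entry.duration);  // app-defined placeholder
    });

    timeline.clear();    // reset the buffer; new entries accumulate from here on
    pending = false;
  });
});

observer.observe({eventTypes: ['render', 'composite', 'resource']});
```

The immediate-processing variant simply drops the idle deferral: handle the entries inside each callback and call clear() right away.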
>> On Fri, Feb 13, 2015 at 10:49 AM, Ilya Grigorik <igrigorik@google.com> wrote:
>>
>>> On Fri, Feb 13, 2015 at 10:45 AM, Nat Duca <nduca@google.com> wrote:
>>>
>>>>> interface LazyPerformanceEntryList {
>>>>>   bool HasEntryType(string);
>>>>>   PerformanceEntryList getEntries();
>>>>>   PerformanceEntryList getEntriesByType(DOMString entryType);
>>>>> }
>>>>
>>>> This sounds awesome. I updated the doc with this text.
>>>
>>> Would it also make sense to expose getEntriesByName(DOMString entryName)?
>>> For example, I have an observer for the "subresource" entryType, but I
>>> only care about "widget.com/thing", and it'd be nice to be able to skip
>>> iterating over all the ResourceTiming objects each time to check for it?
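To illustrate that last question, a quick sketch of the observer callback with and without a hypothetical getEntriesByName() on the lazy list, mirroring the getEntriesByName() accessor the global Performance Timeline already exposes; handleWidgetTiming() and the URL are example placeholders.

```
// Without getEntriesByName(): filter every batch of entries by hand.
var observer = new PerformanceObserver(function (entries) {
  entries.getEntriesByType('subresource').forEach(function (entry) {
    if (entry.name === 'https://widget.com/thing') {  // example URL
      handleWidgetTiming(entry);                      // app-defined placeholder
    }
  });
});
observer.observe({eventTypes: ['subresource']});

// With a hypothetical getEntriesByName(), analogous to the existing
// performance.getEntriesByName() on the global timeline:
var namedObserver = new PerformanceObserver(function (entries) {
  entries.getEntriesByName('https://widget.com/thing').forEach(handleWidgetTiming);
});
namedObserver.observe({eventTypes: ['subresource']});
```

The second form avoids walking every ResourceTiming entry in application code, which is the ergonomic gap the question above points at.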
Received on Thursday, 2 April 2015 17:39:41 UTC