Re: [w3ctag/design-reviews] Event Timing API (#324)

Replying to all the comments, sorry in advance for the length.

> * We would like to see a clearer problem statement - who is this for, when is it used, how does it produce a user benefit?

Did you get a chance to read over https://wicg.github.io/event-timing/#sec-intro? I think that the introduction section of the spec answers these questions.

> * It would be nice to see an end-to-end example of how the specific data that comes back from this API could be used to improve the user experience

There are some examples at https://wicg.github.io/event-timing/#sec-example.


> * WRT the security & privacy considerations in the explainer: we would like to see some more detail here - e.g. the ways in which this could be misused, if any

https://wicg.github.io/event-timing/#priv-sec has a bit more detail. We haven’t come up with ways in which the API could be misused because the information is only exposed to the appropriate frame target. Even with clickjacking, the timing information exposed does not seem to be a problem because it will measure work done by the attacker.

> * Yes, we would like to see the self-review questionnaire filled out, please.

Sure! Here it is https://docs.google.com/document/d/1fxwq_Fl3wx4YI-djkoDRDBN6-rYzOGUFnNFGChGF58Y/edit?usp=sharing


> * If performance metrics will be gathered from event types for which no event listeners are registered, as the proposal implies, then how is that data useful? E.g., a web site may not have registered any touch events, but the user is using their finger to manipulate the content… does the browser need to record the timing information for what might have happened if the touch events were dispatched?

This question mainly concerns first input, though technically it applies to any event, so let me answer the two cases separately. For first input, we do not want the metric to change in response to changes that are unobservable from the user's perspective. For example, if a website adds a page-wide event listener, that alone should not improve its FirstInputTiming. But if we disregarded events without listeners, it would: before adding the listener, inputs that hit no listener would be ignored even though they take a short time to process; after adding it, those same short-to-process inputs would count as first inputs. Including events without registered listeners is therefore more in line with the actual user experience, so we should include them.
Regarding other events: to be reported at all, an entry needs to pass the duration threshold, and since there are no event handlers to run, that means a lot of work must be happening in the rendering pipeline. This is worth surfacing (it is worth investigating when events cause a lot of rendering work). Events without listeners can still trigger plenty of work (think scrolling or hover effects), so we should inform developers when they are handled slowly.
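As a rough sketch of the surfacing rule described above (the 56 ms threshold comes from this thread; `shouldSurface` is a hypothetical helper, not part of the API), whether an entry gets reported depends only on its duration, not on whether any listener was registered:

```javascript
// Hypothetical helper illustrating the rule discussed above: an entry is
// surfaced only when its duration (input timestamp to the next paint)
// meets the threshold, regardless of registered listeners.
const DURATION_THRESHOLD_MS = 56; // threshold mentioned in this thread

function shouldSurface(entry, threshold = DURATION_THRESHOLD_MS) {
  return entry.duration >= threshold;
}

// Mock entries: an event with no listener can still exceed the threshold
// if rendering work (e.g. hover effects) is slow.
const fastEntry = { name: 'mousemove', startTime: 100, duration: 8 };
const slowEntry = { name: 'mouseover', startTime: 200, duration: 104 };

console.log(shouldSurface(fastEntry)); // false
console.log(shouldSurface(slowEntry)); // true
```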

> * Is the observer type 'events' too generic? UAs might prefer a more fine-grained event type (e.g., 'mouse-events') to cut-down on the perf impact that observing will have…

I disagree that it is too generic if we consider other entries: entryTypes are generally not that specific (ResourceTiming has a single ‘resource’ type, for example). It is a good point that we do not want to forward an unreasonable number of entries, which is why we set a high threshold (56 ms) on the |duration| required for an entry to be emitted. In the near future, PerformanceObserver will support more parameters on its observe() method, and we would like to add one that lets you further filter the entries you see by event type.
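Until such an observe() filter parameter exists (it is speculative at this point), a page can subscribe to the generic entry type and narrow things down itself. A minimal sketch, assuming the draft's generic entry type name ('event') and filtering client-side by event name:

```javascript
// Filtering entries by event name is something a page can do itself today;
// the per-type observe() parameter mentioned above is speculative.
function filterByEventName(entries, names) {
  const wanted = new Set(names);
  return entries.filter((e) => wanted.has(e.name));
}

// Guarded so the sketch also runs outside the browser.
if (typeof PerformanceObserver !== 'undefined') {
  const observer = new PerformanceObserver((list) => {
    // Only entries whose duration crossed the threshold are delivered;
    // narrow further to the events this page cares about.
    const interesting = filterByEventName(list.getEntries(), ['click', 'keydown']);
    for (const entry of interesting) {
      console.log(entry.name, entry.duration);
    }
  });
  // 'event' is the entry type name from the Event Timing draft; treat it
  // as an assumption here.
  observer.observe({ entryTypes: ['event'] });
}

// The filter also works against mock data:
const mock = [
  { name: 'click', duration: 64 },
  { name: 'mousemove', duration: 72 },
];
console.log(filterByEventName(mock, ['click']).length); // 1
```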

> * One assumption in the spec is that this timing data will be useful to help sites determine causal issues for slow events (that may be causing "smoothness" issues in the user experience). However, it might not be good to assume that there is always correlation between the hw timestamp of the UI event and the time that it is dispatched by the UA. For example, servicing multiple queues internally, animations running on the UI thread, decoding/encoding processes, web audio graph processing, etc., to say nothing of browser-external factors like limited memory conditions causing disk-swapping or page-faulting, or large numbers of open apps competing for CPU time by the OS scheduler. All of these things are happening and can lead to high variability in causality for slow events (especially in that time between received timestamp and dispatch). Setting a high bar for emitting the record (50ms) is one way to cut down on the noise, but it may not be sufficient.

That’s true: it is possible to receive entries for events where the work was not actually caused by the event. In aggregate, though, the events that tend to be surfaced repeatedly should stand out. Investigations by Chrome’s input team into input latency show that the work is almost always dominated by JS execution. False positives will happen occasionally, but they should be rare.
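The aggregation step can be sketched as follows (mock beacon data; the helper is hypothetical, not part of any analytics product): a one-off slow entry may well be noise, but an event name that recurs across many slow entries is worth investigating.

```javascript
// Sketch: in aggregate, recurring slow event types stand out even if any
// individual entry reflects work not actually caused by the event.
function countSlowByName(entries) {
  const counts = new Map();
  for (const entry of entries) {
    counts.set(entry.name, (counts.get(entry.name) || 0) + 1);
  }
  return counts;
}

// Mock data, as a RUM beacon might collect over many page views.
const collected = [
  { name: 'click', duration: 120 },
  { name: 'click', duration: 96 },
  { name: 'click', duration: 88 },
  { name: 'keydown', duration: 60 }, // likely a one-off (noise)
];

const counts = countSlowByName(collected);
console.log(counts.get('click'));   // 3 -- recurring, worth investigating
console.log(counts.get('keydown')); // 1
```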

> * In many of the scenarios, the standards performance metrics APIs (mark/measure) are seemingly insufficient because they don't capture the time before the resulting display update (if this assumption is wrong, please correct). I wonder how correlated the cause-and-effect of events to UI change/updates often is. For example, one scenario describes hovering a menu item that triggers a flyout. (We'll assume that this hover/flyout behavior is triggered by an onmouseover event handler, because CSS hover menus do not need script--would those be counted?) There may be an onmouseover event that run, but triggers asynchronous code that eventually causes the menu to appear--in this case, you've lost the causality relationship. I fear there are far too many of these kinds of scenarios in author code, that the metrics collected from these event timings would be useless.

It’s true that this API will miss events whose async work is not executed before the next time the user agent displays pixels on the screen. Tracking that seems hard: we would probably need some way for the developer to notify us when the event’s work has completed. We have an idea for this, event.measureUntil, though it will not be in our initial API. While some use cases will not be possible due to this limitation, I do think we provide a lot of value by surfacing events that are problematic because they block the user agent from updating the display.

> * Perhaps one way to improve the utility of this proposal would be to get more specific--just tracking all mousemoves (for example) may not be terribly useful or provide insightful data. However, if there is a particular element subtree that you are interested in observing the mouseover characterisitics, that it could be more interesting (because it is more specific). In this case, you want to be able to register for performance metrics for events in combination with an element, or perhaps layout box?

This assumes a motivated developer. In most cases we expect this API to be used by RUM analytics vendors without developer intervention, and a RUM vendor won’t know which elements are important. It also assumes that motivated developers already have insight into what is slow, which I don’t think is true. Surfacing which element was targeted is something we’ve discussed, but we decided it can wait for a future version.

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/w3ctag/design-reviews/issues/324#issuecomment-461923068

Received on Friday, 8 February 2019 19:43:54 UTC