
Re: Minimal event timing proposal

From: Timothy Dresser <tdresser@chromium.org>
Date: Tue, 27 Mar 2018 18:26:04 +0000
Message-ID: <CAHTsfZBE9EH0VL=pcnOANYQ0jQSJZU4oOtE5cPf5ZqhJsorLPA@mail.gmail.com>
To: Mark Rejhon <mark@blurbusters.com>
Cc: Todd Reifsteck <toddreif@microsoft.com>, Ilya Grigorik <igrigorik@google.com>, "public-web-perf@w3.org" <public-web-perf@w3.org>
In scenarios like you describe, I suspect that having event listeners
compare performance.now() to event.timeStamp is going to be fairly
effective. This API primarily adds value for events taking place early in
page load, and for events which don't have event listeners but are related
to slow browser behavior.
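Tim's listener-side suggestion can be sketched as follows; the `inputDelay` helper and the 50ms cutoff used here are illustrative, not part of the proposal:

```javascript
// Sketch of comparing performance.now() to event.timeStamp inside a
// listener. Both values are on the same high-resolution clock in
// modern browsers, so their difference approximates input delay.
function inputDelay(eventTimeStamp, now) {
  // Time elapsed between the input event's timestamp and when the
  // listener actually ran.
  return now - eventTimeStamp;
}

// Browser-only wiring; guarded so the sketch also runs outside a page.
if (typeof document !== 'undefined') {
  document.addEventListener('click', (event) => {
    const delay = inputDelay(event.timeStamp, performance.now());
    if (delay > 50) {
      console.log(`Slow input: listener ran ${delay.toFixed(1)}ms late`);
    }
  });
}
```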

Regarding your concerns on coalesced input, I believe that's out of scope
here, but it's addressed by the coalesced points API here
<https://w3c.github.io/pointerevents/extension.html#dom-pointerevent-getcoalescedevents>.
Note that the event timing API is completely unrelated to the rate of event
dispatch.
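A minimal sketch of reading those coalesced points inside a pointermove handler; the `collectPoints` helper is illustrative:

```javascript
// Sketch of the Pointer Events Extensions getCoalescedEvents() API:
// each pointermove may carry several intermediate samples that arrived
// since the last dispatch, so high-rate input isn't lost.
function collectPoints(pointerEvent) {
  const events = pointerEvent.getCoalescedEvents
    ? pointerEvent.getCoalescedEvents()
    : [pointerEvent]; // fall back to the single dispatched event
  return events.map(e => ({ x: e.clientX, y: e.clientY, t: e.timeStamp }));
}
```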

Tim

On Tue, Mar 27, 2018 at 2:17 PM Mark Rejhon <mark@blurbusters.com> wrote:

> I think it's fine to continue down this path, as long as the 50ms
> threshold is configurable.
>
> Just to clarify:
> -- There's a computer-side display latency (e.g. various driver-based
> synchronization APIs, often very co-operative with the display, such as
> 2-way communications with GSYNC signaling) and a display-side display
> latency (which cannot be detected).
>
> There are situations where adjusting the 50ms threshold down to sub-1ms
> is favorable in specific special applications (or via a permissions API).
> Events might need to be emitted in real time at sub-millisecond levels
> (e.g. 2000 events per second from a high-frequency input device emitted
> at 0.5ms intervals).  Obviously, this isn't necessary for the vast
> majority of applications, but the standardization path should protect the
> configurability of this threshold down to abnormally small values on a
> case-by-case basis.  Decoupling the input event rate from the display
> refresh rate, and allowing real-time non-coalesced notifications, is
> beneficial for certain kinds of high-performance applications.  Informing
> VR renderers of input earlier, rather than buffering-and-coalescing,
> helps certain types of renderers prepare in advance even if the frame
> rate is much lower than the input rate -- e.g. some low-lag programming
> techniques benefit from a realtime stream of 2000 separate single events
> per second from a 2000 Hz input device, even if rendering is only, say,
> 90 frames per second.
>
>
>
>
> On Tue, Mar 27, 2018 at 1:09 PM, Timothy Dresser <tdresser@chromium.org>
> wrote:
>
>> Long term, I think allowing the threshold (for event timing and long
>> tasks) to be reduced makes sense, but I don't think we should block
>> initial standardization on this.
>>
>> This proposal considers measuring the duration from event dispatch
>> until the display actually updates to be out of scope.
>> I believe we should land what I've got proposed, and then consider adding
>> an additional "nextPaint" (name TBD) attribute as a followup.
>>
>> This API provides significant value without knowledge of the display
>> time, and including the display time introduces a bunch of additional
>> complexity.
>>
>> The primary counter-argument is that it would make more sense to
>> threshold on the time from the event's hardware timestamp until the display
>> is updated, instead of just until event processing is finished. If an event
>> is handled quickly, but it causes a slow frame to be produced, we won't
>> report a record.
>>
>> There are a few ways we can deal with this:
>>
>>    - We could change to threshold on the duration until display when we
>>    spec the "nextPaint" attribute.
>>    - We could rely on the slow frames API, which we'll eventually
>>    need to spec to handle slow animations that aren't driven by input.
>>
>> If others think we should require exposing the time until the display is
>> updated before shipping this API, I'm happy to add it, but I anticipate it
>> slowing progress significantly and would rather ship a targeted initial API
>> and then iterate.
>>
>> On Mon, Mar 26, 2018 at 5:04 PM Todd Reifsteck <toddreif@microsoft.com>
>> wrote:
>>
>>> Mark,
>>>
>>> I’m not sure exactly how that feedback relates to Tim’s proposal. Can
>>> you give specific and direct feedback on Tim’s proposal if I’m not
>>> accurately capturing it below?
>>>
>>>
>>>
>>> My translation of the feedback being given on this specification:
>>>
>>>    - Could a site enable this at < 50 ms when using this for high speed
>>>    animations/games on well-tuned sites?
>>>       - Yes, but it may trigger a lot more often than many web sites
>>>       want to measure. Perhaps this could default to 50 ms, but allow a site to
>>>       opt in to < 50 ms for scenarios such as what Mark is describing?
>>>    - How do we measure the duration between the event updating the DOM
>>>    and the display actually showing it?
>>>       - Tim?
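The opt-in idea in the first bullet above could look something like this sketch; `durationThreshold` and the clamping minimum are hypothetical names for illustration, not part of Tim's proposal:

```javascript
// Hypothetical sketch of letting a site opt in to a threshold below
// the 50 ms default. Both 'durationThreshold' and the browser-chosen
// minimum are assumptions, not spec'd behavior.
const DEFAULT_THRESHOLD_MS = 50;
const MIN_THRESHOLD_MS = 16; // a UA could clamp requests to limit overhead

function effectiveThreshold(options = {}) {
  const requested = options.durationThreshold;
  if (requested === undefined) return DEFAULT_THRESHOLD_MS;
  return Math.max(requested, MIN_THRESHOLD_MS);
}
```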
>>>
>>>
>>>
>>> With regard to the general statements made:
>>>
>>> Having the point data at much higher precision than once per frame
>>> should allow all movement data to be used when processing. If the
>>> screen is only updated at 60 Hz, updating the actual graphics more
>>> often than 60 Hz can cause “double resource usage”, with animation work
>>> thrown away multiple times per frame. The purpose of delivering all
>>> input data to input callbacks once per paint loop is to get the “best
>>> of both worlds”: the site can ensure the single paint uses all input
>>> data when calculating the UI updates. If this is not true in real-world
>>> usage, I think the input and rendering teams would be interested to
>>> hear exactly how, but that is not the focus of this specification.
>>> (Let’s please start a new thread if that is a topic we’d like to
>>> discuss.)
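The buffer-then-consume-once-per-paint pattern Todd describes can be sketched as follows; the helper names are illustrative:

```javascript
// Sketch: buffer every input sample as it arrives, then consume the
// whole buffer once per paint, so a single frame uses all movement
// data instead of redrawing per input event.
const samples = [];

function onPointerMove(e) {
  const events = e.getCoalescedEvents ? e.getCoalescedEvents() : [e];
  for (const ev of events) samples.push({ x: ev.clientX, y: ev.clientY });
}

function drainSamples(buffer) {
  // Remove and return all buffered points for this frame.
  return buffer.splice(0, buffer.length);
}

// Browser-only wiring; guarded so the sketch also runs outside a page.
if (typeof requestAnimationFrame !== 'undefined') {
  function frame() {
    const points = drainSamples(samples);
    // ...update the UI once, using every point collected since the
    // previous frame...
    requestAnimationFrame(frame);
  }
  requestAnimationFrame(frame);
}
```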
>>>
>>>
>>>
>>> Hope that helps!
>>>
>>> Todd
>>>
>>>
>>>
>>> *From:* blurbusters@gmail.com <blurbusters@gmail.com> * On Behalf Of *Mark
>>> Rejhon
>>> *Sent:* Monday, March 26, 2018 1:47 PM
>>> *To:* Timothy Dresser <tdresser@chromium.org>
>>> *Cc:* Ilya Grigorik <igrigorik@google.com>; public-web-perf@w3.org
>>> *Subject:* Re: Minimal event timing proposal
>>>
>>>
>>>
>>> Regarding First Input Delay, we do lots of input latency measurements
>>> at Blur Busters, albeit from other points of view (e.g. gaming latency).
>>> Even 5ms makes a big difference in certain contexts.  Over the long
>>> term, 1000Hz mice should be streamed directly in an atomic manner to
>>> JavaScript -- instead of only ever being coalesced into the
>>> PointerEvents API.
>>>
>>>
>>>
>>> Microsoft Research has found that realtime processing of 1000Hz input
>>> makes a huge difference:
>>>
>>> https://www.youtube.com/watch?v=vOvQCPLkPt4
>>>
>>>
>>>
>>> Even on a 60Hz display it can still have large benefits.
>>>
>>>
>>>
>>> That said, while battery power is understandably a concern, Safari has
>>> been limiting lots of processing to 60Hz even on the new 120Hz iPads,
>>> and this certainly should be fleshed out better in all of this
>>> standardization work.  requestAnimationFrame() in Safari only runs at
>>> 60fps even on the 120Hz iPads, even though Section 7.1.4.2 of HTML 5.2
>>> recommends running it at the full frame rate of higher-Hz displays (as
>>> other browsers do: Chrome, Firefox, Opera, and even some versions of
>>> Edge).  This adds unwanted input lag to touchscreen events on Safari --
>>> a major input lag consideration, adding +8ms of input lag to browser
>>> apps rendered in <canvas> by running rAF() at 60fps instead of 120fps
>>> on the 120Hz iPads.  The display-side equation is part of the lag
>>> arithmetic too, even though there are also
>>> power-consumption-versus-lag tradeoffs.
>>>
>>>
>>>
>>>
>>>
>>> On Mon, Mar 26, 2018 at 4:26 PM, Timothy Dresser <tdresser@chromium.org>
>>> wrote:
>>>
>>> I've updated the proposal on WICG here
>>> <https://github.com/WICG/event-timing/blob/master/README.md>.
>>> This proposal requires dispatching PerformanceEventTiming entries only
>>> when the duration from event startTime until event processing finishes
>>> exceeds 50ms.
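Observing those entries might look like the sketch below; the `'event'` entry type and attribute names follow the WICG event-timing draft and may change, and the `exceedsThreshold` helper is illustrative:

```javascript
// Helper expressing the proposed rule: an entry is only dispatched
// when processing ends more than `threshold` ms after the event's
// startTime.
function exceedsThreshold(startTime, processingEnd, threshold = 50) {
  return processingEnd - startTime > threshold;
}

// Browser-only wiring; guarded so the sketch also runs outside a page.
if (typeof window !== 'undefined' && 'PerformanceObserver' in window) {
  const observer = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      // processingEnd - startTime is the event-processing duration the
      // proposal thresholds on.
      console.log(entry.name, entry.processingEnd - entry.startTime);
    }
  });
  observer.observe({ entryTypes: ['event'] });
}
```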
>>>
>>>
>>>
>>> Tim
>>>
>>> On Fri, Feb 16, 2018 at 1:43 PM Ilya Grigorik <igrigorik@google.com>
>>> wrote:
>>>
>>> Tim, thanks for drafting this! I like where this is headed.
>>>
>>>
>>>
>>> Left a few questions in the doc and added this to our agenda for next
>>> design call
>>> <https://docs.google.com/document/d/10dz_7QM5XCNsGeI63R864lF9gFqlqQD37B4q8Q46LMM/edit#heading=h.ljsbtcikd3cl> (date
>>> TBD).
>>>
>>>
>>>
>>> On Thu, Feb 15, 2018 at 1:53 PM, Timothy Dresser <tdresser@chromium.org>
>>> wrote:
>>>
>>> Based on our discussion on First Input Delay
>>> <https://docs.google.com/document/d/1Tnobrn4I8ObzreIztfah_BYnDkbx3_ZfJV5gj2nrYnY/edit> at
>>> the last WG meeting, I've put together a minimal proposal
>>> <https://docs.google.com/document/d/10CdRCrUQzQF1sk8uHmhEPG7F_jcZ2S3l9Zm40lp3qYk/edit#heading=h.fbdd8nwxr7v4> for
>>> an event timing API.
>>>
>>>
>>>
>>> The extensions to the DOM spec are fairly straightforward, and the API
>>> itself is pretty bare bones. The main question is whether or not
>>> dispatching an entry per DOM event is too expensive.
>>>
>>> If it is, we'll need to devise a method to only report a subset of
>>> events.
>>>
>>> I'd appreciate any feedback you have,
>>> Tim
>>>
>>>
>>>
>>>
>>>
>>
>
Received on Tuesday, 27 March 2018 18:26:40 UTC
