Re: [sensors] Define processing model

Here is the rationale for the proposed design that decouples the processing model from [animation frames] (quoted from [crbug.com/715872]):

>Queuing an animation task every frame makes a large amount of the system churn even if nothing in the UI is changing in response to the sensor data. We generally consider an infinite raf loop where nothing is changing in the UI a bug in apps because it uses a lot of power. Creating any continuously polling sensor is pretty much the same in the current design.
>
>Note that the raf system is sending plenty of IPCs around the system at 60fps due to the BeginFrameSource and the way the browser/gpu process and renderer talk to each other. I'd really like to see data on the 2ms you're quoting; that's 2ms for what, in which situations?
>
>Also the system you're coupled to currently (raf) is throttled for lots of reasons, for example third party iframes, off screen iframes, it doesn't run at all inside frames that are still loading, it doesn't run in background tabs, it'll block on the GPU being busy, ... So this means that if you insert a bunch of DOM into the page that takes 100ms to raster you're losing 100ms of sensor samples even if the main thread was idle.
>
>I think you want a separate system that schedules the tasks unless you want all those side effects for sensor events.
>
>I'm also curious to see an example sensor data stream and how important it is to get the changes in real time at 60fps. For lots of sensors (ex. ALS) getting it periodically and in batches is often enough. Never letting the app sleep because it creates an instance of the AmbientLightSensor or AccelerometerSensor might not be the best design. It's so easy to end up in a situation where your page is using infinite power.
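To make the anti-pattern being described concrete, here is a minimal sketch using the proposed API surface (`analyze` is a hypothetical function that consumes readings without touching the UI):

```js
// Anti-pattern: an infinite rAF loop polling a sensor keeps the whole
// rendering pipeline (and its cross-process IPC) churning even though
// nothing on screen changes.
const light = new AmbientLightSensor();
light.start();

function pollLoop() {
  if (light.hasReading) {
    analyze(light.illuminance); // hypothetical analysis; no UI change
  }
  requestAnimationFrame(pollLoop); // yet a new frame is scheduled anyway
}
requestAnimationFrame(pollLoop);
```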

As noted earlier, this proposed solution is applicable to any modern browser engine with a multi-process architecture, and is not a Chrome-specific optimization.

@tobie mentioned the WebVR example:

>Well, for WebVR, for example, there pretty much will be a redraw for every frame, so I'm not sure avoiding redundant redraws is something we should optimize for here. Which is not to say it might not be an issue (as mentioned in crbug) for other use cases.

A WebVR implementation redraws a new frame at the native refresh rate of the VR display and reads the latest available pose synchronously (note that there is no `onvrposechanged` event). However, many sensor use cases do suffer from redundant redraws: any use case in which sensor data is collected and analyzed without visible changes to the UI, for example, would be inefficient to implement if it were coupled to animation frames.
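
For concreteness, the WebVR 1.1 pull model mentioned above looks roughly like this (`drawScene` is a hypothetical render helper):

```js
// WebVR 1.1: the pose is pulled synchronously inside the display's own
// rAF callback; there is no push-style pose event.
const frameData = new VRFrameData();
let vrDisplay;

function onVRFrame() {
  vrDisplay.requestAnimationFrame(onVRFrame);
  vrDisplay.getFrameData(frameData); // latest available pose, on demand
  drawScene(frameData.pose);         // hypothetical render helper
  vrDisplay.submitFrame();
}

navigator.getVRDisplays().then(([display]) => {
  vrDisplay = display;
  vrDisplay.requestAnimationFrame(onVRFrame);
});
```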

In other words, a new sensor reading should not have the side effect of scheduling a new animation frame.
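
Sketched with the proposed API, the decoupled model could look like this (`analyze` and `redraw` are hypothetical; the point is that `requestAnimationFrame` is only called when the UI actually needs to change):

```js
// Decoupled model: 'reading' events drive analysis directly; an animation
// frame is requested only when the analysis decides the UI must change.
const accel = new Accelerometer({ frequency: 60 });
let rafPending = false;

accel.addEventListener('reading', () => {
  const uiChanged = analyze(accel.x, accel.y, accel.z); // hypothetical
  if (uiChanged && !rafPending) {
    rafPending = true;
    requestAnimationFrame(() => {
      rafPending = false;
      redraw(); // hypothetical; only now does the pipeline wake up
    });
  }
});
accel.start();
```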

>Reducing latency, gathering data at high frequencies, etc., are important requirements for sensors.

The proposed solution of decoupling from animation frames reduces latency (because we're not synchronizing with animation frames, the event can fire immediately) and allows polling at higher frequencies than animation-frame coupling permits (because we're not limited by the refresh rate of the rendering pipeline). In addition, it improves the power efficiency of implementations, an important factor for battery-powered devices (because we're not forcing redundant redraws).
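
For example, here is a sketch of polling a gyroscope at 120 Hz on a typical 60 Hz display (`recordSample` is a hypothetical logger); coupled to animation frames, half of these readings could never be delivered as distinct events:

```js
// 120 Hz sampling, independent of the rendering pipeline's refresh rate.
const gyro = new Gyroscope({ frequency: 120 });

gyro.addEventListener('reading', () => {
  recordSample(gyro.timestamp, gyro.x, gyro.y, gyro.z); // hypothetical
});
gyro.addEventListener('error', (e) => console.error(e.error.name));
gyro.start();
```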

I consider the consensus position of the Chrome engineers and the Generic Sensor API implementers in Chrome to be a strong case in favour of decoupling the processing model from animation frames.

[animation frames]: https://html.spec.whatwg.org/#animation-frames
[crbug.com/715872]: https://bugs.chromium.org/p/chromium/issues/detail?id=715872#c5

-- 
GitHub Notification of comment by anssiko
Please view or discuss this issue at https://github.com/w3c/sensors/issues/198#issuecomment-303382004 using your GitHub account

Received on Tuesday, 23 May 2017 12:28:00 UTC