Re: [sensors] Define processing model

> any use case in which sensor data is collected and analyzed without visible changes to the UI would be inefficient to implement if coupled with animation frames.

I absolutely understand that now, and I don't think anyone disagrees with this. As mentioned above, I'm mostly concerned about the following things:

1. Latency between polling the sensor and `requestAnimationFrame`. It's unclear to me whether this new design changes that latency. Your explanation above treats the latency between polling the sensor and firing a "change" event as the key metric; it isn't. The key is how that delivery is tied to rAF. My question here is simple: does the design change here imply implementation changes that increase this latency, and if so, by how much?
2. Will firing events at polling speed hinder our ability to increase the polling speed in the near future (my understanding from reading the [Oculus Rift paper](http://msl.cs.illinois.edu/~lavalle/papers/LavYerKatAnt14.pdf) on the subject is that the Rift polls its gyroscope at 1,000 Hz)?
3. Won't these rapidly firing events interrupt rendering and cause jank, as @slightlyoff suggests in https://github.com/w3ctag/design-reviews/issues/115#issuecomment-236365671?
4. The previous architecture allowed us to increase polling frequency (to reduce latency) without delivering more readings to the application layer. This was one of our mitigation strategies: https://w3c.github.io/sensors/#limit-number-of-delivered-readings. Do we no longer consider exposing > 60 Hz frequencies an issue, or are we planning to handle this another way? If so, how?
5. Given the case where a sensor is polled at max frequency to lower latency, but only its latest reading is used during rAF, isn't all the extra copying and data transfer wasteful (and thus also costly in terms of battery, etc.)?
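To make question 5 concrete, here's a minimal sketch of the "latest reading only" pattern the previous architecture allowed. Everything here is illustrative, not from the spec: `onPolledReading` stands in for the platform's high-frequency polling callback, and `onAnimationFrame` for the rAF-timed consumer. The point is that the poller overwrites a single slot, so no per-reading events are dispatched and intermediate readings are never copied to the application layer.

```javascript
// Sketch only. `onPolledReading` and `onAnimationFrame` are hypothetical
// stand-ins, not spec APIs.

// A one-slot buffer: the poller overwrites it, the consumer samples it.
const latest = { timestamp: 0, x: 0, y: 0, z: 0 };

function onPolledReading(reading) {
  // Called at polling frequency (e.g. 1000 Hz). No event is dispatched;
  // we only overwrite the slot, so unread older values are dropped.
  latest.timestamp = reading.timestamp;
  latest.x = reading.x;
  latest.y = reading.y;
  latest.z = reading.z;
}

function onAnimationFrame() {
  // Called at the display refresh rate (e.g. 60 Hz). Only the most
  // recent reading is used; intermediate readings were never transferred.
  return { ...latest };
}

// Simulate: 1000 polled readings, then one animation frame.
for (let i = 1; i <= 1000; i++) {
  onPolledReading({ timestamp: i, x: i, y: 2 * i, z: 3 * i });
}
const frameReading = onAnimationFrame();
```

Firing a "reading" event per poll instead would dispatch 1000 events per frame in this scenario, only one of which the rendering code can actually use.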

> and allows polling at higher frequencies in comparison to animation frame coupling (because we're not limited by the refresh rate of the rendering pipeline).

Sensor polling is actually totally decoupled from the rendering pipeline either way, so I'm not sure I understand this. What's at stake here is how and when the polled readings are transferred to the application layer.

> I consider the consensus position of the Chrome engineers and Generic Sensor API implementers in Chrome a strong case in favour of decoupling the processing model from animation frames.

I will too, once I hear clear answers to the questions above. Depending on those answers, I might also suggest investigating making rAF coupling opt-in (as initially suggested in #4).
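For the sake of discussion, an opt-in design might look something like the sketch below. The `syncToAnimationFrame` option and the wrapper class are entirely hypothetical (neither appears in the spec or in #4 as written), and the frame tick is triggered manually so the sketch is self-contained: in coupled mode, readings are held and at most one (the latest) is delivered per frame; otherwise, events fire at polling speed.

```javascript
// Hypothetical sketch: `syncToAnimationFrame` and this wrapper are not
// spec APIs. `frame()` stands in for a requestAnimationFrame callback.
class SensorWrapper {
  constructor({ syncToAnimationFrame = false } = {}) {
    this.syncToAnimationFrame = syncToAnimationFrame;
    this.pending = null;     // latest undelivered reading in coupled mode
    this.listeners = [];
  }

  onreading(fn) { this.listeners.push(fn); }

  // Called by the platform at polling frequency.
  deliver(reading) {
    if (this.syncToAnimationFrame) {
      this.pending = reading;  // hold until the next frame; overwrite older
    } else {
      this.listeners.forEach(fn => fn(reading));  // fire per reading
    }
  }

  // Stand-in for the rAF callback: flush at most one reading per frame.
  frame() {
    if (this.pending !== null) {
      const reading = this.pending;
      this.pending = null;
      this.listeners.forEach(fn => fn(reading));
    }
  }
}

// Coupled mode: 10 polled readings, one frame, one event with the latest.
const coupled = new SensorWrapper({ syncToAnimationFrame: true });
const seen = [];
coupled.onreading(r => seen.push(r));
for (let i = 1; i <= 10; i++) coupled.deliver({ t: i });
coupled.frame();
```

This shape would let use cases without visible UI changes keep per-reading events, while rendering-driven consumers opt into frame-aligned delivery.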

-- 
GitHub Notification of comment by tobie
Please view or discuss this issue at https://github.com/w3c/sensors/issues/198#issuecomment-303401361 using your GitHub account

Received on Tuesday, 23 May 2017 13:41:40 UTC