Re: [sensors] Should the API allow setting both samplingFrequency and reportingFrequency?

When registering a listener, the Android API allows defining two values (sketched briefly below):

- samplingPeriodUs: the requested period between each sample taken and stored on the hardware
- maxReportLatencyUs: the maximum time samples may be batched on the hardware before they are delivered to the application as sensor change events.
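
For concreteness, here is a minimal Kotlin sketch of how those two parameters are passed on Android; the sensor choice, listener body, and numeric values are illustrative only, and both parameters are requests rather than guarantees:

```kotlin
import android.content.Context
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager

fun registerAccelerometer(context: Context) {
    val sensorManager = context.getSystemService(Context.SENSOR_SERVICE) as SensorManager
    val accelerometer = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER) ?: return

    val listener = object : SensorEventListener {
        override fun onSensorChanged(event: SensorEvent) {
            // event.values holds the sample; batched samples arrive in bursts
        }
        override fun onAccuracyChanged(sensor: Sensor, accuracy: Int) = Unit
    }

    // samplingPeriodUs is a hint for how often the hardware should sample;
    // maxReportLatencyUs is the longest the HAL may batch samples before
    // delivering them to the listener.
    sensorManager.registerListener(
        listener,
        accelerometer,
        20_000,   // samplingPeriodUs: ask for roughly 50 Hz sampling
        200_000   // maxReportLatencyUs: deliver batches at least every 200 ms
    )
}
```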

Because the Android API and HAL can be fine-tuned against a minimum viable hardware expectation across all supported devices, they can afford to expose such optional properties to application code. Similarly, iOS puts limits on sensor interaction, which we can safely assume are based on its own hardware expectations. Web applications, which are expected to run in any browser across Android, iOS, desktops, laptops, and whatever else is out there, have no such affordances. An application that sets a `sampleFrequency` can never know whether the device it's running on is actually capable of that `sampleFrequency`, which means no algorithm can be written that relies on `sampleFrequency`. `reportingFrequency` is different: we can demand that whatever value is available be reported at whatever frequency our application demands.
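
To make that distinction concrete, here is a small, purely hypothetical Kotlin sketch (not part of any spec or implementation) of why a reporting frequency can always be honored: the most recent available sample is simply re-emitted on a fixed timer, regardless of how often the underlying hardware actually samples.

```kotlin
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit
import java.util.concurrent.atomic.AtomicReference

// Hypothetical reporter that decouples reporting frequency from sampling frequency:
// whatever rate the hardware manages to sample at, the consumer is called back at
// the requested reporting rate with the newest value available.
class FixedRateReporter<T : Any>(
    reportingFrequencyHz: Double,
    private val onReport: (T) -> Unit
) {
    private val latest = AtomicReference<T?>(null)
    private val scheduler = Executors.newSingleThreadScheduledExecutor()
    private val periodMicros = (1_000_000 / reportingFrequencyHz).toLong()

    // Called whenever the platform delivers a new sample, at whatever rate it can manage.
    fun onSample(value: T) = latest.set(value)

    fun start() {
        scheduler.scheduleAtFixedRate({
            // Report the newest value we have; if the hardware samples more slowly
            // than the reporting rate, the same sample is simply reported again.
            latest.get()?.let(onReport)
        }, 0, periodMicros, TimeUnit.MICROSECONDS)
    }

    fun stop() = scheduler.shutdown()
}
```

Nothing analogous is possible for `sampleFrequency`: if the hardware cannot sample that fast, no amount of scheduling on top of it can manufacture the missing samples.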

This strikes me as one of many instances where the goals of this specification are misunderstood. I understand that there are groups implementing Generic Sensor and its derivatives in non-browser JS runtimes. I also understand those implementations might benefit from `sampleFrequency`, but they should simply be extended to support the additional needs they encounter. The goals of this specification and the charter of this device group are to create sensor APIs for web applications, which means it must find the _intersection_ of capabilities and expose it meaningfully.

-- 
GitHub Notification of comment by rwaldron
Please view or discuss this issue at https://github.com/w3c/sensors/issues/209#issuecomment-305598188 using your GitHub account

Received on Thursday, 1 June 2017 19:37:33 UTC