- From: Tobie Langel <tobie@sensors.codespeaks.com>
- Date: Tue, 06 Dec 2016 14:14:29 +0100
- To: public-device-apis@w3.org
Hi Maksims,

Thanks a lot for your email. See my comments inline.

On Thu, Nov 24, 2016, at 19:11, Maksims Mihejevs wrote:

> I've been watching the development process and the discussions around
> the Sensor API for some time now, and I have a few concerns.
>
> Coming from the WebGL/WebVR and real-time applications area: many users
> in that area need to access and communicate with sensors in a fast and
> efficient way. Where applications target 60fps, and even higher in the
> future with VR requirements, it is very important that these APIs are
> real-time friendly.

Low latency has been one of our core concerns from the get-go, and VR has
been the main use case we've used to drive requirements around
performance.

> *Garbage collection* - one of the major bottlenecks and a high
> consideration for real-time applications. If an application targets
> 60+ fps, it has to avoid any allocations, including calls to native JS
> methods that allocate response objects (getBoundingClientRect, for
> example). Heavy GC churn leads to periodic JS thread stalls that can
> drop many frames, making 60fps unreachable.

That's a very good point, and one we hadn't considered. It's great that
you bring it up before the design is finalized. I've filed an issue for
it here: https://github.com/w3c/sensors/issues/153

> *State / async* - in real-time development, async code does not fit the
> fixed update loop paradigm. Where an application targets a fixed FPS
> (30/60), it needs access to the data inside that update method, which
> most of the time is a requestAnimationFrame callback. For example, when
> mouse interaction is required, the old design of the mouse API forces
> developers to subscribe to mouse events and store the coordinates in an
> accessible scope so that the update method can read them.
> This creates a big mess when many things need to be subscribed to:
> orientation, VRDisplay polling, window resizing/orientation, keyboard
> states, motion, touch, gamepads, and many, many others. APIs that
> simply let you access the latest state of whatever sensor or data
> source it is are way, way easier to use, and don't require much
> thinking about events, callbacks, etc.

I think we got that (mostly) right.

> *Promises* - as history shows, they are becoming obsolete, as ES6/7
> solves much better what promises promised to solve - callback hell
> (which they don't actually solve). More to the point: they are
> extremely bad for GC, as they allocate state data internally, pass
> objects around, and enforce function allocations, leading to a huge
> overhead of unnecessary function scopes that complicates GC. Debugging
> isn't great either.

Promises are good for one-shot methods (get a single value and that's
it). We'll only use them as such, and only once we actually design a
one-shot API.

> *So my questions:*
>
> 1. What are the guidelines these APIs are designed by?

Inspiration from existing APIs (Android, iOS, J5) + Web constraints and
API style.

> 2. Is GC a high consideration?

It hasn't been. It should be. Fixing this.

> 3. What are the highest priorities that drive the API design?

There are plenty:

- perf/low latency
- security/privacy
- devX
- API consistency
- fixing known issues in previous designs
- making it easy to spec and implement new sensors
- feature parity with native
- exposing low-level primitives as per the Extensible Web Manifesto

> 4. What are the user stories these APIs are designed against?

See: https://w3c.github.io/sensors/usecases.html (rough, not recently
updated).

> Looking forward to hearing from you guys and collaborating on making
> the design a good fit for everyone, including real-time applications.

Likewise. Thanks for your timely and super valuable input.

--tobie
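[Editor's note: the contrast Maksims draws above, subscribing to events and mirroring state in your own scope versus simply reading the latest value each frame, can be sketched with plain objects. This is a hypothetical stand-in, not the actual Sensor API, whose shape was still in flux at the time of this thread.]

```javascript
// Event style: the app subscribes and copies state into its own scope;
// each reading is a freshly allocated object (GC pressure per event).
function makeEventStyleSensor() {
  const listeners = [];
  return {
    addEventListener: (fn) => listeners.push(fn),
    emit: (x, y, z) => listeners.forEach((fn) => fn({ x, y, z })),
  };
}

const evSensor = makeEventStyleSensor();
let latest = { x: 0, y: 0, z: 0 }; // app-side mirror of the sensor state
evSensor.addEventListener((reading) => { latest = reading; });
evSensor.emit(1, 2, 3); // allocates one reading object per event

// Polling style: the sensor exposes its latest reading as mutable
// fields, so a fixed-rate loop can read it with zero allocations.
function makePollingStyleSensor() {
  return {
    x: 0, y: 0, z: 0,
    update(x, y, z) { this.x = x; this.y = y; this.z = z; },
  };
}

const sensor = makePollingStyleSensor();
sensor.update(1, 2, 3); // driver-side write, no new objects

// In a real app this would run inside a requestAnimationFrame callback.
function frame() {
  return sensor.x + sensor.y + sensor.z; // read the latest state in place
}

console.log(frame()); // 6
```

The polling style keeps the per-frame hot path free of callbacks and allocations, which is the property the thread is asking the Sensor API to preserve.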
Received on Tuesday, 6 December 2016 13:15:17 UTC