- From: Jonas Sicking <jonas@sicking.cc>
- Date: Tue, 6 Mar 2012 03:46:47 -0800
- To: Doug Turner <doug.turner@gmail.com>
- Cc: public-device-apis@w3.org
On Fri, Mar 2, 2012 at 11:12 AM, Doug Turner <doug.turner@gmail.com> wrote:
> Hi Jonas, Tran,
>
> I am concerned that there is overlap between the Sensor API and the DeviceOrientation spec:
> http://dev.w3.org/geo/api/spec-source-orientation.html
>
> I worry that web developers will not know which API to use. Tran, have you looked at our draft specification? I am wondering how we can make these two efforts result in something that is consistent and pleasing to developers.

There are two problems that I see with the current DeviceOrientation drafts which the Sensor API solves, though only one is major.

My understanding was that there was benefit in being able to control how often the sensor is queried, in order to reduce battery consumption. For something like a game you might not care much about battery consumption, but you do want very exact movement detection. For something like a star-map application, on the other hand, you might be fine with a couple of readings per second and be much more concerned about battery consumption.

This could be fixed in the DeviceOrientation spec by introducing some global attribute which lets the page control how often the sensor is checked. However, that means that if there are several independent subsystems running on the page, they'll fight over control of that one global property. The only way to really solve this problem is to have an API similar to the Sensor API, i.e. a separately instantiated object on which the events are fired and which gives you control over the timing of when those events fire.

The need to control how often a sensor is checked is something that I fairly recently learned about, so I don't yet fully understand all the aspects of it. If it's not correct at all, i.e. if it's more or less free to get data at maximum speed as soon as the sensor is turned on, then that might indeed change things.

The second is really a much simpler issue.
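To make the first problem concrete, here is a rough sketch of such a separately instantiated sensor object (the names are purely illustrative; neither draft defines this exact interface). The idea is that the UA runs the hardware at the fastest rate any live consumer requested, so independent subsystems on the same page never fight over a single global setting:

```javascript
// Illustrative sketch only -- these names are not from either spec.
// Each consumer creates its own sensor object with its own rate; the
// UA polls the hardware at the fastest rate any live consumer wants.
class OrientationSensorSketch {
  static instances = new Set();

  constructor({ intervalMs, onreading }) {
    this.intervalMs = intervalMs; // how often this consumer wants data
    this.onreading = onreading;   // called with each cached reading
    OrientationSensorSketch.instances.add(this);
  }

  stop() {
    OrientationSensorSketch.instances.delete(this);
  }

  // The rate the UA would actually run the hardware at: the minimum
  // (i.e. fastest) interval across all live consumers.
  static hardwareIntervalMs() {
    let fastest = Infinity;
    for (const s of OrientationSensorSketch.instances) {
      fastest = Math.min(fastest, s.intervalMs);
    }
    return fastest;
  }
}

// A game wants ~60 Hz; a star map is happy with 2 Hz.
const game = new OrientationSensorSketch({ intervalMs: 16, onreading: () => {} });
const starMap = new OrientationSensorSketch({ intervalMs: 500, onreading: () => {} });
console.log(OrientationSensorSketch.hardwareIntervalMs()); // 16

// When the game stops its sensor, the UA can drop back to the
// star map's slower, battery-friendlier rate.
game.stop();
console.log(OrientationSensorSketch.hardwareIntervalMs()); // 500
```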
I've gotten feedback, in particular from game developers, that event-based APIs are generally the wrong model for input data. What games generally want is to check "what's the state of the input controls right now?" rather than be told "the controls just changed". That makes it much easier to integrate the input state into the normal game loop. What developers end up doing is writing a trivial event handler for input-state changes which simply sets a variable somewhere, and that variable is then checked in the normal game loop. This isn't a big deal at all, but it would be nice to avoid making every developer jump through this hoop.

When designing the Joystick API we took this into account, so the current API makes the current state available as properties and then additionally fires an event when the state changes. This would be easy to fix in the DeviceOrientation API by simply exposing the current orientation/acceleration as a property somewhere. (Note that reading the property shouldn't synchronously poll the sensor; it would simply return the cached result of the most recently fired event.)

> Answer to questions Jonas raised:
>
>> I'm having trouble understanding the difference between the OrientationData and AccelerationData sensors.
>
> They are measuring different things. Orientation has to do with rotation about an axis measured by a gyro. Acceleration will tell you change in speed measured by an accelerometer.

That's the Gyroscope sensor. The OrientationData sensor provides the device's current orientation in relation to the direction of gravity.

>> Could someone explain the meaning of the alpha, beta and gamma values?
>
> The DeviceOrientation spec goes into great detail about what these values are.

If the Sensor API intends to use the same meaning as what's defined in the DeviceOrientation spec, it seems like the alpha component will always be 0.

/ Jonas
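The caching workaround game developers use today, as described above, looks roughly like this sketch. The alpha/beta/gamma fields follow the DeviceOrientation spec; the direct handler calls merely stand in for the browser firing real `deviceorientation` events, to keep the sketch self-contained:

```javascript
// What developers end up writing: the event handler only caches the
// latest reading, and the game loop polls the cached state each frame.
const latestOrientation = { alpha: 0, beta: 0, gamma: 0 };

function onDeviceOrientation(event) {
  latestOrientation.alpha = event.alpha;
  latestOrientation.beta = event.beta;
  latestOrientation.gamma = event.gamma;
}

// In a real page this would be wired up as:
// window.addEventListener("deviceorientation", onDeviceOrientation);

// Two readings arrive between frames...
onDeviceOrientation({ alpha: 30, beta: 10, gamma: -5 });
onDeviceOrientation({ alpha: 31, beta: 12, gamma: -4 });

// ...and the game loop only ever reads the most recent state.
function gameTick() {
  return { ...latestOrientation };
}
console.log(gameTick()); // { alpha: 31, beta: 12, gamma: -4 }
```

Exposing the cached value directly as a property on the platform object, as suggested above, would make this boilerplate unnecessary.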
Received on Tuesday, 6 March 2012 11:47:50 UTC