Generic Sensor API

W3C First Public Working Draft, 6 October 2015

This version:
http://www.w3.org/TR/2015/WD-sensors-1-20151006/
Latest version:
http://www.w3.org/TR/sensors/
Editor's Draft:
https://w3c.github.io/sensors/
Version History:
https://github.com/w3c/sensors/commits/gh-pages/index.bs
Feedback:
public-device-apis@w3.org with subject line “[sensors] … message topic …” (archives)
Issue Tracking:
GitHub
Editors:
Tobie Langel (Intel Corporation)
Rick Waldron (jQuery Foundation)
Bug Reports:
via the w3c/sensors repository on GitHub

Abstract

This specification defines a framework for exposing sensor data to the Open Web Platform in a consistent way. It does so by defining a blueprint for writing specifications of concrete sensors along with an abstract Sensor interface that can be extended to accommodate different sensor types.

Status of this document

This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.

This document was published by the Device APIs Working Group as a Working Draft. This document is intended to become a W3C Recommendation.

If you wish to make comments regarding this document, please send them to public-device-apis@w3.org (subscribe, archives). When sending e-mail, please put the text “sensors” in the subject, preferably like this: “[sensors] …summary of comment…”. All comments are welcome.

This document is a First Public Working Draft.

Publication as a First Public Working Draft does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.

This document was produced by a group operating under the 5 February 2004 W3C Patent Policy. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.

This document is governed by the 1 September 2015 W3C Process Document.

Table of Contents

1. Introduction

Increasingly, sensor data is used in application development to enable new use cases such as geolocation, counting steps, or head-tracking. This is especially true on mobile devices, where new sensors are added regularly. It is also increasingly common in networked objects which are part of the Internet of Things.

Exposing sensor data to the Web has so far been both slow-paced and ad-hoc. Few sensors are already exposed to the Web. When they are, it is often in ways that limit their possible use cases (for example by exposing abstractions that are too high-level and which don’t perform well enough). APIs also vary greatly from one sensor to the next, which increases the cognitive burden of Web application developers and slows development.

The goal of the Generic Sensor API is to promote consistency across sensor APIs, enable advanced use cases thanks to performant low-level APIs, and increase the pace at which new sensors can be exposed to the Web by simplifying the specification and implementation processes.

2. Terminology

Sensors measure different physical quantities and provide corresponding output data which is a source of information about the user and their environment.

Known, predictable discrepancies between sensor output data and the corresponding physical quantities being measured are corrected through calibration.

Known but unpredictable discrepancies need to be addressed dynamically through a process called sensor fusion.

Different sensor types measure different physical quantities such as temperature, air pressure, heart-rate, or luminosity.

For the purpose of this specification we distinguish between high-level and low-level sensor types.

Sensor types which are characterized by their implementation are referred to as low-level sensors. For example, a Gyroscope is a low-level sensor type.

Sensors named after their output data, regardless of the implementation, are said to be high-level sensors. For instance, geolocation sensors provide information about the user’s location, but the precise means by which this data is obtained is purposefully left opaque (it could come from a GPS chip, network cell triangulation, wifi networks, etc. or any combination of the above) and depends on various, implementation-specific heuristics. High-level sensors are generally the fruits of applying algorithms to low-level sensors—for example, a pedometer can be built using only the output of a gyroscope—or of sensor fusion.

That said, the distinction between high-level and low-level sensor types is somewhat arbitrary and the line between the two is often blurred. For instance, a barometer, which measures air pressure, would be considered low-level for most common purposes: even though it is the product of the sensor fusion of resistive piezo-electric pressure and temperature sensors, exposing the sensors that compose it would serve no practical purpose (who cares about the temperature of a piezo-electric sensor?). A pressure-altimeter would probably fall in the same category, while a non-descript altimeter—which could get its data from either a barometer or a GPS signal—would clearly be categorized as a high-level sensor type.

Because the distinction is somewhat blurry, extensions to this specification (see §7 Extensibility) are encouraged to provide domain-specific definitions of high-level and low-level sensors for the given sensor types they are targeting.

The output data of sensors can be combined with the output of other sensors through a process called sensor fusion. This process provides higher-level or more accurate data (often at the cost of increased latency). For example, the output of a three-axis magnetometer needs to be combined with the output of an accelerometer to provide a correct bearing. Sensor fusion can be carried out at either the hardware or software level.
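For illustration, here is a minimal sketch of software-level sensor fusion computing a tilt-compensated compass heading. The Magnetometer and Accelerometer interfaces, and the x, y, and z attributes of their readings, are hypothetical, and sign conventions depend on the device coordinate system:

// Hypothetical sketch: Magnetometer and Accelerometer are not defined
// by this specification; their readings are assumed to expose x, y, z.
let magnetometer = new Magnetometer();
let accelerometer = new Accelerometer();

magnetometer.onchange = function() {
    let m = magnetometer.value;
    let a = accelerometer.value;
    if (!m || !a) return;
    // Estimate pitch and roll from gravity, then rotate the magnetic
    // field vector into the horizontal plane before deriving a heading.
    let roll = Math.atan2(a.y, a.z);
    let pitch = Math.atan2(-a.x, Math.hypot(a.y, a.z));
    let mx = m.x * Math.cos(pitch) + m.z * Math.sin(pitch);
    let my = m.x * Math.sin(roll) * Math.sin(pitch) +
             m.y * Math.cos(roll) -
             m.z * Math.sin(roll) * Math.cos(pitch);
    let heading = Math.atan2(-my, mx) * 180 / Math.PI;
    updateCompass((heading + 360) % 360);
};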

Note: sensors created through sensor fusion are sometimes called virtual or synthetic sensors. However, this specification doesn’t make any practical difference between them, preferring instead to differentiate sensors by whether they describe the kind of output data produced (high-level sensors) or how the sensor is implemented (low-level sensors).

TODO: add a section about reading from a sensor and how this is exposed as an asynchronous operation.

3. A Note on Feature Detection of Hardware Features

This section is non-normative.

Feature detection is an established Web development best practice. Resources on the topic are plentiful on and offline and the purpose of this section is not to discuss it further, but rather to put it in the context of detecting hardware-dependent features.

Consider the feature detection examples below:

if (typeof Gyroscope === "function") {
    // run in circles...
}
        
if ("PromimitySensor" in window) {
    // watch out!
}
        
if (window.AmbientLightSensor) {
    // go dark...
}
        
// etc.

All of these tell you something about the presence and possible characteristics of an API. They do not tell you anything, however, about whether that API is actually connected to a real hardware sensor, whether that sensor works, if it’s still connected, or even whether the user is going to allow you to access it. Note that you can check the latter using the Permissions API [permissions].
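For instance, the permission status can be queried asynchronously before any hardware is touched. A minimal sketch, assuming the "geolocation" permission name defined by [permissions]:

navigator.permissions.query({ name: "geolocation" }).then(status => {
    if (status.state === "granted") {
        // Instantiating the sensor will not prompt the user.
    } else if (status.state === "prompt") {
        // Instantiating the sensor may trigger a permission prompt.
    } else { // "denied"
        gracefullyDegrade();
    }
});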

In an ideal world, information about the underlying status would be available upfront. The problem with this is twofold. First, getting this information out of the hardware is costly, in both performance and battery time, and would sit in the critical path. Second, the status of the underlying hardware can evolve over time: the user can revoke permission, the connection to the sensor may be severed, the operating system may decide to limit sensor usage below a certain battery threshold, etc.

Therefore, an effective strategy is to combine feature detection, which checks whether an API for the sought-after sensor actually exists, with defensive programming, which includes:

  1. checking for error thrown when instantiating a Sensor object,

  2. listening to errors emitted by it,

  3. setting an appropriate timeout for your particular use case,

  4. handling all of the above gracefully so that the user’s experience is enhanced by the possible usage of a sensor, not degraded by its absence.

if (typeof GeolocationSensor === "function") {
    try {
        let sensor = new GeolocationSensor({
            timeout: 3 * 1000 // 3 seconds
        });
        sensor.onerror = error => gracefullyDegrade(error);
        sensor.onchange = data => updatePosition(data.coords);
    } catch(error) {
        gracefullyDegrade(error);
    }
} else {
    gracefullyDegrade();
}

4. Model

This section is non-normative.

The Generic Sensor API is designed to make the most common use cases straightforward while still enabling more complex use cases.

Most devices deployed today do not carry more than one sensor of each type. This shouldn’t come as a surprise since use cases for more than one sensor of a given type are rare and generally limited to specific sensor types such as proximity sensors.

The API therefore makes it easy to interact with the device’s default (and often unique) sensor for each type simply by instantiating the corresponding Sensor subclass.

Indeed, without specific information identifying a particular sensor of a given type, the default sensor is chosen.

Listening to geolocation changes:
let sensor = new GeolocationSensor({ accuracy: "high" });

sensor.onchange = function(event) {
    var coords = [event.data.latitude, event.data.longitude];
    updateMap(null, coords, event.data.accuracy);
};

sensor.onerror = function(error) {
    updateMap(error);
};

Similarly, getting a single output data sample should be a simple process, and it is:

Geolocating the user:
GeolocationSensor.requestData({ accuracy: "high" })
    .then(reading => { displayCoords(reading.coords); })
    .catch(err => console.log(err));

Note: extensions to this specification may choose not to define a default sensor when doing so wouldn’t make sense. For example, it might be difficult to agree on an obvious default sensor for proximity sensors.

In cases where multiple sensors of the same type may coexist on the same device, extension specifications will have to define ways to uniquely identify each one.

For example, checking the pressure of the left rear tire:
DirectTirePressureSensor.requestData({ position: "rear", side: "left" })
    .then(reading => { display(reading.pressure); })
    .catch(err => console.log(err));

4.1. Enumerating Sensors

Sensor discoverability becomes particularly important as the number of sensors grows. Web application developers need to find out what kinds of sensors are available on a given device and what their capabilities are.

This feature is at risk and might be moved to a subsequent version of this specification.

5. API

5.1. The Sensor Interface

A Sensor object has an associated sensor.

A Sensor object observes the changes in its associated sensor at regular intervals and reports those values by firing DOM events.

frequency is measured in hertz (Hz).

TODO: define the following concepts

5.1.1. Sensor Constructor

The Sensor() constructor must run these steps:

  1. If the incumbent settings object is not a secure context, then:

    1. throw a SecurityError.

  2. If sensorOptions.sensorId is specified, then:

    1. If there is a sensor identified by sensorOptions.sensorId, then

      1. let sensor be a new Sensor object.

      2. associate sensor with that sensor

    2. Otherwise, throw a TypeError.

  3. Otherwise, if identifying parameters in sensorOptions are set, then:

    1. If these identifying parameters allow a unique sensor to be identified, then:

      1. let sensor be a new Sensor object.

      2. associate sensor with that sensor

    2. Otherwise, throw a TypeError.

  4. Otherwise, if a default sensor exists for this sensor type:

    1. let sensor be a new Sensor object.

    2. associate that sensor with it.

  5. Otherwise, throw a TypeError.

  6. Set sensor’s value attribute to null.

  7. Run these substeps in parallel:

    1. If permission is not granted, queue a task to fire an event named error on sensor, and terminate these substeps.

    2. If cached SensorReadings are available,

      1. let latest_reading be the most recent of those SensorReadings.

      2. set sensor’s value attribute to latest_reading.

    3. run the read steps for sensor.

  8. Return sensor.

5.1.2. Sensor.frequency

The frequency at which the read steps are run.
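For example, requesting sensor updates at 60 Hz (a sketch; the Gyroscope interface is a hypothetical extension of this specification):

let sensor = new Gyroscope({ frequency: 60 });
sensor.ondata = () => processAngularVelocity(sensor.value);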

Issue #4 on GitHub: “Provide a way of tying sensor requests to animation frames”

Copy pasted from @domenic's original issue rwaldron/sensors#5

For games and other animation frame loop-based situations (off the top of my head, accelerometer-based scrolling comes to mind) you want one value per animation frame.

Chrome is actually doing work specifically to integrate our scheduling of input events and animation frames, see Blink Scheduler design doc.

Off the top of my head, frequency: "animationframe" might be good. The contract being that, if you pass that in to the constructor, then given

requestAnimationFrame(function frame() {
  console.log(mySensor.value);
  requestAnimationFrame(frame);
});

you will get an updated value every frame.

I don't think frequency: 60 is quite the same thing (but I could be wrong; will try to tag in some Blink engineers) since animation frames often vary away from 60 Hz and so you will quickly get out of sync.

Posted to blink-dev: https://groups.google.com/a/chromium.org/forum/#!topic/blink-dev/fLucJ2QH3fA unsure whether people will follow-up here or there.

Proposed resolution: Enabling developers to tie the frequency of sensor updates to animation framerate is a desirable feature. Further research is needed to see whether or not that is implementable.

Actions:

5.1.3. Sensor.batch

Returns true if batch mode was requested, false otherwise.
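A sketch of how batch mode might be combined with the values attribute (§5.1.9) to process readings accumulated between reads; the Gyroscope interface is a hypothetical extension of this specification:

let sensor = new Gyroscope({ frequency: 240, batch: true });
sensor.ondata = () => {
    // In batch mode, readings accumulated since the last event are
    // exposed through the values attribute rather than one at a time.
    for (let reading of sensor.values) {
        integrate(reading);
    }
};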

5.1.4. Sensor.info

Returns the related SensorInfo object.

5.1.5. Sensor.threshold

5.1.6. Sensor.timeout

5.1.7. Sensor.wakeup

5.1.8. Sensor.value

The value attribute must always point to the latest SensorReading, whatever the frequency, so that the value attributes of two instances of the same Sensor interface associated with the same sensor hold the same SensorReading during a single event loop turn.
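In other words (an illustrative sketch, assuming a hypothetical Gyroscope extension with a default sensor):

let a = new Gyroscope();
let b = new Gyroscope(); // both associated with the default sensor
a.ondata = () => {
    // Within a single event loop turn, both instances hold the
    // same SensorReading.
    console.assert(a.value === b.value);
};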

5.1.9. Sensor.values

Issue #13 on GitHub: “Allowing data batching when poll frequency < sensor frequency”

Even when doing realtime processing (eg. say at 60 Hz), there is a benefit to having more than one new value every time you process (eg. Oculus polls a gyroscope at 1000 Hz for head tracking). In this case, every time you process, you'll have [1000/60] ~= 16 new sensor data.

This seems like a desirable feature. How would you handle this though?

Is this (yet) an extra option you set? How does that work with sensors with multiple values, etc.?

/cc @borismus

Proposed resolutions: Some use cases have high data frequency requirements which might cause performance and/or memory constraints. Further research is needed to assess and determine how to satisfy those requirements.

Further actions:

5.1.10. Sensor.onerror

5.1.11. Sensor.ondata

5.1.12. Sensor.onchange

5.1.13. Sensor.oncalibration

5.1.14. Event handlers

The following are the event handlers (and their corresponding event handler event types) that MUST be supported as attributes by the objects implementing the Sensor interface:

event handler    event handler event type
ondata           data
onchange         change
onerror          error
oncalibration    calibration
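Because Sensor inherits from EventTarget, each of these event handler attributes has an addEventListener equivalent (a sketch; GeolocationSensor is a hypothetical extension of this specification):

let sensor = new GeolocationSensor();
// These two subscriptions to the "change" event are equivalent:
sensor.onchange = event => updatePosition(event.data.coords);
sensor.addEventListener("change", event => updatePosition(event.data.coords));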

5.2. The SensorReading Interface

A SensorReading represents the state of a sensor at a given point in time.

interface SensorReading {
  readonly attribute DOMHighResTimeStamp timeStamp;
  readonly attribute SensorInfo info;
};

5.2.1. SensorReading.timeStamp

Returns a timestamp of the time at which the read steps were carried out, expressed in milliseconds since the time origin.
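Since the timestamp shares the time origin of performance.now(), the age of a reading can be computed directly (a sketch; sensor is a previously constructed Sensor instance):

let reading = sensor.value;
if (reading) {
    // How stale is the latest reading?
    let ageInMs = performance.now() - reading.timeStamp;
    console.log(ageInMs.toFixed(1) + " ms old");
}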

5.2.2. SensorReading.info

Returns the SensorInfo object of the sensor the reading was taken from.

5.3. The Sensors Interface

This feature is at risk.

The Sensors interface represents a container for a list of SensorInfo objects. It is exposed on Window and Workers as the Window.sensors and WorkerGlobalScope.sensors attributes, respectively.

TODO: make this explicitly about local sensors and allow supporting enumeration of remote sensors through added parameters.

[Constructor, Exposed=(Window,Worker)]
interface Sensors {
  Promise<sequence<SensorInfo>> matchAll(optional MatchAllOptions options);
};

partial interface Window {
  [SameObject] readonly attribute Sensors sensors;
};

partial interface WorkerGlobalScope {
  [SameObject] readonly attribute Sensors sensors;
};

dictionary MatchAllOptions {
  DOMString type;
  boolean? remote = false;
};

5.3.1. Sensors.matchAll

Returns a promise which resolves to an array of SensorInfo objects representing all available local(?) sensors.

sensors.matchAll({ type: "proximity", position: "rear" }).then(function(sensors) {
    let sensor_info = sensors[0];
    if (!sensor_info) return;
    let sensor = new ProximitySensor({ sensorId: sensor_info.id });
    sensor.onchange = dostuff;
});

5.4. The SensorInfo Interface

This feature is at risk.

The SensorInfo interface is a lightweight object that represents an actual physical sensor. Concrete sensor implementations will need to subclass it.

[Constructor(optional DOMString id, optional SensorInit sensorInitDic), Exposed=(Window,Worker)]
interface SensorInfo {
    readonly attribute DOMString id;
    readonly attribute boolean isDefault;
};

dictionary SensorInit {
  boolean isDefault;
};

5.4.1. SensorInfo.id

Returns the id of the sensor. This is an opaque DOMString.

5.4.2. SensorInfo.isDefault

Returns true if the sensor is the default sensor of that type on the device, false otherwise.
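For example, picking out the default sensor from an enumeration (a sketch building on Sensors.matchAll, §5.3.1):

sensors.matchAll({ type: "proximity" }).then(function(infos) {
    let defaultInfo = infos.find(info => info.isDefault);
    if (!defaultInfo) return;
    let sensor = new ProximitySensor({ sensorId: defaultInfo.id });
    sensor.onchange = dostuff;
});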

6. Security and privacy considerations

TODO: gather general security and privacy risks common to all sensors.

6.1. Secure Context

Sensor data is explicitly flagged by the Secure Contexts specification [powerful-features] as a high-value target for network attackers. As such, sensor data should only be available within a secure context.

6.2. Obtaining Explicit User Permission

Issue #20 on GitHub: “Allow for async permission request when first accessing sensor data”

Make sure the API is compatible with sensors which need to ask the user's permission before being used.

Proposed resolutions:

Actions:

7. Extensibility

This section is non-normative.

Its purpose is to describe how this specification can be extended to specify APIs for different sensor types.

Extension specifications are encouraged to focus on a single sensor type, exposing both high-level and low-level APIs as appropriate.

7.1. Naming

Sensor interfaces for low-level sensors should be named after their associated sensor. So for example, the interface associated with a gyroscope should be simply named Gyroscope. Sensor interfaces for high-level sensors should be named by combining the physical quantity the sensor measures with the "Sensor" suffix. For example, a sensor measuring the distance at which an object is from it may see its associated interface called ProximitySensor.

Attributes of the SensorReading subclass that hold output data should be named after the full name of this output data. For example, the TemperatureSensorReading interface should hold the value of the sensor’s output data in a temperature attribute (and not a value or temp attribute).
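Under this convention, a hypothetical temperature sensor would be consumed like so (a sketch; TemperatureSensor is not defined by this specification):

let sensor = new TemperatureSensor();
sensor.ondata = () => {
    // The reading exposes its output data under the full name
    // "temperature", not "value" or "temp".
    display(sensor.value.temperature);
};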

7.2. Exposing High-Level vs. Low-Level Sensors

So far, specifications exposing sensors to the Web platform have focused on high-level sensors APIs. [geolocation-API] [orientation-event]

This was a reasonable approach for a number of reasons. Indeed, high-level sensors:

However, an increasing number of use cases such as virtual and augmented reality [sensor-use-cases] require low-level access to sensors, most notably for performance reasons.

Providing low-level access enables Web application developers to leverage domain-specific constraints and design more performant systems.

Following the precepts of the Extensible Web Manifesto [EXTENNNNSIBLE], extension specifications should focus primarily on exposing low-level sensor APIs, but should also expose high-level APIs when there are clear benefits in doing so.

7.3. When is Enabling Multiple Sensors of the Same Type Not the Right Choice?

TODO: provide guidance on when to:

7.4. Defining a default

TODO: provide guidance on how and when to set a default sensor.

7.5. Calibration

Output data emitted by Sensor objects should always be calibrated.
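When recalibration happens at runtime, an application can react through the calibration event (a sketch; sensor is a previously constructed Sensor instance, and reacting by resetting derived state is an assumption):

sensor.oncalibration = () => {
    // Discard state derived from pre-calibration readings and
    // resume from the next reading.
    resetDerivedState();
};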

7.6. Extending the Permissions API

Provide guidance on how to extend the Permissions API [permissions] for each sensor type.

Issue #22 on GitHub: “Simplify extension of the Permissions API for concrete sensor implementations”

Accessing sensor data has multiple privacy implications and is therefore subject to permissioning. The Permission API allows developers to check whether the permission to access a given API was already granted (or denied) by the user on a per origin basis.

Currently, exposing that status through the Permissions API requires a modification of the Permissions API spec itself. Ideally, we’d want spec editors authoring concrete sensor specs to be able to extend the Permissions API within the same document.

This needs coordination with the editors of the Permissions API.

Actions:

7.7. Example WebIDL

Here’s example WebIDL for a possible extension of this specification for proximity sensors.

[Constructor(optional ProximitySensorOptions proximitySensorOptions), Exposed=(Window,Worker)]
interface ProximitySensor : Sensor {
  readonly attribute ProximitySensorReading? value;
  readonly attribute ProximitySensorReading[]? values;
};

interface ProximitySensorReading : SensorReading {
    readonly attribute unrestricted double distance;
};

dictionary ProximitySensorOptions : SensorOptions {
    double? min = -Infinity;
    double? max = Infinity;
    ProximitySensorPosition? position;
    ProximitySensorDirection? direction;
};
    
enum ProximitySensorPosition {
    "top-left",
    "top",
    "top-right",
    "middle-left",
    "middle",
    "middle-right",
    "bottom-left",
    "bottom",
    "bottom-right"
};

enum ProximitySensorDirection {
    "front",
    "rear",
    "left",
    "right",
    "top",
    "bottom"
};

8. Acknowledgements

The following people have greatly contributed to this specification through extensive discussions on GitHub: Anssi Kostiainen, Boris Smus, Claes Nilsson, Dave Raggett, davidmarkclements, Domenic Denicola, Dominique Hazael-Massieux, fhirsch, Francesco Iovine, gmandyam, Jafar Husain, Johannes Hund, Kris Kowal, Marcos Caceres, Mats Wichmann, Matthew Podwysocki, pablochacin, Remy Sharp, Rich Tibbett, Rick Waldron, Rijubrata Bhaumik, robman, Sean T. McBeth, smaug----, and zenparsing.

We’d also like to thank Anssi Kostiainen, Erik Wilde, and Michael[tm] Smith for their editorial input.

Conformance

Document conventions

Conformance requirements are expressed with a combination of descriptive assertions and RFC 2119 terminology. The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in the normative parts of this document are to be interpreted as described in RFC 2119. However, for readability, these words do not appear in all uppercase letters in this specification.

All of the text of this specification is normative except sections explicitly marked as non-normative, examples, and notes. [RFC2119]

Examples in this specification are introduced with the words "for example" or are set apart from the normative text with class="example", like this:

This is an example of an informative example.

Because this document doesn’t itself define APIs for specific sensor types (that is the role of extensions to this specification), all examples are inevitably (wishful) fabrications. Although all of the sensors used as examples would be great candidates for building atop the Generic Sensor API, their inclusion in this document does not imply that the relevant Working Groups are planning to do so.

Informative notes begin with the word "Note" and are set apart from the normative text with class="note", like this:

Note, this is an informative note.

Conformant Algorithms

Requirements phrased in the imperative as part of algorithms (such as "strip any leading space characters" or "return false and abort these steps") are to be interpreted with the meaning of the key word ("must", "should", "may", etc) used in introducing the algorithm.

Conformance requirements phrased as algorithms or specific steps can be implemented in any manner, so long as the end result is equivalent. In particular, the algorithms defined in this specification are intended to be easy to understand and are not intended to be performant. Implementers are encouraged to optimize.

Conformance Classes

A conformant user agent must implement all the requirements listed in this specification that are applicable to user agents.

Index

Terms defined by this specification

Terms defined by reference

References

Normative References

[HTML]
Ian Hickson. HTML Standard. Living Standard. URL: https://html.spec.whatwg.org/multipage/
[WebIDL]
Cameron McCormack; Boris Zbarsky. WebIDL Level 1. 4 August 2015. WD. URL: http://www.w3.org/TR/WebIDL-1/
[DOM]
Anne van Kesteren; et al. W3C DOM4. 18 June 2015. LCWD. URL: http://www.w3.org/TR/dom/
[HR-TIME-2]
Ilya Grigorik; James Simonsen; Jatinder Mann. High Resolution Time Level 2. 18 September 2015. WD. URL: http://www.w3.org/TR/hr-time-2/
[HTML5]
Ian Hickson; et al. HTML5. 28 October 2014. REC. URL: http://www.w3.org/TR/html5/
[POWERFUL-FEATURES]
Mike West; Yan Zhu. Privileged Contexts. 24 April 2015. WD. URL: http://www.w3.org/TR/powerful-features/
[RFC2119]
S. Bradner. Key words for use in RFCs to Indicate Requirement Levels. March 1997. Best Current Practice. URL: https://tools.ietf.org/html/rfc2119

Informative References

[EXTENNNNSIBLE]
The Extensible Web Manifesto. June 10, 2013. URL: https://extensiblewebmanifesto.org/
[geolocation-API]
Andrei Popescu. Geolocation API Specification. 28 May 2015. PER. URL: http://www.w3.org/TR/geolocation-API/
[ORIENTATION-EVENT]
Stephen Block; Andrei Popescu. DeviceOrientation Event Specification. 1 December 2011. LCWD. URL: http://www.w3.org/TR/orientation-event/
[PERMISSIONS]
Mounir Lamouri; Marcos Caceres. The Permissions API. 7 April 2015. WD. URL: http://www.w3.org/TR/permissions/
[SENSOR-USE-CASES]
Tobie Langel. Generic Sensor Use Cases. October 5, 2015. ED. URL: http://w3c.github.io/sensors/usecases.html

IDL Index

[Constructor(optional SensorOptions sensorOptions), Exposed=(Window,Worker)]
interface Sensor : EventTarget {
  static Promise<SensorReading> requestData(RequestDataOptions requestDataOptions);
  attribute double frequency;
  attribute boolean batch;
  readonly attribute SensorInfo info;
  attribute ThresholdCallback? threshold;
  attribute double timeout; 
  attribute boolean wakeup; 
  readonly attribute SensorReading? value;
  readonly attribute SensorReading[]? values;
  attribute EventHandler onerror;
  attribute EventHandler ondata;
  attribute EventHandler onchange;
  attribute EventHandler oncalibration;
};

dictionary SensorOptions {
  DOMString? sensorId;
  double? frequency;
  boolean? batch = false;
  ThresholdCallback? threshold;
  double? timeout;
};

dictionary RequestDataOptions {
  DOMString? sensorId;
  double? frequency;
  boolean? batch = false;
  double? timeout;
  boolean? fromCache = false;
};

callback ThresholdCallback = boolean (SensorReading currentValue, SensorReading newValue);

interface SensorReading {
  readonly attribute DOMHighResTimeStamp timeStamp;
  readonly attribute SensorInfo info;
};

[Constructor, Exposed=(Window,Worker)]
interface Sensors {
  Promise<sequence<SensorInfo>> matchAll(optional MatchAllOptions options);
};

partial interface Window {
  [SameObject] readonly attribute Sensors sensors;
};

partial interface WorkerGlobalScope {
  [SameObject] readonly attribute Sensors sensors;
};

dictionary MatchAllOptions {
  DOMString type;
  boolean? remote = false;
};

[Constructor(optional DOMString id, optional SensorInit sensorInitDic), Exposed=(Window,Worker)]
interface SensorInfo {
    readonly attribute DOMString id;
    readonly attribute boolean isDefault;
};

dictionary SensorInit {
  boolean isDefault;
};