Re: [sensors] Possibility to use "Readable Stream" as defined by the Streams API

> A few comments post factum. This API deliberately exposes high level 
sensors only. So in general it is fine to apply algorithms on the 
data.

So the spec is careful to specify what it means by low-level and 
high-level and explicitly allows exposing both.

> When it comes to pressure, usually it is fine to lose real-time data
 if the client cannot handle it, but the spec needs to be clear about 
the requirements concerning UA resources.

Agreed.

> Depending on the type of data there are flow control techniques to 
comply with the limitations of the client, much like adaptive codecs 
are doing to handle network limitations.
> In general, a consumer (client) should be able to specify a desired 
maximum sampling rate, a buffer size, and a policy. We want the UA to 
take care of the rest, and just provide a notification mechanism for 
available data.

Here again, agreed. I'm unsure how to let the consumer of the API 
specify a policy at present, so I'm considering just defaulting to 
dropping the oldest reading once the buffer is full. 
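That default could be sketched roughly as follows. This is purely illustrative; `ReadingBuffer` and its methods are hypothetical names, not anything from the spec:

```typescript
// Hypothetical "drop oldest" buffering policy sketch (not spec API).
class ReadingBuffer<T> {
  private readings: T[] = [];
  constructor(private capacity: number) {}

  push(reading: T): void {
    if (this.readings.length === this.capacity) {
      this.readings.shift(); // buffer full: drop the oldest reading
    }
    this.readings.push(reading);
  }

  // Hand all buffered readings to the consumer and reset the buffer.
  drain(): T[] {
    const out = this.readings;
    this.readings = [];
    return out;
  }
}

const buf = new ReadingBuffer<number>(3);
[1, 2, 3, 4, 5].forEach((r) => buf.push(r));
console.log(buf.drain()); // [3, 4, 5] — readings 1 and 2 were dropped
```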

> The preferred policy tells what set of means is available for the 
sender to control. 
> For instance with EKG sampling policy, you are fine with losing 
resolution, if the range (maximum and minimum value samples) is 
preserved.

Do we have use cases for such policies? Examples in other platforms?

> In other cases, any samples can be removed, even full chunks, as it 
is more important to remain in sync than to receive all real-time 
data.

Yeah. Generally this seems to be the case for the use cases explored 
so far. 

> For buffering, one policy could be to just slide the buffer forward 
in time by one chunk. Another is to add a mechanism to preserve the 
first sample and drop more and more samples in the given window, 
making resolution less and less, until a given threshold when the 
buffer slides. If not supported by HW, this can be implemented with 
multiple buffers interleaved on reads, and dropping them sequentially 
(until slide).

Given sensors are mostly polled, this shouldn't be an issue (just poll 
less frequently). Push sensors are smarter and generally push on 
change beyond a consumer-settable threshold, so here again this 
shouldn't be an issue. 
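For reference, the resolution-dropping policy described above amounts to decimating the window in place instead of sliding it. A minimal sketch, under the assumption that halving resolution means keeping every other sample starting from the first:

```typescript
// Illustrative decimation step: when the window fills, keep the first
// sample and every other one after it, halving resolution so the window
// can absorb more data before it has to slide.
function decimate<T>(window: T[]): T[] {
  return window.filter((_, i) => i % 2 === 0);
}

console.log(decimate([0, 1, 2, 3, 4, 5])); // [0, 2, 4]
```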

> When it comes to clients expressing preferences, the server should 
reply with the ones it could meet (the usual negotiation).

Yes. And this can also vary during the lifecycle of the sensor. 
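The negotiation could look something like this sketch, where the UA clamps a requested sampling rate to the nearest rate it can actually deliver. `SUPPORTED_RATES` and `negotiateFrequency` are assumed names for illustration only:

```typescript
// Hypothetical frequency negotiation: the client asks for a rate, the UA
// replies with the closest rate the underlying hardware supports.
const SUPPORTED_RATES = [10, 30, 60]; // Hz — assumed hardware capabilities

function negotiateFrequency(requested: number): number {
  return SUPPORTED_RATES.reduce((best, rate) =>
    Math.abs(rate - requested) < Math.abs(best - requested) ? rate : best
  );
}

console.log(negotiateFrequency(50)); // 60
```

Since hardware capabilities can change over the sensor's lifecycle (as noted above), the negotiated value would need to be re-derivable, not fixed at construction time.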

> To me it seems the current spec addresses the frequency and 
(indirectly) the buffer issues, and could add sampling and buffering 
policies support later.

Yeah, the buffering needs to be clarified. The sampling policy still 
needs to account for cases where the sampling is faster than what the 
consumer is able to cope with (in order to avoid being wasteful with 
resources). A default buffering policy should be specified (overwrite 
oldest data), and made configurable later on if needed. 

> Since all the above are meant to relax the pressure of streams, I 
agree with Tobie's conclusion to not jump to streams yet.

Good. :)

-- 
GitHub Notification of comment by tobie
See https://github.com/w3c/sensors/issues/70#issuecomment-156090638

Received on Thursday, 12 November 2015 12:22:33 UTC