Re: data streams

> On 3 Jun 2015, at 09:04, Magnus Olsson <magnus.olsson@ericsson.com> wrote:
> 
> Streams sounds like a very useful thing to support :-)
> 
> - For the multiple sinks, why not use a pub-sub broker as the identified source of the stream? That way any number of sinks can attach to that pub-source. It seems to me that setting a proxy output value to point to a single output is a limitation that might result in someone creating an upstream proxy to split to multiple sinks.

This is a question of the underlying API and data structures. For files, for example, an open file handle can only be read by a single client. If you want several clients to read the stream, then they each need to open it individually so that they have their own stream handles/stream objects.
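To make that concrete, here is a minimal sketch of what "each client opens its own handle" could look like. The names (StreamSource, StreamHandle) are illustrative assumptions, not the actual Web of Things Framework API; the point is simply that each handle keeps its own read position.

```typescript
// Minimal sketch (hypothetical API): each consumer opens its own handle,
// so read positions are independent rather than shared between clients.
interface StreamHandle<T> {
  read(): Promise<T>;   // next item for this handle only
  close(): void;
}

class StreamSource<T> {
  private buffer: T[] = [];
  private handles = new Set<{ pos: number }>();

  // producer side: append a new item to the stream
  push(item: T): void {
    this.buffer.push(item);
  }

  // each client calls open() to get its own independent handle
  open(): StreamHandle<T> {
    const state = { pos: 0 };
    this.handles.add(state);
    return {
      read: async () => {
        // naive polling; a real implementation would await new data
        while (state.pos >= this.buffer.length) {
          await new Promise(resolve => setTimeout(resolve, 10));
        }
        return this.buffer[state.pos++];
      },
      close: () => { this.handles.delete(state); }
    };
  }
}
```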

I completely agree that pub-sub protocols provide a good solution to streaming to multiple clients, which is why I advocate support for bindings to MQTT and XMPP (as examples of such protocols), but this is at a different level of abstraction.  We need to distinguish between the data model exposed to scripts and the bindings to the transport protocols, as it is important to decouple these to simplify scripting amongst other reasons.
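As a rough illustration of that decoupling, the sketch below separates the script-facing data model from a swappable protocol binding. These interface names are assumptions for the purposes of the example; MQTT or XMPP would sit behind the binding, out of sight of the script.

```typescript
// Illustrative sketch only: the data model exposed to scripts (a thing
// with properties) is kept separate from the transport binding that
// carries updates over MQTT, XMPP, WebSockets, etc.
interface ThingModel {
  setProperty(name: string, value: unknown): void;
  onPropertyChange(listener: (name: string, value: unknown) => void): void;
}

interface ProtocolBinding {
  // pushes a property update out over some protocol, e.g. an MQTT topic
  publish(property: string, value: unknown): void;
}

// Wiring: scripts only touch ThingModel; the binding can be swapped
// without changing the script.
function bind(thing: ThingModel, binding: ProtocolBinding): void {
  thing.onPropertyChange((name, value) => binding.publish(name, value));
}
```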

> Any thing should be able to redirect any aspect of its source (defined by its meta description) to a nearby pub-sub proxy service. Still, that could be an "on device" (localhost) broker, a home broker, a service provider broker or a public broker, etc.

The Web of Things Framework allows you to have multiple proxies for the same thing, and you can define new things that transform streams etc.

> It seems reasonable to me to have use cases such as one part of the stream for the snapshot, another for the live view, a third for data analysis, a fourth for remote telemedicine, a fifth for an emergency monitoring function, etc.
> 
> - Even discrete-value sensors could be handled as a stream, if only observed over a long enough time

To keep simple use cases simple to script, we need a data model where values are simple object properties, e.g. a floating point number corresponding to a temperature reading. This will change over time as the corresponding physical value changes.  This is applicable where you are only interested in the current reading.
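A minimal sketch of that "simple property" view, with hypothetical names: the current temperature is just a number on an object, overwritten as new readings arrive, and scripts that only want the current value just read the property.

```typescript
// Sketch of the simple-property data model (illustrative names):
// the current reading is a plain number on a thing object.
interface TemperatureSensor {
  temperature: number;   // current reading in degrees Celsius
}

const sensor: TemperatureSensor = { temperature: 20.5 };

// A driver layer overwrites the property as the physical value changes:
function onNewReading(value: number): void {
  sensor.temperature = value;
}

onNewReading(21.0);
console.log(`It is currently ${sensor.temperature} °C`);
```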

Note that you can define a data model where an event is generated to signal a change to a value. This raises the question of what constitutes a significant change, but that is something for applications to determine. Such events enable applications to implement their own history buffers.
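The following sketch shows one way an application could do this, under the assumption of a simple observer API: the application picks its own significance threshold (0.5 °C here) and keeps its own history buffer of timestamped values.

```typescript
// Sketch, assuming a simple observer API: the application decides what
// counts as a significant change and maintains its own history buffer.
type Listener = (value: number, time: Date) => void;

class ObservedValue {
  private listeners: Listener[] = [];
  private last: number | undefined;

  constructor(private threshold: number) {}

  // notify listeners only when the change exceeds the threshold
  update(value: number): void {
    if (this.last === undefined || Math.abs(value - this.last) >= this.threshold) {
      this.last = value;
      const now = new Date();
      this.listeners.forEach(listener => listener(value, now));
    }
  }

  onChange(listener: Listener): void {
    this.listeners.push(listener);
  }
}

// Application-maintained history buffer of significant changes
const history: { time: Date; value: number }[] = [];
const temperature = new ObservedValue(0.5);
temperature.onChange((value, time) => history.push({ time, value }));

temperature.update(20.1);   // recorded (first value)
temperature.update(20.3);   // ignored: below the 0.5 °C threshold
temperature.update(21.0);   // recorded
```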

A data logging application has different requirements, in that you want to record the history of changes along with the times that they occurred. This involves a different data model where you can ask for the value at a specific time or for the sequence of changes in a given time interval. This is a common enough pattern that it deserves to be supported by the server platform rather than requiring apps to implement logging and querying themselves.
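As a rough sketch of what such platform support could offer (names are illustrative, not a proposed API): the platform records timestamped changes and answers point-in-time and interval queries so apps don't have to.

```typescript
// Sketch of a server-side time-series log supporting the two queries
// mentioned above: value at a given time, and changes within an interval.
interface Sample { time: number; value: number }   // time in ms since epoch

class TimeSeriesLog {
  private samples: Sample[] = [];   // kept in ascending time order

  record(value: number, time: number = Date.now()): void {
    this.samples.push({ time, value });
  }

  // value in effect at a given time: the most recent sample at or before it
  valueAt(time: number): number | undefined {
    let result: number | undefined;
    for (const sample of this.samples) {
      if (sample.time > time) break;
      result = sample.value;
    }
    return result;
  }

  // all recorded changes within the interval [from, to]
  changesBetween(from: number, to: number): Sample[] {
    return this.samples.filter(s => s.time >= from && s.time <= to);
  }
}
```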


—
   Dave Raggett <dsr@w3.org>

Received on Wednesday, 3 June 2015 09:25:22 UTC