Re: [tf-td] WoT metadata

> On 2 Oct 2015, at 22:53, Michael Koster <michaeljohnkoster@gmail.com> wrote:
> 
> @Dave It might be good to have some focused discussion on the data model + representation topic as it relates to Thing Description. I would be interested in the alternatives you mention. How does it impact the data model as exposed to the scripting API the developer will see?



My premise is that lowering the cost of IoT devices broadens the range of business opportunities. It therefore makes sense to explore how far we can go in the direction of low end devices. Consider a simple IoT device running a minimal Web of Things server.  What does this server need to do?  I think the minimum requirement is to use the thing data model to decouple application scripts from the choice of transport protocol and messaging scheme.  There is a broad range of possibilities for application scripts, e.g. dynamic scripting languages such as Lua and MicroPython, static programming languages like Java and C++, and specialised state machine languages, such as those used for industrial control systems.
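To make this concrete, here is the kind of minimal data model I have in mind, written as a Python literal purely for illustration; the thing and its names (door, bell, is_open, unlock) are invented for this example and are not a proposed format:

    # A hypothetical minimal "thing" data model, expressed as a Python literal.
    # The application script only ever sees events, properties and actions;
    # the underlying protocol and message formats are the server's business.
    door_model = {
        "events": {
            "bell": None                    # raised when the button is pressed
        },
        "properties": {
            "is_open": {"type": "boolean"}  # current state of the door
        },
        "actions": {
            "unlock": None                  # request to unlock the door
        }
    }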

The API exposed by the server needs to provide a means to bind a data model to an implementation for the thing’s behaviour. The API enables implementations to raise events on things, update their properties and invoke their actions. If the application script updates a property value, then the server automatically deals with any messages that need to be exchanged to synchronise thing proxies. Likewise, if a proxy invokes an action, the server will call the corresponding method exposed by the thing’s implementation.
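As a rough sketch of what such an API could look like (none of the class or method names below are agreed, and the stub just prints where a real server would exchange protocol messages):

    # Hypothetical server API for binding a data model to a thing's behaviour.
    # All names here are invented for illustration, not a proposed standard.

    door_model = {                          # same invented model as above
        "events":     {"bell": None},
        "properties": {"is_open": {"type": "boolean"}},
        "actions":    {"unlock": None},
    }

    class ExposedThing:
        """Server-side handle passed to the thing's implementation."""
        def __init__(self, name, model, server):
            self.name, self.model, self._server = name, model, server
            self._properties = {}

        def raise_event(self, event, data=None):
            # the server, not the script, works out which proxies to notify
            self._server._notify_proxies(self.name, "event", event, data)

        def set_property(self, prop, value):
            self._properties[prop] = value
            self._server._notify_proxies(self.name, "property", prop, value)

    class WoTServer:
        def __init__(self):
            self._things = {}

        def expose_thing(self, name, model, implementation):
            thing = ExposedThing(name, model, self)
            self._things[name] = (thing, implementation)
            implementation.start(thing)
            return thing

        def invoke_action(self, name, action, *args):
            # e.g. called when an action invocation arrives from a proxy
            thing, impl = self._things[name]
            return getattr(impl, action)(*args)

        def _notify_proxies(self, name, kind, key, value):
            # stand-in for the protocol-specific message exchange
            print(f"sync {kind} '{key}' of '{name}' -> {value!r}")

    class DoorImplementation:
        def start(self, thing):
            self.thing = thing

        def unlock(self):
            print("driving the lock motor")   # stand-in for hardware access

        def button_pressed(self):
            self.thing.raise_event("bell")

        def door_sensor_changed(self, open_now):
            self.thing.set_property("is_open", open_now)

    server = WoTServer()
    server.expose_thing("door", door_model, DoorImplementation())
    server.invoke_action("door", "unlock")    # as if requested by a remote proxy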

To allow low end devices to consume data from other devices, or to control services exposed by other devices, we also need a server API that allows an application to set up a proxy for a thing on another device. The application passes the URI for that thing’s description to this API, and the server is responsible for retrieving the data model and choosing a protocol and messaging scheme appropriate for communicating with the server that hosts the thing being proxied.
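Again as a sketch with invented names and a placeholder URI, the consumer side might look like this; a real server would fetch the thing description from the URI and then choose a protocol based on it, whereas the stub below hard-codes both:

    import json

    # Hypothetical consumer-side API; all names and the URI are placeholders.

    class ConsumedThing:
        def __init__(self, uri):
            self.uri = uri
            # stand-in for: GET the thing description from uri and pick
            # a protocol/messaging scheme from what it advertises
            self.model = json.loads(
                '{"events": {"bell": null},'
                ' "properties": {"is_open": {"type": "boolean"}},'
                ' "actions": {"unlock": null}}')
            self._handlers = {}

        def on_event(self, event, handler):
            self._handlers.setdefault(event, []).append(handler)

        def invoke_action(self, action, *args):
            # stand-in for sending an action-invocation message
            print(f"-> invoke '{action}' on {self.uri}")

        def get_property(self, prop):
            # a real proxy would return the locally synchronised value
            return None

    def register_proxy(uri):
        return ConsumedThing(uri)

    door = register_proxy("https://example.org/things/door")  # placeholder URI
    door.on_event("bell", lambda data: print("ding dong"))
    door.invoke_action("unlock")          # executed on the remote device
    print(door.get_property("is_open"))   # kept in sync by the two servers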

A low end device may only support one protocol and messaging scheme.  I believe that we need a simple standard way to discover what those are. This of course gets us into the general discussion of discovery. For example, servers could be discovered through the data models of things that reference things on other servers, analogous to discovering an HTTP server by coming across a link to it in an HTML file. Servers could register themselves with an external (and easy to find) registry. Servers could also be discovered through the use of local network discovery protocols.

Regardless of which way a given server was discovered, once you know about it, you will have some information as to the IP address, port and protocol to contact it with. You can then ask the server for more details.  Alternatively, these details may have already been recorded with a registry. In my experiments, I have explored the use of simple JSON based formats for these details. You don’t need to explicitly state the full URI for each “thing”; in many cases a base URI is sufficient, and can be combined with a “name” supplied as part of the thing’s data model.
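As an illustration only (the field names here are invented, not a proposed format), the details recorded for a server might look like this:

    # Invented field names, purely to illustrate a base URI plus per-thing
    # names rather than a full URI for every thing.
    server_details = {
        "base": "coap://192.168.1.23:5683/things/",   # placeholder address
        "protocol": "coap",
        "things": ["door", "light", "thermostat"]
    }

    # the full URI for a thing is then simply base + name
    door_uri = server_details["base"] + "door"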

For a low end Web of Things server, we want to minimise the memory footprint and the processing requirements.  A more powerful server can additionally monitor and validate messages against the data model, and perform rich semantic operations, e.g. for service discovery and composition, or to ensure that compatible versions of software are being used in a distributed system where different components run on different platforms and come from different vendors.

One way this impacts the data model serialisation is whether it is necessary to process the models into a set of RDF triples and operate on those, or whether this can be avoided through direct access to a hierarchical data model. I have explored the latter approach.  A minimal server needs to know the names of the events, properties and actions, but it doesn’t need to know the URIs for each RDF node. This avoids the need for minimal servers to download the JSON-LD contexts that map short names to RDF URIs, and assumes that the short names are sufficient for mapping to the messages needed for a particular choice of protocol and messaging scheme. A desirable feature is the ability to process the data model in a streaming mode so as to minimise memory needs.
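The following sketch (reusing the invented model format from above) shows what I mean: the server walks the JSON directly and ignores the @context, which is only needed if you want the RDF URIs; a streaming JSON parser could extract the same names without holding the whole model in memory:

    import json

    # A minimal server only needs the short names of events, properties and
    # actions, so it can walk the JSON model directly rather than expanding
    # it into RDF triples via the JSON-LD context.

    model_text = '''{
      "@context": "https://example.org/wot-context.jsonld",
      "events":     {"bell": null},
      "properties": {"is_open": {"type": "boolean"}},
      "actions":    {"unlock": null}
    }'''

    model = json.loads(model_text)

    # the @context, and hence the RDF URIs, can simply be ignored
    for kind in ("events", "properties", "actions"):
        for name in model.get(kind, {}):
            print(kind, name)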

In my experimental work, I further avoid the need for a minimal server to retain a dictionary for the names of events, properties and actions. This relies on a deterministic algorithm for mapping these names to numeric identifiers. When a proxy server and a thing server communicate, they use the same identifiers, which enables more compact messages as well as reducing the server memory footprint. This points the way to more efficient message encoding standards, akin to EXI for XML.
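One possible deterministic mapping, sketched below with the invented model from earlier, is simply to number the events, properties and actions in the order they appear in the thing description; both ends process the same description, so they derive the same identifiers without ever exchanging a dictionary of names:

    # Sketch of a deterministic name-to-identifier mapping. This assumes that
    # both servers preserve the order in which names appear in the description.

    model = {
        "events":     {"bell": None},
        "properties": {"is_open": None},
        "actions":    {"unlock": None},
    }

    def assign_identifiers(model):
        ids = {}
        next_id = 1
        for kind in ("events", "properties", "actions"):
            for name in model.get(kind, {}):
                ids[(kind, name)] = next_id
                next_id += 1
        return ids

    ids = assign_identifiers(model)
    reverse = {v: k for k, v in ids.items()}

    # a property update can now be sent as e.g. (2, true) rather than
    # ("is_open", true), and decoded at the other end via the reverse mapping
    print(ids[("properties", "is_open")])   # -> 2
    print(reverse[2])                       # -> ('properties', 'is_open')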

In some applications that are less price sensitive, other considerations may be more important. The use of XML may be required where companies have a long-standing investment in XML tool chains. We need to broaden the range of companies and application domains represented in the W3C Web of Things Interest Group, to ensure that we have a better picture of the varying requirements.  A complementary approach would be to reach out to external groups and to gain implementation experience with a variety of techniques.  What would it take to enable the Web of Things to bridge to IoT platforms designed for M2M, OIC and AllJoyn, amongst others?

—
   Dave Raggett <dsr@w3.org>

Received on Sunday, 4 October 2015 12:47:36 UTC