RE: thing descriptions

Hi Dave,

I think we can also rely on the existing mechanisms for defining (complex) data types. Most likely the RDF data model will help us out there. I would like to discuss this in the next TD web meeting.

Best wishes
Sebastian

From: Dave Raggett [mailto:dsr@w3.org]
Sent: Thursday, 9 July 2015 20:09
To: Kaebisch, Sebastian
Cc: public-wot-ig@w3.org
Subject: Re: thing descriptions


On 9 Jul 2015, at 11:41, Kaebisch, Sebastian <sebastian.kaebisch@siemens.com> wrote:

In today’s call you introduced a simple temperature sensor with a location (room 10), units (celsius) and a representation (float).  This can be modelled as a thing property.  A potential description in JSON-LD could be:


{
  "@properties": {
    "sensor1": {
      "@context": "http://example.org/semantics",
      "role": "temperature-sensor",
      "location": "room 10",
      "units": "celsius",
      "type": "float"
    }
  }
}


where the context is a URI for a resource that binds the names to URIs in specific Linked Data vocabularies.


W3C could define a default context that is bound to a MIME content type for thing descriptions in JSON-LD. This would have the advantage of reducing the overhead for communicating thing descriptions. This default context would define "@properties", "role", "location", "units", "type" and "float", whereas "temperature-sensor" and "celsius" would be defined by the example.org context in the above example. Note that example.org is a hypothetical example for discussion purposes.
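To sketch the idea, such a default context resource might look roughly like the following. All vocabulary URIs here are invented placeholders for discussion, not actual W3C terms:

{
  "@context": {
    "properties": "http://www.w3.org/ns/wot#properties",
    "role": "http://www.w3.org/ns/wot#role",
    "location": "http://www.w3.org/ns/wot#location",
    "units": "http://www.w3.org/ns/wot#units",
    "type": "http://www.w3.org/ns/wot#type",
    "float": "http://www.w3.org/ns/wot#float"
  }
}

A thing description served with the corresponding MIME content type could then omit this context entirely, which is where the saving in communication overhead would come from.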

The idea of the model is to have a more abstract understanding of what we would see in the TD. Your concrete sample actually matches the model I presented yesterday very well: you indicate both the description for identification (sensor1, role / context, location) and the data (type, unit, etc.). However, some questions arise: how do you describe complex types (e.g., a set of 10 nested values of different types (int, float, date, etc.))? And how do you describe the parameters of a function of an actuator?

JSON-LD allows you to nest descriptions as needed to describe complex types. Essentially, you use a nested JSON object to list the metadata for a property; this can include sub-properties via further nested JSON objects, those sub-properties can have JSON objects for their own metadata, and so forth. In principle, you can also reference external metadata to enable abbreviated object models. The server may need to download the external model in order to construct the virtual object for application scripts. A really simple temperature sensor could perhaps be defined as follows:

{
  "@properties": {
    "@context": "http://…",
    "temperature": "float"
  }
}

where the context binds "temperature" to the semantics of a temperature reading in, say, "kelvin". RDF assumes the open world hypothesis, but sometimes it would be convenient to be able to override defaults. It would be worth chatting with Semantic Web experts about how to approach that.
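As a sketch of how the complex-type question above might be answered with nesting, a property's "type" could itself be a JSON object whose members are sub-properties with their own types. The names here are purely illustrative:

{
  "@properties": {
    "weather" : {
      "@context": "http://example.org/semantics",
      "type": {
        "temperature": "float",
        "humidity": "int",
        "timestamp": "date"
      }
    }
  }
}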

The same approach applies to the data passed with events, and to the parameters passed to an action and to its results (if any). My door example defines an unlock action with null, meaning the action expects no data. The same is true for the "bell" event, but the "key" event expects a boolean argument named "valid". In essence, JSON-LD allows you to provide a value as a string or, when you want to provide richer annotations, as a JSON object (or array).
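For illustration, the door example's events and actions might be written along these lines. This is only a sketch: the "@events" and "@actions" names are assumed here by analogy with "@properties", not taken from an agreed vocabulary:

{
  "@events": {
    "bell": null,
    "key": { "valid": "boolean" }
  },
  "@actions": {
    "unlock": null
  }
}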

These core semantics enable servers to create local instances of virtual objects in the execution space of application scripts running on that server. The server hides the details of the protocols, messages and encodings used to communicate with other web of things servers. An app developer just needs to know the URI for the thing description, and once the object has been instantiated, the script written by the developer can listen for events, read the thing's properties and invoke the methods for the actions it supports.

This may be true if you only consider the Thing-to-Server / Cloud / Browser scenario. However, we should also consider Thing-to-Thing interaction and resource-constrained devices (e.g., microcontrollers). For example, what would a plug & play scenario look like if we have one lamp in a room and you install a new switch in that room? What is required at the semantic level to orchestrate both things automatically?

The model semantics remain the same. The semantic model for a switch could include a property whose value is a lamp. However, when the switch is installed it doesn't yet have the URI for the lamp it will eventually be used to control. This can be set later, e.g. through an action exposed by the switch. The issue is thus more about discovery and binding: how do the lamp and the switch become aware of each other? How does the switch become bound to that lamp? What if you plug in several lamps? The devices could be pre-paired in the factory, e.g. because they are sold as a pair; in this case the switch could seek out the lamp directly. Or the devices could advertise themselves, and as the owner of a smart home you could use your smart phone to pair the devices via the phone's touch screen.
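To make the switch model concrete, its description could hypothetically expose a property holding the lamp's URI and an action for setting that binding after installation. All names here are illustrative, not part of any agreed vocabulary:

{
  "@properties": {
    "lamp": { "type": "thing" }
  },
  "@actions": {
    "bindLamp": { "lamp": "uri" }
  }
}

Pairing via a phone would then amount to invoking "bindLamp" with the URI of the chosen lamp's thing description.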

Best regards,
—
   Dave Raggett <dsr@w3.org>

Received on Friday, 10 July 2015 19:25:37 UTC