Re: Thing Description for existing data sources

To enable interoperability across platforms we need standards for a variety of metadata:

- the data and interaction models exposed to applications
- the information needed to access a thing on a given platform, along with the communication patterns
- the meaning of the thing in terms of its semantic model, its constraints, and its relationships to other things
- security- and privacy-related information, e.g. who can access it and for what purposes

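As a purely illustrative sketch (the field names, URIs and vocabulary terms below are invented and do not correspond to any agreed Thing Description format), the metadata for a single thing might group into these four areas roughly as follows:

```python
# Hypothetical sketch only: every name and URI here is made up for illustration.
thing_metadata = {
    # 1. data and interaction models exposed to applications
    "interactions": {
        "temperature": {"type": "number", "unit": "celsius", "access": "read"}
    },
    # 2. how to reach the thing on a given platform, and the communication pattern
    "protocolBindings": [
        {"href": "coap://example.org/sensors/1/temperature", "pattern": "observe"}
    ],
    # 3. semantic model: what the thing means, its constraints and relationships
    "semantics": {
        "@type": "http://example.org/vocab/TemperatureSensor",
        "locatedIn": "http://example.org/buildings/42/rooms/7"
    },
    # 4. security and privacy: who may access it, and for what purposes
    "security": {"scheme": "bearer-token", "purposes": ["building-automation"]}
}
```
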
The data and interaction models should be expressed in ways that are neutral with respect to application languages, e.g. C++, JavaScript, Java, Python, state machine languages and so forth. Formalising the data models in terms of JSON is thus a bad idea in my view. I further believe that the metadata terms for data models should be grounded in RDF, since this decouples them from the serialisation formats used for the metadata and provides globally unique identifiers that can be de-referenced for further information.
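
To illustrate the decoupling I have in mind, here is a minimal sketch using the rdflib library (the vocabulary and thing URIs are invented for the example): the same RDF graph, i.e. the same data model, is serialised to two different formats, while the terms themselves remain globally unique, de-referenceable identifiers.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

# Invented example vocabulary; in practice the terms would live at a stable,
# de-referenceable namespace maintained through a standardisation process.
EX = Namespace("http://example.org/wot/vocab#")

g = Graph()
g.bind("ex", EX)

sensor = URIRef("http://example.org/things/temperature-sensor-1")
g.add((sensor, RDF.type, EX.TemperatureSensor))
g.add((sensor, EX.hasProperty, EX.temperature))
g.add((sensor, EX.unit, Literal("celsius", datatype=XSD.string)))

# One data model, two serialisations of the same triples:
print(g.serialize(format="turtle"))
print(g.serialize(format="json-ld"))  # built into rdflib 6+; older versions need the rdflib-jsonld plugin
```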

The jury is out on the best ways to describe the communications metadata. This needs to evolve as we experiment with different platforms.

For the semantic models, I see a spectrum ranging from lightweight models to more complex models based on OWL ontologies. At the lightweight end, which I expect to be very successful, I am looking at how we can create really agile standardisation processes that reflect the evolving maturity of a given set of terms. Schema.org provides a useful precedent. This still needs to be grounded in RDF.
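
At the lightweight end, an annotation can be as small as a couple of triples using schema.org-style terms plus a capability vocabulary, which a consumer matches against the terms it understands. A minimal sketch with rdflib (the capability vocabulary below is invented; only schema.org's name term is real):

```python
from rdflib import Graph, Literal, Namespace, URIRef

SCHEMA = Namespace("https://schema.org/")            # schema.org as the lightweight precedent
CAP = Namespace("http://example.org/capabilities#")  # invented capability vocabulary

g = Graph()
thing = URIRef("http://example.org/things/thermostat-7")
g.add((thing, SCHEMA.name, Literal("Meeting room thermostat")))
g.add((thing, CAP.measures, CAP.Temperature))

# A consumer that knows only the lightweight terms can still ask useful questions:
results = g.query("""
    SELECT ?t WHERE {
        ?t <http://example.org/capabilities#measures>
           <http://example.org/capabilities#Temperature>
    }
""")
for row in results:
    print(row.t)  # -> http://example.org/things/thermostat-7
```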

I am still wondering about the options for security- and privacy-related metadata. I am giving an invited talk on this tomorrow in Berlin as part of the IoT Tech Expo, and plan a blog post as a follow-up.

> On 13 Jun 2016, at 16:20, Kovatsch, Matthias <matthias.kovatsch@siemens.com> wrote:
> 
> There is no problem with Web linking techniques. I wanted to express the following clash I see:
>  
> On the one hand, I see a type system that defines data structures from primitive types (a technique that I would associate with RPC-style distributed systems). It lacks the possibility to describe individual elements semantically. If two applications have the same elements, but in a different structure, I would expect that a WoT machine can automatically convert structure A into its own structure B, or, in other words, can understand the meaning of each individual element in structure A and map it to its internal data model.
>  
> On the other hand, I see semantic annotations that define the meaning of an interaction or resource. A machine can learn which elements are important for the interaction at an information-model level, but not how to serialize them. With the current type system, we can only describe how anonymous elements are structured. The explicit mapping of which information-model element must go where in the structure needs to be defined in parallel to the structure itself.
>  
> Those two definitions need to be unified.
>  
> Personally, I see representation format definitions doing that (e.g., SenML defines structure and semantics in one place). However, the definitions are not machine-understandable, not even machine-readable. Thus my comment that we should think about this “something completely new” that enables us to define representation formats in a machine-understandable way (i.e., structure and semantics of elements in one place). This would also allow the pluggable approach the Web has been relying on to evolve: a fixed, well-designed core plus plugins that address the application problems of a specific time (cf. pre-Web 2.0 vs post-Web 2.0). The type system is currently overburdening the TD. We need to divide-and-conquer…
>  

—
   Dave Raggett <dsr@w3.org>

Received on Monday, 13 June 2016 19:15:30 UTC