[TF-DI] Thing API proposal (was RE: [TF-DI] Agenda and webex details - 24 Sept 2015 at 15:00 CEST)

Hi Louay,

> -----Original Message-----
> From: Hund, Johannes [mailto:johannes.hund@siemens.com]
> Sent: Thursday, 24 September, 2015 15:59
> To: Bassbouss, Louay <louay.bassbouss@fokus.fraunhofer.de>; public-wot-
> ig@w3.org
> Subject: AW: [TF-DI] Agenda and webex details - 24 Sept 2015 at 15:00 CEST
> 
> Hi Louay,
> 
> I greatly enjoyed the presentation about the API to discover things, explicitly
> also the proposal for a thing API that enables accessing properties and actions
> from browsers, which could be a good input for a generic client-side API.

Ditto :)

> 
> I would like to invite you to present this aspect also in the TF-AP call so we could
> continue the discussion that was started.
> 
> There were two points raised by Dave, which I would like to pick up:
> 
> (1) are the concepts generic enough so we could use them in different
> programming languages (e.g. describe them in an IDL)?
> (2) could we have simplifications for the asynchronous nature, such as hiding the
> promise?

Not raised during the call as I was busy scribing, but I have a couple of comments as well, so I thought I'd share them here.

The Presentation API is currently limited to selecting only one second screen at a time. That is not a real problem, as the main use cases considered involve only one second screen (or at least only one at a time). However, I wonder whether discovering a single Thing at a time is also the common case for connected objects.

For instance, looking at the Generic Sensor API [1] that was mentioned during the call, I see that the entry point to that API is to retrieve and monitor *a list of sensors*. Although a Thing in the WoT case may contain more than one sensor, I suppose that, most of the time, the mapping will be one Thing per physical object, as in "a light bulb". I see value in the ability to select and interact with a particular light bulb, but I also think that it might be useful to select "all the light bulbs in this room", for instance.

Would supporting the ability to select more than one Thing at a time be useful? Do you see what API changes could do the trick? (That feature could actually be useful for a future version of the Presentation API.)
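
To make the question concrete, here is a rough strawman in TypeScript-flavoured code. None of these names exist in any spec or proposal; requestThings, ThingFilter and setProperty are made up for illustration, and the shape is closer to the Generic Sensor API's list-based model than to the Presentation API's single-selection one:

    // Purely illustrative: Thing, ThingFilter and requestThings below
    // are invented for this example, not taken from any spec.
    interface Thing {
      id: string;
      setProperty(name: string, value: unknown): Promise<void>;
    }

    interface ThingFilter {
      type?: string;      // e.g. "light"
      location?: string;  // e.g. "living-room"
    }

    // Hypothetical entry point that resolves to *all* matching Things
    // the user agrees to expose, rather than a single one.
    declare function requestThings(filter: ThingFilter): Promise<Thing[]>;

    // Switch off every light in the living room in one go.
    async function lightsOff(): Promise<void> {
      const lights = await requestThings({ type: "light", location: "living-room" });
      await Promise.all(lights.map(light => light.setProperty("on", false)));
    }

A single-Thing variant could then just be the same call with the promise resolving to a one-element list, which would keep the two use cases symmetrical.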

Also, although we're addicted to screens, the typical number of available displays for the Presentation API should remain pretty small in most contexts. There may be many more Things to choose from, which means that the list could grow out of control. I suppose that the user agent could be smart enough to group Things together, but that would require additional logic on its side.

Or it may be that the light example is not a very good one. Requiring the user to select a light in a list just to be able to switch it on or off may not lead to the best user experience. The API may be much more useful for interacting with more complex Things: the user won't have many of them, and selecting only one will be the default need. What do you think?

Thanks,
Francois.

[1] https://w3c.github.io/sensors/
