- From: BELLESSORT Romain <Romain.Bellessort@crf.canon.fr>
- Date: Tue, 13 Oct 2015 16:49:16 +0000
- To: Dave Raggett <dsr@w3.org>
- CC: "Bassbouss, Louay" <louay.bassbouss@fokus.fraunhofer.de>, "fd@w3.org" <fd@w3.org>, "Hund, Johannes" <johannes.hund@siemens.com>, "public-wot-ig@w3.org" <public-wot-ig@w3.org>, FABLET Youenn <Youenn.Fablet@crf.canon.fr>
Hi Dave,

I totally understand the need for the WoT model and the benefits of its flexibility. What I wanted to stress is simply that the proposed Thing API could rely on a more generic API which would address use cases not covered by WoT. As you mention, some of these use cases could be covered by WoT (e.g. by adding a discovery agent that can use UPnP data to construct metadata), but they don't necessarily need something as sophisticated. For instance, a web app may be built to interact with only one specific type of service: in this case, it does not benefit from the flexibility of the WoT model; all it needs is a discovery/communication API (i.e. a more generic version of the Thing API).

As a side note, I believe that a generic discovery/communication API would also be very useful for WoT. Indeed, in addition to providing a means which is currently lacking (as the Thing API would), its extensibility would also allow developers to keep experimenting and to add custom features if needed.

Again, my email was not aiming at questioning the WoT model, but simply at providing feedback on the Thing API proposal.

Best regards,

Romain.

> -----Original Message-----
> From: Dave Raggett [mailto:dsr@w3.org]
> Sent: Monday, 12 October 2015 20:38
> To: BELLESSORT Romain
> Cc: Bassbouss, Louay; fd@w3.org; Hund, Johannes; public-wot-ig@w3.org; FABLET Youenn
> Subject: Re: [TF-DI] Thing API proposal (was RE: [TF-DI] Agenda and webex details - 24 Sept 2015 at 15:00 CEST)
>
> Hi Romain,
>
> The aim of the Web of Things is to simplify scripting by decoupling it from the details of protocols and messaging formats. The models are essential to this by enabling servers to create the scriptable objects that act for sensors and actuators, as well as for abstract entities. The models allow the server to figure out what kinds of messages are needed to wire up the software objects in the script execution space to the protocol drivers.
> Some IoT devices may use CoAP, others MQTT, and some might use HTTP, Web Sockets or XMPP. Other IoT devices could use IoT communications technologies like ZigBee or EnOcean, and require a gateway to expose them to the Web of Things.
>
> When you say discovery based upon device type, this is where the semantic descriptions of things come in, as we will need a common vocabulary for talking about things of the same type. For interoperability, we need to know what the data model is so that we can transform the data types as necessary, e.g. transforming temperature units or video formats. We need metadata for the protocols and data formats a given device supports so that we can talk to it using the protocols it understands. Likewise, we need security metadata so that we know what its requirements are for authentication, encryption, etc.
>
> I am probably misunderstanding you. When a Web app would like to interact with a device/service, it would need to learn about it in some way: what kind of device it is, what interfaces it supports, what protocols and data formats it supports, and so forth. This metadata could be provided directly by the device, or indirectly, from another source. By abstracting discovery from the underlying mechanisms, we can provide considerable flexibility in how this can be realised. Perhaps a discovery agent can use the UPnP data to construct the metadata or to look it up somehow, e.g. via a query on a cloud service. We want to avoid the service logic needing to know the details of the UPnP XML formats and identifiers.
>
> Best regards,
> Dave
>
> On 12 Oct 2015, at 18:05, BELLESSORT Romain <Romain.Bellessort@crf.canon.fr> wrote:
>
> Hi Louay,
>
> Thanks for your response. By other contexts, I basically mean all the cases where a web app would like to interact with a remote device/service but without having to rely on the WoT model.
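[As an illustration of the discovery-agent idea discussed above, the sketch below shows how UPnP device-description data could be mapped to protocol-neutral metadata so that service logic never touches UPnP XML identifiers. The metadata shape, the function name, and the field names are assumptions for illustration; they are not part of any WoT specification.]

```javascript
// Illustrative sketch only: a "discovery agent" step that maps fields
// from a UPnP device description (already parsed from XML) onto a
// protocol-neutral metadata object. The output shape is an assumption,
// not a defined WoT format.

// A reverse-lookup table keeps UPnP-specific type URNs out of
// application code.
const UPNP_TYPE_MAP = {
  "urn:schemas-upnp-org:device:BinaryLight:1": "light",
  "urn:schemas-upnp-org:device:MediaRenderer:1": "mediaRenderer"
};

function upnpToThingMetadata(upnpDevice) {
  return {
    name: upnpDevice.friendlyName,
    type: UPNP_TYPE_MAP[upnpDevice.deviceType] || "unknown",
    // Protocol metadata, so a script can talk to the device using a
    // protocol it understands (here, only UPnP control is listed).
    protocols: [{ kind: "upnp", controlURL: upnpDevice.controlURL }]
  };
}

// Example input, as a discovery agent might produce it after parsing
// the device description XML (all values invented for illustration).
const metadata = upnpToThingMetadata({
  friendlyName: "Living room lamp",
  deviceType: "urn:schemas-upnp-org:device:BinaryLight:1",
  controlURL: "http://192.168.1.20:49152/control"
});
```

Service logic would then branch on the abstract `type` ("light") rather than on the UPnP URN.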
>
> This may especially be the case when interacting with local devices that can run servers, such as network cameras, TVs, copiers... Services running on such devices can be discovered based on their types (e.g. UPnP service types), these types also defining how to interact with selected services. Therefore, in this context, relying on the WoT model and a WoT-customized API is not necessary. More generally, not all services may want to provide a Thing Description, just as not all web services provide a WSDL description. A generic API would allow addressing such cases.
>
> Of course, it would be possible to define a generic API on top of the Thing API (e.g. by defining a default "sendMessage" action and "onmessage" event), but it seems more logical to define a specialized API over a generic API than the other way around.
>
> Regards,
>
> Romain.
>
> -----Original Message-----
> From: Bassbouss, Louay [mailto:louay.bassbouss@fokus.fraunhofer.de]
> Sent: Friday, 9 October 2015 18:22
> To: BELLESSORT Romain; 'fd@w3.org'; 'Hund, Johannes'; public-wot-ig@w3.org
> Cc: FABLET Youenn
> Subject: RE: [TF-DI] Thing API proposal (was RE: [TF-DI] Agenda and webex details - 24 Sept 2015 at 15:00 CEST)
>
> Hi Romain,
>
> Thx for your feedback, I saw your mail on the Presentation API CG ML. As you said, the Thing API proposal follows some ideas of the Presentation API, especially abstracting from discovery and communication technologies as well as showing a selection dialog for the user (select displays vs select things). I agree with you that we can make a more generic API for communication that can be customized to support the WoT TD and other contexts. Do you have use cases that cover "other contexts"?
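[The layering debated in this exchange — a specialized property/action API defined over a generic message-exchange primitive — could be sketched roughly as below. All names (GenericChannel, ThingFacade, the message shape) are hypothetical and only illustrate the direction of the layering; neither proposal defines them.]

```javascript
// Hypothetical sketch: a generic message-exchange layer, with a
// WoT-style facade defined on top of it (rather than the other way
// around). Names and the JSON message shape are invented for
// illustration.

// Generic layer: opaque messages in, opaque messages out.
class GenericChannel {
  constructor(transport) {
    this.transport = transport; // e.g. a WebSocket-like object
    this.onmessage = null;
    transport.onmessage = (msg) => {
      if (this.onmessage) this.onmessage(msg);
    };
  }
  sendMessage(msg) { this.transport.send(msg); }
}

// Specialized layer: property/action semantics expressed as messages
// over the generic channel.
class ThingFacade {
  constructor(channel) {
    this.channel = channel;
  }
  setProperty(name, value) {
    this.channel.sendMessage(JSON.stringify({ op: "set", name, value }));
  }
  invokeAction(name, args) {
    this.channel.sendMessage(JSON.stringify({ op: "invoke", name, args }));
  }
}

// Loopback transport for demonstration: just records what was sent.
const sent = [];
const loopback = { send: (m) => sent.push(m), onmessage: null };

const thing = new ThingFacade(new GenericChannel(loopback));
thing.setProperty("brightness", 80);
thing.invokeAction("toggle", {});
```

A web app that only needs raw messaging would use `GenericChannel` directly; a WoT app would use the facade.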
>
> regards,
> Louay
> ________________________________________
> From: BELLESSORT Romain [Romain.Bellessort@crf.canon.fr]
> Sent: Friday, October 09, 2015 1:12 PM
> To: Bassbouss, Louay; 'fd@w3.org'; 'Hund, Johannes'; public-wot-ig@w3.org
> Cc: FABLET Youenn
> Subject: RE: [TF-DI] Thing API proposal (was RE: [TF-DI] Agenda and webex details - 24 Sept 2015 at 15:00 CEST)
>
> Hi Louay,
>
> Thanks for this interesting proposal. We have been following both the Presentation API and the WoT IG, and we agree that the Presentation API may provide a good model for a broader discovery and interaction API (we recently proposed that the Second Screen CG work on such an API [1], but of course we would also be interested if things were to happen in another group).
>
> In addition to comments regarding the number of devices (selection, filtering), which have already been addressed, I was wondering about the scope of this API. Your proposal is somewhat specific to the WoT model as it relies on property/action/event. This makes perfect sense in the WoT context, but have you investigated the opportunity of simply defining how to send/receive messages, as in the Presentation API? Such an API could then be customized to obtain something like the Thing API in the WoT context, or another API in another context. Maybe your implementation builds over such a generic message-exchange primitive?
>
> Regards,
>
> Romain.
>
> [1] https://lists.w3.org/Archives/Public/public-webscreens/2015Oct/0000.html
>
> -----Original Message-----
> From: Bassbouss, Louay [mailto:louay.bassbouss@fokus.fraunhofer.de]
> Sent: Friday, 25 September 2015 12:59
> To: 'fd@w3.org'; 'Hund, Johannes'; public-wot-ig@w3.org
> Subject: AW: [TF-DI] Thing API proposal (was RE: [TF-DI] Agenda and webex details - 24 Sept 2015 at 15:00 CEST)
>
> Hi Francois,
>
> Thx a lot for your feedback ;) please find my comments inline.
>
> Louay
>
> > Not raised during the call as I was busy scribing, but I have a couple of comments as well, so thought I'd share them here.
> >
> > The Presentation API is currently limited to selecting only one second screen at a time. That is not a real problem as the main use cases considered only involve one second screen (at least one second screen at a time). However, I wonder whether discovery of a single Thing is also a common use case for connected objects.
> >
> > For instance, looking at the Generic Sensor API [1] that was mentioned during the call, I see that the entry-point to that API is to retrieve and monitor *a list of sensors*. Although a Thing in the WoT case may contain more than one sensor, I suppose that, most of the time, the mapping will be one Thing per physical object, as in "a light bulb". I see value in the ability to select and interact with a particular light bulb, but I also think that it might be useful to select "all the light bulbs in this room", for instance.
> >
> > Would supporting the ability to select more than one Thing at a time be useful? Do you see what API changes could do the trick? (That feature could actually be useful for a future version of the Presentation API)
>
> [Louay] Completely agree on this. I think multiple selection is even more relevant for the Thing API than for the Presentation API, because in the Presentation API, when you start a PresentationRequest, the presentation page is launched when the user selects a display, whereas in the Thing API proposal the web page will only get the thing.
>
> To support multiple selection, an array of things can be passed to the page when the Promise is resolved, instead of only one (i.e. "things" instead of "thing"):
>
>   ThingRequest(filter).start()
>     .then(function(things) { ... })
>     .catch(function(err) { ... });
>
> > Also, although we're addicted to screens, the typical number of available displays for the Presentation API should remain pretty small in most contexts. There may be more Things to choose from, which might mean that the list could grow out of control. I suppose that the user agent could be smart enough to group things together, but that would require additional logic on their side.
>
> [Louay] Yes, how to show the Things in the dialog is a feature of the UA. E.g. the UA may offer groups and a search field, and may also sort things according to user preferences or proximity.
>
> > Or it may be that the light example is not a very good one. Requiring the user to select a light in a list just to be able to switch it on or off may not lead to the best user experience. The API may be much more useful to interact with more complex things: the user won't have many of them and selecting only one will be the default need. What do you think?
>
> [Louay] Yes, I agree that if the user always needs to select a Thing from the dialog in order to interact with it, that is not a good user experience. This is why my API proposal has another function, navigator.things.getById(id).then(...), which is relevant for things that have already been selected by the user. This means the user needs to select a Thing from the dialog only once in most cases. The dialog is needed to get access to NEW things which are not yet available to the web page. In addition, the web page can use the function
>
>   thing.getReachability().then(function(reachability) {
>     handleReachabilityChange(reachability.value);
>     reachability.onchange = function() {
>       handleReachabilityChange(this.value);
>     };
>   });
>
> to watch the reachability of a Thing.
> This is very similar to the getAvailability() function in the Presentation API.
>
> > Thanks,
> > Francois.
> >
> > [1] https://w3c.github.io/sensors/
>
> —
> Dave Raggett <dsr@w3.org>
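[The reachability-watching pattern proposed above can be exercised end to end with a small mock, since no browser implements the proposed Thing API. MockThing, Reachability, and the `_setReachable` test hook below are invented stand-ins; only the `getReachability()` promise shape and the `value`/`onchange` members follow the proposal as quoted.]

```javascript
// Minimal mock of the proposed reachability pattern: getReachability()
// resolves to an object with a current .value and an .onchange handler.
class Reachability {
  constructor(value) {
    this.value = value;
    this.onchange = null;
  }
  _update(value) {
    this.value = value;
    if (this.onchange) this.onchange();
  }
}

class MockThing {
  constructor() {
    this._reachability = new Reachability(true);
  }
  getReachability() {
    return Promise.resolve(this._reachability);
  }
  // Test hook simulating the device going offline/online.
  _setReachable(value) {
    this._reachability._update(value);
  }
}

const observed = [];
function handleReachabilityChange(value) {
  observed.push(value);
}

// Usage mirrors the snippet quoted in the thread.
const thing = new MockThing();
thing.getReachability().then(function (reachability) {
  handleReachabilityChange(reachability.value);
  reachability.onchange = function () {
    handleReachabilityChange(this.value);
  };
  // Simulate the Thing becoming unreachable, then reachable again.
  thing._setReachable(false);
  thing._setReachable(true);
  console.log(observed); // → [ true, false, true ]
});
```

Note that `this` inside `onchange` refers to the Reachability object, which is why `this.value` works in the quoted snippet.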
Received on Tuesday, 13 October 2015 16:49:56 UTC