- From: Clarke Stevens <C.Stevens@CableLabs.com>
- Date: Tue, 6 Dec 2011 15:11:48 -0700
- To: WebIntents <public-web-intents@w3.org>
- Message-ID: <CB03DCB5.15EC7%c.stevens@cablelabs.com>
My question on the use case is related to context and state. It seems that the primary model for Intents is this (please correct me if I'm wrong):

1. A (verb) request is made.
2. A list of (nouns) that can handle the (verb) is returned.
3. The user selects from the list of (nouns) the one to handle the (verb).

Does the user application know which (noun) handled the (verb)?

Here's the potential problem: if the (verb) is "on", then perhaps several dozen devices in my home will respond. In the case of my big TV, I actually want the specific TV in front of me to turn on, as well as the surround system connected to it. When I turn up the volume on my TV, I really want that command to go to the surround system, not the TV. When I turn off the system, I may need to know that the TV and the surround system are both on, in case the power is a toggle switch, so I don't put them in the wrong state. When I go to a different room in the house, I want to control a different TV.

The point is, something has to keep track of context and state. An application can certainly do this, but does this work with Intents? The user can't be bothered with selecting the (noun) every time the (verb) is invoked. Does Intents give the user application enough information to keep track of this? Also, what happens if the user browses to a different web page? Is all that information lost?

As I've said before, I'm convinced that Intents can be used to discover devices, but I still have a lot of questions about whether it is the right tool for the other communication tasks.

Thanks,
-Clarke

From: timeless <timeless@gmail.com>
Date: Tue, 6 Dec 2011 12:07:27 -0700
To: WebIntents <public-web-intents@w3.org>
Subject: Web Intents - Scenario: TV System (part 6)

While this article talks about a TV, I'm really only using a "TV" because it's something with which most people are familiar. TVs traditionally have a number of knobs to control unrelated settings, and many also support infrared remote controls. Often we're so lazy that we avoid walking to our TVs and instead rely on these remotes to control them (or we may claim that by doing this we avoid interrupting everyone else's view).

I've been meaning to introduce detailed use cases for a while. Home media centers are much more complicated than the TV below; mine [1] certainly is.

§13 TV Controls Scenario.

Imagine a TV as having the following inputs:

A. Power button
B. Mute button
C. Volume spinner
D. Source selector
E. Channel selector
F. Brightness selector

Each of these is commonly found on a TV, and one can often get a dozen remotes that can all control some of these items on that same one TV. There is no requirement that all remotes have buttons to control all of these items (some remotes are more limited or simplistic than others, some are less universal than others, and some have been programmed so certain buttons control other devices instead).

If we accept controlling each of these as an independent action (and hopefully you do), then we can assign intents for them. But obviously the TV wants to advertise supporting all of them, and that's fine. When I get a programmable remote, I teach it about a device, and then the remote lets me select which actions I want to program for that device. Supporting roughly that is the goal.
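To make that concrete, here is a minimal sketch of what "one intent per control" might look like from a controller page, assuming the client API of the webintents.org draft of the time (new Intent(action, type, data) plus navigator.startActivity); every action URL below is invented for illustration, since no such actions were ever standardized.

  // Hypothetical action URLs, one per control (A-F above); none of
  // these are real, standardized intent actions.
  var ACTIONS = {
    power:      "http://example.org/intents/power",             // A
    mute:       "http://example.org/intents/mute",              // B
    volume:     "http://example.org/intents/adjust-volume",     // C
    source:     "http://example.org/intents/select-source",     // D
    channel:    "http://example.org/intents/adjust-channel",    // E
    brightness: "http://example.org/intents/adjust-brightness"  // F
  };

  // Pressing the page's power button raises a <power> intent; a TV
  // that advertises all six actions shows up as a candidate here,
  // but only for this one action.
  document.getElementById("power").onclick = function () {
    var intent = new Intent(ACTIONS.power, "text/plain", "toggle");
    navigator.startActivity(intent);
  };

On the TV's side, "advertising all of them" would presumably be one registration per action, so that a limited remote page can pick up just the actions it has buttons for.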
§14 Use Case: Programmable remote control for a home entertainment system.

Steps:

1. User's UA discovers the home network (HN) devices: TV, Stereo, VCR, DVD.
2. User loads the "programmable remote control" web page.
3. Page has buttons for the controls above: (A) power, (B) mute, (C) volume, (D) source, (E) channel, (F) brightness.
4. User presses the "power button" (A) on the page.
5. Page triggers the <power> action (Ai).
6. UA offers a list of "powerable" devices: TV, Stereo, VCR, DVD.
7. User selects the TV.
8. UA remembers all the actions the TV supports, but only maps TV-power to the page (Ai:TV).
9. Page triggers the <power> action (At).
10. UA sends the TV the power signal (At:TV) -- wake on LAN?
11. User selects "channel up" (E) on the page.
12. Page triggers the <adjust-channel> action (Ei).
13. UA has a list of channel-adjustable devices: TV, VCR (our stereo doesn't do AM/FM/XM -- don't ask).
14. UA suggests the TV, indicating it's already used for the <power> action by this page, but offers the user the choice of the VCR.
15. User chooses the TV.
16. UA maps TV-adjust-channel to the page (Ei:TV).
17. Page triggers the <adjust-channel> action "up" (Et).
18. UA sends the TV the adjust-channel "up" signal (Et:TV).
19. User presses "volume up" (C).
20. Page triggers the <adjust-volume> action (Ci).
21. UA has a list of volume-adjustable devices: TV, Stereo, VCR, DVD.
22. UA suggests the TV, indicating it's already used for the <power> and <adjust-channel> actions by this page, but offers the user the choice of the Stereo, VCR, and DVD.
23. User chooses the Stereo.
24. UA maps Stereo-adjust-volume to the page (Ci:Stereo).
25. Page triggers the <adjust-volume> action "up" (Ct).
26. UA sends the Stereo the adjust-volume "up" signal (Ct:Stereo).

We've now mapped three actions to our page. The user is happy. Ideally the UA will remember these mappings, so the next time the page loads the user won't need to map them again unless an action fails or the user wants to adjust them.

Note that in this step list there are two trigger phases (e.g. steps 5 and 9 for <power>). I don't think this distinction will exist in the API we select, although I could imagine apps cheating by choosing state-query messages for their initial intent message, or some other null action, e.g. "increase volume by 0" (see the sketch below).

[1] http://lists.w3.org/Archives/Public/public-device-apis/2011Nov/0087.html
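A minimal sketch of that "null action" cheat, under the same assumptions as the earlier snippet (draft navigator.startActivity client API, invented action URL): on load, the page fires a harmless volume adjustment of 0, so the UA runs its device picker (steps 21-24) and can remember the mapping before the user presses a real button.

  // Hypothetical action URL; not a standardized intent action.
  var ADJUST_VOLUME = "http://example.org/intents/adjust-volume";

  // Fire a no-op intent up front so the UA prompts for (and can
  // remember) a device mapping without actually changing anything.
  function primeVolumeMapping() {
    var probe = new Intent(ADJUST_VOLUME, "text/plain", "0");
    navigator.startActivity(probe, function (result) {
      // A device is now mapped; later "up"/"down" intents should be
      // routed there without re-prompting.
    }, function (error) {
      // User dismissed the picker or delivery failed; fall back to
      // prompting when a real volume button is pressed.
    });
  }

As far as the draft goes, the success callback only carries whatever the handler chooses to post back; nothing in it says which device answered, which is exactly the context/state question raised at the top of this thread.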
Received on Tuesday, 6 December 2011 22:14:18 UTC