- From: Bran, Cary <cary.bran.standards@gmail.com>
- Date: Thu, 08 Dec 2011 19:04:59 -0800
- To: "public-webrtc@w3.org" <public-webrtc@w3.org>
- Message-ID: <CB06BADB.687F%cary.bran.standards@gmail.com>
Hi,

I have heard a lot of discussion about getting access to the user's camera and microphone for WebRTC. One area where I am unclear, and would like to ask the WG about, is how WebRTC will interact with a headset that is connected to the host OS. If I missed the discussion on this list, or if this belongs in another WG, please disregard this message and point me in the right direction.

Here is a fairly basic use case I have been thinking of: I get a call from Alice, and my user agent (web browser) rings. I am wearing a headset connected to the host OS, and I click the button on the headset to pick up the call. The web application running on the user agent recognizes the click event from the headset and starts the call with Alice, with audio going to my headset. During the call my daughter comes into the room chasing the cat, so I double-click on the headset and it mutes my audio; the web application shows that my call is muted, and Alice does not hear the ruckus. My daughter leaves the room and I un-mute by clicking the mute button in the web application; my headset un-mutes its mic. Alice and I finish the call and I click my headset to end it; the call ends, and that is reflected by the web application.

I think this use case is different from just exposing the connected Bluetooth headset as a local mic/speaker, which I think would behave no differently than what has already been specified. What I am trying to illustrate is that the user agent is aware the host OS has a headset connected to it, and can take advantage of the additional features the headset provides, by offering a mechanism for the headset to receive events from, and send commands to, the MediaStream and PeerConnection interfaces.

Thoughts?

-Cary
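To make the idea concrete, here is a rough sketch. The headset event source (`navigator.headset`, its `'doubleclick'` event, and `setMuteLamp`) is purely hypothetical, invented for illustration; no such API is specified today. The only real, specified piece used is the `enabled` flag on MediaStreamTrack, which silences a track without stopping it. The mute-state logic itself is plain JavaScript:

```javascript
// Plain call-state logic: track whether the local mic is muted and
// apply that state to the audio tracks of the local MediaStream.
class CallController {
  constructor(localStream) {
    this.localStream = localStream; // a MediaStream (or a stand-in)
    this.muted = false;
  }

  setMuted(muted) {
    this.muted = muted;
    // MediaStreamTrack.enabled is the specified way to silence a
    // track without tearing it down.
    for (const track of this.localStream.getAudioTracks()) {
      track.enabled = !muted;
    }
  }

  // Toggle and return the new mute state, so callers can update UI.
  toggleMute() {
    this.setMuted(!this.muted);
    return this.muted;
  }
}

// Hypothetical wiring: IF the user agent exposed headset button
// events, the web app could keep its UI and the headset in sync.
//
// navigator.headset.addEventListener('doubleclick', () => {
//   const nowMuted = controller.toggleMute();
//   updateMuteIndicator(nowMuted);           // app UI follows headset
//   navigator.headset.setMuteLamp(nowMuted); // command back to headset
// });
```

The point of the split is that the app's mute state stays authoritative: whether the toggle comes from the headset button or the page's own mute control, both paths funnel through the same controller, so the headset lamp, the UI, and the actual track state cannot drift apart.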
Received on Friday, 9 December 2011 03:04:40 UTC