
Capability-related questions to be answered

From: Dan Burnett <dburnett@voxeo.com>
Date: Thu, 26 Jan 2012 06:18:42 -0500
Message-Id: <C543B9B1-71C8-4A15-B74C-D69BEEA3A5AD@voxeo.com>
To: public-webrtc@w3.org

Based on the emails over the last few days, here is a combined list of core questions related to Capabilities that I think need to be discussed and answered, possibly at the f2f meeting next week.  Only the final one is syntax-related, as I have tried to keep this high-level.

Others should feel free to add any core questions that I missed.

-- dan

1. Privacy -- what level and quantity of capability information can be shared?
  a. Does the answer vary based upon the state of the call, e.g., before the "call" is accepted, before any media streams are requested, after specific devices/streams have previously been requested, etc.?
  b. Does the answer change if we require end user permission first?
  c. How much of the end user permission must be granted at each use, how much can be granted for a time period or per site, and how much can be established through browser configuration/settings?

2. Relationship to Hints API.
  a. Does the Capabilities API depend upon the Hints API in any form?  Is it possible to determine the minimum information necessary in the Capabilities API without fully understanding how that information will be used in generating Hints?  The question about whether or not we need a Capabilities API at all is a subset of this general question.
  b. Assuming we need a Capabilities API, is a separate Capabilities registry required?  If we decide that the Capabilities API will just be the Hints API in reverse, there is no need for a separate registry and little need for additional syntax.

3. Interactions that must be described in the Capabilities API
  a. What level of interaction within a media type/stream and among types/streams must the Capabilities API describe?  Example interactions it might describe are: allowed combinations of width and height for video; allowed combinations of pixel count and bandwidth for video; allowed combinations of audio and video (since a combined camera/microphone array might have interrelated features such as moving the camera to look in the direction of the loudest sound).
  b. Depending on the interactions that must be described, how can we efficiently represent these interactions without introducing a combinatorial explosion in the returned capabilities list?
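To make 3b concrete, one way to avoid enumerating every allowed combination is to advertise per-dimension ranges plus a linked cross-dimension constraint.  The sketch below is purely illustrative (a hypothetical data shape I made up for this email, not proposed API syntax): each dimension is given as a min/max range, and an additional maxPixels bound captures the interaction between width and height without listing each pair.

```javascript
// Hypothetical capability description (illustrative only, not proposed
// syntax): ranges per dimension plus a linked constraint, instead of an
// exhaustive list of allowed (width, height) pairs.
const videoCapability = {
  width:  { min: 160, max: 1920 },
  height: { min: 120, max: 1080 },
  // Linked constraint: total pixel count is bounded even though each
  // dimension's full range is individually allowed.
  maxPixels: 1280 * 720,
};

// Check whether a requested mode falls within the advertised capability.
function supports(cap, width, height) {
  return width >= cap.width.min && width <= cap.width.max &&
         height >= cap.height.min && height <= cap.height.max &&
         width * height <= cap.maxPixels;
}

console.log(supports(videoCapability, 640, 480));   // true: within all bounds
console.log(supports(videoCapability, 1920, 1080)); // false: exceeds maxPixels
```

A representation like this stays linear in the number of dimensions and constraints, rather than multiplicative in the number of allowed values, which is the explosion 3b worries about.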
Received on Thursday, 26 January 2012 11:19:15 UTC