Re: Constraints at request time (Re: Revised Constraints modification API proposal)

On Aug 23, 2012, at 9:50 AM, Adam Bergkvist wrote:

> On 2012-08-23 12:14, Harald Alvestrand wrote:
>> Forking this thread.....
>> 
>> On 08/22/2012 08:35 PM, Travis Leithead wrote:
>>>> From: Dan Burnett [mailto:dburnett@voxeo.com]
>>>> On Aug 21, 2012, at 4:56 PM, Travis Leithead wrote:
>>>>> As a reminder, the goal of this proposal is to facilitate "informed
>>>>> constraints" (i.e., allow constraints to be applied after existing
>>>>> client capabilities are known) in order to avoid potential pitfalls of
>>>>> blindly over-constrained use of getUserMedia across a range of different
>>>> devices.
>>>> 
>>>> I always wanted to have capabilities along with constraints, and I'm pleased
>>>> to see (finally) a realistic capabilities proposal.  Given that we will not always
>>>> have capabilities available for privacy reasons, I'd like to understand better
>>>> these "potential pitfalls of blindly over-constrained use".  I have heard that
>>>> mentioned but have not yet seen enough evidence to convince me that this
>>>> is a real problem.  Could you point me to some examples?
>>> [TODO]
>>> 
>>> 
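[For concreteness, the pitfall usually cited is the fully specified mandatory request that no device on a given machine can satisfy. A minimal sketch, with an assumed device list and an assumed range-style matching rule (this is not the getUserMedia algorithm, just an illustration):]

```javascript
// Sketch of the over-constraining pitfall (hypothetical helper, not the
// getUserMedia algorithm): a mandatory constraint that no device can
// satisfy makes the whole request fail, even though a usable camera exists.
function satisfiesMandatory(capabilities, mandatory) {
  return Object.keys(mandatory).every(function (key) {
    var cap = capabilities[key];
    // Treat each capability as a {min, max} range for this sketch.
    return cap !== undefined &&
           mandatory[key] >= cap.min && mandatory[key] <= cap.max;
  });
}

// A device list the page cannot see in advance (assumed values).
var devices = [
  { label: "built-in camera", capabilities: { width: { min: 320, max: 1280 } } }
];

// Blindly demanding 1920 pixels of width fails outright on this hardware...
var strict = devices.filter(function (d) {
  return satisfiesMandatory(d.capabilities, { width: 1920 });
});
// ...while a more modest mandatory constraint succeeds.
var modest = devices.filter(function (d) {
  return satisfiesMandatory(d.capabilities, { width: 1280 });
});
console.log(strict.length, modest.length); // 0 1
```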
>>>>> A secondary goal is to provide the right set of APIs for uniformly
>>>>> working with the devices that supply the local media stream tracks,
>>>>> for future APIs and scenarios we may wish to add.
>>>>> 
>>>>> If this proposal is adopted, I would expect that the existing
>>>>> constraint usage in getUserMedia could be significantly scaled back,
>>>>> if not removed altogether.
>>>> With the device-level only focus below, I don't think that would quite work.
>>>> One of the goals of the constraints approach was to free a developer who
>>>> just wants "a camera" from having to worry about which one.  They just
>>>> specify their constraints, mandatory or optional based on their needs, and if
>>>> they get something it is something they can use.
>>> I don't fully understand why the "device-level focus" prevents what you said
>>> from working. Even with the spec as-is, a developer only gets one shot at
>>> the right settings. For example, suppose they specify a constraint for some
>>> camera setting (perhaps white balance or flash?), then look at the resulting
>>> stream and don't like it... you would probably expect there to be another
>>> setting for white balance or flash on the track. So we're in a situation
>>> where we have a set of constraints and a set of settings that are basically
>>> duplicates. The proposal below consolidates these concepts. Using it, I
>>> would expect:
>>> 
>>> 1. User requests "video" from getUserMedia (because they want "a camera")
>>> 2. If there's some device that provides video, it is provided as a track on a local
>>>      media stream.
>>> 3. Now, the developer applies the same constraints to that track, and if the
>>>      constraints were applied, then we're good to go.
>>> 
>>> Not too much more complicated than the current draft. If the developer doesn't
>>> like the result, they can just re-apply a different set of constraints to the same
>>> track, and see if that works for them. This IMO is much simpler.
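[The three steps above might be sketched like this. The track object and the shape of `applyConstraints()` are assumptions standing in for whatever the proposal's API would actually return, so treat this as an illustration of the flow, not the interface:]

```javascript
// Sketch of the proposed flow: get "a camera" first, then negotiate
// settings on the resulting track. The mock track stands in for what
// getUserMedia would hand back; applyConstraints() is assumed to succeed
// only if every requested value falls inside the track's capability range.
function makeMockVideoTrack(capabilities) {
  return {
    kind: "video",
    settings: {},
    capabilities: capabilities,
    applyConstraints: function (constraints) {
      var ok = Object.keys(constraints).every(function (key) {
        var cap = this.capabilities[key];
        return cap && constraints[key] >= cap.min && constraints[key] <= cap.max;
      }, this);
      if (ok) {
        for (var key in constraints) this.settings[key] = constraints[key];
      }
      return ok;
    }
  };
}

// 1. "video" is requested from getUserMedia (mocked here)...
var track = makeMockVideoTrack({ width: { min: 320, max: 1280 } });
// 2. ...a track arrives on the local media stream.
// 3. Apply constraints to that track; if rejected, re-apply a
//    different set to the same track.
if (!track.applyConstraints({ width: 1920 })) {
  track.applyConstraints({ width: 1280 });
}
console.log(track.settings.width); // 1280
```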
>> I still don't understand why it is simpler.
>> 
>> If the UA knows what constraints the application desires, it can offer
>> the user only the choice of cameras that satisfy the constraints.
>> 
>> If the UA doesn't know, the dialog becomes:
>> 
>> App: "UA, please give me a camera"
>> UA: "User, Please give permission to access a camera"
>> User: "OK, I give it camera 1"
>> App: "User, I didn't like that camera, please try another"
>> UA: "User, Please give permission to access a camera"
>> User: "OK, I give it camera 2"
>> App: "OK, that camera worked"
>> 
>> Providing a user dialog that adequately explains that the camera picked
>> the first time is unsuitable, and how to tell which camera is which so
>> that the user doesn't pick it again, is left as an exercise for the
>> application designer.
>> 
>> With the ability to apply changed constraints, we can still do:
>> 
>> UA: "User, please give permission to access a camera" (some constraints apply)
>> User: "OK, I give it camera 1"
>> App: "User, does this picture look OK to you?"
>> User: "No, let's try this tweak"
>> App: "OK, tweak applied. Better?"
>> 
>> I *like* applying changed constraints.
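[In code terms, the difference between the two dialogs is roughly whether the UA can pre-filter its permission prompt. A sketch, with an assumed device list and an assumed minWidth-style matching rule:]

```javascript
// Sketch of the UA-side filtering Harald describes: when constraints
// arrive with the request, the permission prompt can list only cameras
// that already satisfy them. Device data and the matching rule are assumed.
var cameras = [
  { label: "front camera", width: { min: 320, max: 640 } },
  { label: "back camera",  width: { min: 320, max: 1920 } }
];

function promptChoices(cameras, constraints) {
  // No constraints known: the user may pick an unsuitable camera
  // and be prompted again (the first dialog above).
  if (!constraints) return cameras;
  // Constraints known: only usable cameras are offered (the second dialog).
  return cameras.filter(function (cam) {
    return constraints.minWidth <= cam.width.max;
  });
}

console.log(promptChoices(cameras, null).length); // 2
console.log(promptChoices(cameras, { minWidth: 1280 }).map(function (c) {
  return c.label;
})); // [ 'back camera' ]
```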
> 
> I've also thought about the initial filtering gained by applying constraints before presenting device options to the user. However, I think we win more with the new approach than we lose.
> 
> Even with constraints applied after device selection, we need some small set of options/constraints to filter the device list presented to the user. We have the obvious "audio" and "video", but we may also need something to hint at camera direction where it's possible ("front"/"user", "back"/"environment"). If this proves to be a huge problem, we could also look into some quality hints here, like "hd", "vga" and so on. It won't solve everything, but I think it will cover a lot of cases.
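[For concreteness, the hint-based filtering Adam describes might look something like the sketch below; the hint names, resolutions, and device data are all assumptions for illustration, not a proposal:]

```javascript
// Sketch of a minimal hint set: map quality hints to resolutions and
// filter an assumed device list by kind and facing. Names and values
// are illustrative only.
var QUALITY = {
  vga: { width: 640,  height: 480 },
  hd:  { width: 1280, height: 720 }
};

var devices = [
  { kind: "video", facing: "user",        maxWidth: 640  },
  { kind: "video", facing: "environment", maxWidth: 1920 }
];

function filterByHints(devices, hints) {
  return devices.filter(function (d) {
    if (hints.kind && d.kind !== hints.kind) return false;
    if (hints.facing && d.facing !== hints.facing) return false;
    if (hints.quality && d.maxWidth < QUALITY[hints.quality].width) return false;
    return true;
  });
}

console.log(filterByHints(devices, { kind: "video", quality: "hd" }).length); // 1
console.log(filterByHints(devices, { kind: "video", facing: "user" }).length); // 1
```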

Let's please not go all the way back to hints.  We started there and ended up in a better place with constraints on getUserMedia.  If you want to know the reasons to go beyond hints, please re-read my post on that many months ago at http://lists.w3.org/Archives/Public/public-media-capture/2012Feb/0041.html.

> 
> /Adam
> 
> 

Received on Thursday, 23 August 2012 14:46:10 UTC