- From: Adam Bergkvist <adam.bergkvist@ericsson.com>
- Date: Thu, 21 Jun 2012 11:40:59 +0200
- To: Stefan Hakansson LK <stefan.lk.hakansson@ericsson.com>
- CC: "public-webrtc@w3.org" <public-webrtc@w3.org>
On 2012-06-19 16:19, Stefan Hakansson LK wrote:
> On 06/19/2012 04:04 PM, Adam Bergkvist wrote:
>> On 2012-06-18 20:57, Cullen Jennings (fluffy) wrote:
>>>
>>> This seems like a good proposal; one comment on a small detail.
>>>
>>> On Jun 15, 2012, at 1:28 PM, Justin Uberti wrote:
>>>
>>>> SessionDescriptionOptions.IncludeAudio = true/false // forces m=audio line to be included
>>>> SessionDescriptionOptions.IncludeVideo = true/false // forces m=video line to be included
>>>> SessionDescriptionOptions.UseVoiceActivityDetection = true/false // includes CN codecs if true
>>>
>>> I think these three should be constraints, not settings, because a given browser may not support any of them.
>>
>> Shouldn't useVoiceActivityDetection be at the addStream() level? That seems a more appropriate place for it, since it's more of a codec setting/constraint than a session-level setting.
>
> It seems more fitting at addStream; however, what about when you add an audio track to a MediaStream that has already been added to the PeerConnection?

That's not well covered in the spec today. I think it would be pretty straightforward to map that to a constraint, since we're only dealing with one track. On the other hand, I'm not very keen on having constraints everywhere either, because of the risk of collisions and incompatibility issues.

This makes me think of the addStream() case and constraints. At the last Media Capture TF meeting we talked about a way to avoid introducing multi-track support in constraints by allowing multiple getUserMedia() calls in the same event loop iteration to share the same UI (and only bug the user once). But the problem is back again if we consider:

pc.addStream(multipleAudioTracksStream, constraints);

/Adam
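For illustration, a rough sketch of how the two alternatives discussed above might look from application code. The constraint names (IncludeAudio, IncludeVideo, UseVoiceActivityDetection) come from Justin's proposal, the { mandatory, optional } constraints structure is assumed from the current drafts, and whether addStream() takes a constraints argument at all is exactly what is being debated in this thread, so none of this is settled API:

  // Alternative 1: session-level constraints handed to createOffer()
  // (exact createOffer() signature per the draft of the day; sketch only).
  var offerConstraints = {
    mandatory: { IncludeAudio: true, IncludeVideo: false },
    optional:  [{ UseVoiceActivityDetection: true }]
  };

  // Alternative 2: a per-stream constraint handed to addStream(), closer to
  // the audio track (and codec selection) it actually affects.
  var streamConstraints = {
    optional: [{ UseVoiceActivityDetection: true }]
  };
  pc.addStream(localAudioStream, streamConstraints);

  // The problem case from above: a stream carrying several audio tracks.
  // Which track(s) would the constraints apply to?
  // pc.addStream(multipleAudioTracksStream, streamConstraints);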
Received on Thursday, 21 June 2012 09:41:30 UTC