Re: Moving forward with SDP control

On 7/18/13 4:48 PM, Harald Alvestrand wrote:
> On 07/18/2013 02:21 PM, Stefan Håkansson LK wrote:
>> I have two "Why+What's" and one "How":
>>
>> 1. Pause/resume sending of a MediaStreamTrack
>> ---------------------------------------------
>> Why: There are many scenarios where it would make sense to be able to
>> ask the browser to pause sending a MediaStreamTrack (while keeping RTCP
>> going). One example is a multiparty service using simulcast: if the
>> large-scale version of your video is not being shown to any of the other
>> participants in the session, it makes sense to pause sending it (but be
>> able to quickly resume if a talker switch means your video will be shown).
>>
>> Currently we have "disable" at the MediaStreamTrack level, but that
>> corresponds to sending blackness/silence.
>>
>> Note also that the very first use-case in [1] talks about the
>> possibility to pause sending of audio and video.
>
> Just to make certain: Do you mean "sender-initiated pause and resume" here?
> We have had discussions in the past where people talked about both
> sender-initiated
> pause and receiver-initiated pause (sometimes called "on hold").

Here I mean sender-initiated. Of course the discussion could be extended 
to receiver-initiated, but let's not do that now.
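
For illustration, the kind of sender-side surface I mean - the method
names here are invented, nothing like this is specified today:

    // Hypothetical: pause() stops RTP for the track but keeps RTCP
    // (and the m-line) alive, so resume() needs no renegotiation.
    var sender = pc.getSenderForTrack(videoTrack);  // invented lookup
    sender.pause();    // stop sending RTP, keep RTCP reports going

    // ... later, a talker switch makes this video visible again:
    sender.resume();   // restart RTP immediately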

>
>>
>> What: This depends on how we signal pause/resume. *If* this is done
>> using sendonly/recvonly/inactive in the SDP, then that is what I think
>> should be made available in an API. (I still think RTCP signaling for
>> this makes more sense.)
>
> Another question is whether we need to signal it at all in version 1.0 -
> when I try it with Chrome, the bandwidth of a disabled audio/video call
> seems to be ~13 Kbits/sec (down from 300+ for a non-disabled call).
>
> Blackness compresses well.

Yes, that is an open question. I think it would be nice to have, but I'm 
not going to push super hard for version 1.0 support.
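
For reference, the "disable" we have today, which gives the
blackness/silence you measured:

    // Today: disabling the track makes the browser encode black
    // frames / silence. That compresses to a trickle (your ~13
    // Kbits/sec observation) but not to zero, and nothing changes
    // in the SDP.
    videoTrack.enabled = false;  // "pause"
    videoTrack.enabled = true;   // "resume"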

>
>>
>>
>> 2. Setting BW for a MediaStreamTrack
>> ------------------------------------
>> Why: There are situations where a suitable start bit-rate is known, or
>> can be guessed. If this knowledge could be used, the perceived end-user
>> quality could be improved: a higher quality would be available from the
>> start, since there is no need to begin at a really low bit-rate.
>>
>> There are also situations where it would be beneficial to be able to
>> influence the min and max bit-rates used.
>>
>> * The app developer may know that below a certain bit-rate the quality
>> is so bad that the browser might as well stop sending; likewise, there
>> may be knowledge of a bit-rate above which the quality does not improve.
>>
>> * There are situations when there is an agreement between the service
>> provider and the connectivity provider about min and max bit-rates.
>>
>> What: Again, this depends on how much BW info is included in the SDP.
>> But my understanding is that there should be some (since the RTCP rates
>> to be used are based on this info, IIUC).
>
> SDP has the b=AS: number, which can be specified at the m-line level or
> at the session level.
> Now that we have one m-line per MediaStreamTrack, it seems logical that
> we can use that to signal the desired bandwidth.
>
> However .... the sender can just send at the desired bandwidth, no need
> for signalling.

Yes, but I have been told that the receiver calculates the RTCP rate it 
uses for receiver reports based on b=AS. If true, we would need some way 
to determine what it should be (and whether the app should be able to 
influence it).
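
For reference, this is the kind of mangling apps do today to set it:
inject a "b=AS:" line (kbit/s, RFC 4566) into the media section. RFC 3550
then gives RTCP 5% of that bandwidth by default, unless b=RR:/b=RS:
(RFC 3556) say otherwise. A crude sketch:

    // Crude string surgery, illustration only. Strictly, b= belongs
    // after the c= line of the media section (RFC 4566), which is
    // what the regexp below assumes is present.
    function setVideoBandwidth(sdp, kbps) {
      return sdp.replace(/(m=video[\s\S]*?c=IN.*\r\n)/,
                         '$1b=AS:' + kbps + '\r\n');
    }

    // Applied to the offer before setLocalDescription():
    offer.sdp = setVideoBandwidth(offer.sdp, 512);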

> The receiver ... will have to use signalling, either in
> SDP or outside SDP.

Do you mean if the receiver wants to, e.g., limit the bit-rate the sender 
sends at? It seems simplest to signal that outside the SDP (given that the 
sender has an API surface through which to apply the received value).
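
For illustration, something like this, with the message carried over
whatever signaling channel the app already has (setMaxBitrate is an
invented name, no such method exists today):

    // Receiver side: ask the far end to cap the video bit-rate,
    // using the app's own signaling (a WebSocket here), not SDP.
    signaling.send(JSON.stringify({ type: 'maxBitrate', kbps: 256 }));

    // Sender side: apply the received cap through some API surface.
    signaling.onmessage = function (event) {
      var msg = JSON.parse(event.data);
      if (msg.type === 'maxBitrate') {
        sender.setMaxBitrate(msg.kbps);  // invented method
      }
    };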

>
> Which one were you thinking of?

Sender side.

>
>>
>> (3. Other stuff, but not signaled
>> --------------------------------
>> There are other parameters controlling how media (per MediaStreamTrack)
>> is handled when sent over the network that do not need any signaling -
>> such as priority, or the type of audio (to determine whether AGC should
>> be used or not), etc., as mentioned in [1]. While not signaled, it could
>> make sense to have the same API surface for this kind of setting.)
>>
>> How:
>> ----
>> I think that what I proposed a long time ago [2] is not completely
>> broken. It mimics the DTMFSender pattern, it re-uses the constraint
>> model we already have in "Media Capture and Streams" [3], and it offers
>> a single surface for controlling the parameters related to sending a
>> MediaStreamTrack over the network. It also does not break any existing
>> applications, since its use is optional. Using certain constraints or
>> methods would lead to a "negotiationneeded" event (since the SDP is
>> affected); others would not (since they need no signaling).
>>
>> But I'm sure we will see better proposals.
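
For concreteness, a rough sketch of the kind of surface meant - every
name below is invented for illustration and not taken from [2]:

    // Hypothetical, mirroring how createDTMFSender() hangs a helper
    // object for one track off the PeerConnection.
    var sender = pc.createMediaStreamTrackSender(videoTrack);  // invented

    // Constraint-style settings, as in "Media Capture and Streams" [3].
    // Some would touch the SDP and fire "negotiationneeded"...
    sender.applyConstraints({ maxBitRate: 512 });    // kbit/s, invented

    // ...others would need no signaling at all (cf. point 3 above):
    sender.applyConstraints({ priority: 'high' });   // invented
    sender.applyConstraints({ audioType: 'music' }); // e.g. skip AGC, invented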
>>
>> Stefan
>>
>> [1]
>> http://datatracker.ietf.org/doc/draft-ietf-rtcweb-use-cases-and-requirements/?include_text=1
>> [2]
>> http://lists.w3.org/Archives/Public/public-webrtc/2013Jan/att-0005/PrioAPI.pdf
>> [3] http://dev.w3.org/2011/webrtc/editor/getusermedia.html
>>
>>
>>
>> On 7/16/13 2:19 PM, Harald Alvestrand wrote:
>>> Hi all,
>>>
>>> Recently there has been a lot of discussion (primarily in the
>>> IETF/rtcweb space, though this topic really belongs here) about the
>>> desire to meet most use-cases without having to parse, modify or
>>> construct SDP.
>>> This was discussed already as part of the discussion on whether
>>> PeerConnection and SDP should be maintained or not last year [1].
>>>
>>> In the meantime, a number of API extensions have been created, notably
>>> the constraints setting and modification interfaces, which seem likely
>>> to be useful in achieving the goals people seek to achieve by SDP mangling.
>>>
>>> However, this work has not progressed very quickly or very comprehensively.
>>> It may be time for a more structured approach.
>>>
>>> We think it makes sense to divide the information needed into subcategories:
>>>
>>> * Define the use cases for which SDP mangling is currently thought to be
>>> required - the "why" of the SDP tweaking.
>>>
>>> * Propose what parameters one should be able to control/influence
>>> without having to do SDP mangling. A proposal should describe what the
>>> current API specification produces, what the needed mangling is, and
>>> what the desired effect of the mangling is - the "what" of the SDP tweaking.
>>>
>>> * Propose suitable API surfaces to control/influence how media is
>>> encoded and transported over the network - the "how" of the SDP
>>> tweaking. We think that a requirement should be that working
>>> applications do not break when adding this surface - if it is not used
>>> things should work as today.
>>>
>>> Someone may make a proposal encompassing all three pieces of information
>>> (why, what and how) - or just the first one or two - or a proposal for a
>>> later piece that builds on others' proposals (a "how" building on
>>> someone else's "why" and "what"). But we would not want to consider a
>>> "how" without a "what", or a "what" without a "why" - it becomes
>>> impossible to figure out whether the original requirement is satisfied
>>> unless we build all three layers of the proposals.
>>>
>>> Does this sound like a way we could move forward?
>>>
>>> Harald for the chairs.
>>>
>>> [1] http://lists.w3.org/Archives/Public/public-webrtc/2012Sep/0098.html
>>>
>>>
>
>

