Re: A Big Proposal: A way to control quality/resolution/framerate/simulcast/layering with RtpSender

So instead of

{
  bias: "framerate" // "quality"
}

You'd want:

{
  spatialTemporalBias: 0.0  // 0.0 == "quality", 1.0 == "framerate"
}

?


That was one of the variants of the design that we had.  It might be a good
solution.  The biggest reason we changed it to just "bias" is that I
couldn't think of a use case where you'd care to have it be anything other
than 0, 1, or unset.  That, and I didn't like the name :).
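
To make that concrete: with the use cases I can think of, a continuous
slider collapses to three points anyway.  Roughly (illustrative only):

function biasToSlider(bias) {
  // The three values that seem to matter in practice.
  if (bias === "quality")   return 0.0;
  if (bias === "framerate") return 1.0;
  return 0.5;  // unset: let the encoder balance the two
}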


On Thu, Feb 20, 2014 at 5:18 AM, Chris Wendt <chris-w3c@chriswendt.net> wrote:

> I also want to reiterate the idea that targetQuality represents a slider
> between 0.0 and 1.0, where 0.0 means trying to allocate as many bits toward
> picture quality as possible, and 1.0 means trying to push out as many frames
> per second as possible (with the input framerate as the max), no matter what
> happens to picture quality.  0.5 would be an even tradeoff between quality
> and framerate.  For this, I think you no longer need a bias, and maybe this
> is similar to what Martin was suggesting before.
>
> Also, bringing audio codecs into the mix, the 0-1 quality scale might map
> to preferring a higher sampling frequency versus better encoding quality at
> a lower sampling frequency.
>
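
To make that slider concrete, here's roughly how an implementation might
split a bit budget between frame rate and per-frame quality (purely
illustrative, not from any spec):

// Interpret targetQuality as a framerate-vs-quality tradeoff:
// 1.0 => run at the capture frame rate and let per-frame quality float;
// 0.0 => hold per-frame quality and let the frame rate float instead.
function applyTargetQuality(slider, captureFps, budgetBps) {
  var minFps = 1;
  var fps = minFps + slider * (captureFps - minFps);  // frames we try to keep
  var bitsPerFrame = budgetBps / fps;                 // what's left for picture quality
  return { framesPerSecond: fps, bitsPerFrame: bitsPerFrame };
}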
>
>
> On Feb 20, 2014, at 8:03 AM, Chris Wendt <chris-w3c@chriswendt.net> wrote:
>
> Could also have something like target bitrate and target quality, with
> optional min and max bitrate and quality that trigger a callback to the
> application layer.
>
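
A rough sketch of what that parameter set might look like (every name here
is made up, just to show the shape):

var videoSendParams = {
  targetBitrate: 1500000,  // bps the sender aims for
  minBitrate:     300000,  // optional floor; tell the app if we can't stay above it
  maxBitrate:    2500000,  // optional ceiling, even if more bandwidth is available
  targetQuality: 0.5,      // the 0.0-1.0 quality/framerate slider from above
  minQuality:    0.2       // optional floor; tell the app if we fall below it
};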
>
> On Feb 20, 2014, at 2:26 AM, Justin Uberti <juberti@google.com> wrote:
>
> For min, you can shut off the stream when there aren't enough bits, but
> what should you do when the stream goes over the application-defined max?
>
> I am very open to having the app do much of this stuff dynamically, since
> it means the static configuration controls can be simpler, but I'm not sure
> we can toss maxBitrate/maxQuality.
>
>
> On Wed, Feb 19, 2014 at 8:32 PM, Chris Wendt <chris-w3c@chriswendt.net> wrote:
>
>>
>> Maybe I’m reading more into this, or making it more complex than
>> necessary, or maybe I’m stating the obvious, but I think this makes sense.
>>
>> My original working assumption was that you set up a stream with a set of
>> parameters, let the implementation map those parameters to the video
>> encoder’s rate controls, and it then runs happily along on its own, doing
>> adaptive bitrate things within the bounds of the given parameters.
>>
>> Taking things to the next level, we can imagine having, and likely
>> requiring, application-level control, particularly for SVC and simulcast,
>> or in cases where you have multiple RTPSenders.  There we need to control
>> across multiple, potentially unrelated and independent media streams,
>> based on metrics feedback.  For example, if quality is suffering under a
>> given bitrate constraint, the application might switch to half resolution,
>> stop sending the 3rd enhancement layer, or reduce the number of RTPSenders.
>>
>> So what would be the feedback mechanism here?  Is it up to the
>> application developer to monitor metrics themselves, if that is even
>> possible or practical?
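
About the only thing an app can do on its own today is poll getStats() and
compute rates itself.  A very rough sketch, assuming a promise-returning
getStats() and outbound RTP stats shaped roughly like the current drafts
(exact stat names are still in flux):

function watchOutgoingVideoBitrate(pc, minBps, onTooLow) {
  var lastBytes = null, lastTime = null;
  setInterval(function () {
    pc.getStats().then(function (report) {
      report.forEach(function (stat) {
        if (stat.type === "outbound-rtp" && stat.kind === "video") {
          if (lastBytes !== null) {
            var bps = 8 * (stat.bytesSent - lastBytes) /
                      ((stat.timestamp - lastTime) / 1000);
            if (bps < minBps) onTooLow(bps);  // the app decides how to react
          }
          lastBytes = stat.bytesSent;
          lastTime = stat.timestamp;
        }
      });
    });
  }, 1000);
}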
>>
>> I would propose we think about providing callback hooks based on things
>> like min/max bitrate and min/max quality thresholds.  And maybe
>> specifically define the basic set.
>>
>> Again, I think the current discussion around the simple set of parameters
>> satisfies the static-config case, but when we talk about max- or min-type
>> parameters, I think there needs to be some feedback to notify the
>> application when the min/max point is reached so it can act appropriately.
>>
>> Thoughts?
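
To make the callback-hook idea concrete, one hypothetical shape (every name
below is made up; nothing like this exists today):

// Hypothetical threshold hooks on the sender, purely to illustrate the shape.
sender.minBitrate = 300000;
sender.maxBitrate = 2000000;

sender.onbitratebelowmin = function (e) {
  // e.g. drop an enhancement layer, halve the resolution, or stop a
  // lower-priority RtpSender; e.currentBitrate says how far under we are.
};

sender.onbitrateabovemax = function (e) {
  // Shouldn't fire if the implementation enforces the cap, but the app
  // could tighten its own targets here.
};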
>>
>>
>>
>> On Feb 19, 2014, at 6:52 PM, Martin Thomson <martin.thomson@gmail.com>
>> wrote:
>>
>> > On 19 February 2014 15:44, Peter Thatcher <pthatcher@google.com> wrote:
>> >> I think we still need scale for simulcast and maxBitrate for cases where
>> >> you want to constrain bandwidth even when it's available.  And priority
>> >> vs. resources seems pretty similar.
>> >
>> >
>> > I'd like to take simulcast out actually.  I think that aside from some
>> > bindings necessary to get playback right, you can achieve simulcast
>> > transmission (what we are talking about here) by having multiple
>> > tracks with different resolution constraints.  I don't think that
>> > means fewer options sadly.
>> >
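
A rough sketch of that multiple-tracks approach, assuming track.clone() and
applyConstraints() end up supporting per-track downscaling (promise-flavored
getUserMedia used for brevity; pc is an existing RTCPeerConnection):

navigator.mediaDevices.getUserMedia({ video: { width: 1280, height: 720 } })
  .then(function (stream) {
    var hi = stream.getVideoTracks()[0];
    var lo = hi.clone();  // clones can be constrained independently
    return lo.applyConstraints({ width: 320, height: 180 }).then(function () {
      // One RtpSender per track stands in for one simulcast layer.
      pc.addTrack(hi, stream);
      pc.addTrack(lo, stream);
    });
  });
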
>> > Maybe we can also discuss minimums.  I don't think that it's
>> > worthwhile having minimum values initially, and maybe not ever, though
>> > I'm open to the idea.  And I think that it's a universally applicable
>> > thing across all axes.  I can see cases for minimums on all three:
>> > frame rate (sign language), resolution (1x1, my image recognition
>> > can't deal), and quality (my eyes, ow, my eyes).
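
If minimums do happen, they'd presumably just be more optional members
alongside the maximums (names below are made up):

var videoMinimums = {
  minFrameRate: 15,    // e.g. sign language needs motion to survive
  minWidth: 320,       // below this, image recognition has nothing to work with
  minHeight: 240,
  minQuality: 0.3      // below this, don't bother sending at all
};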
>>
>>
>>
>
>
>

Received on Thursday, 20 February 2014 16:16:21 UTC