Re: Updated Stats proposal - May 13

On Mon, May 19, 2014 at 7:26 PM, Harald Alvestrand <> wrote:
> On 05/19/2014 11:27 AM, Varun Singh wrote:
>> Hi Harald,
>> Apologies for the late response, a couple of comments inline.
>> On Thu, May 15, 2014 at 12:31 PM, Harald Alvestrand
>> <> wrote:
>>> On 05/15/2014 11:17 AM, Varun Singh wrote:
>>>> Hi Harald, Jan-Ivar,
>>>> On Thu, May 15, 2014 at 11:29 AM, Harald Alvestrand
>>>> <> wrote:
>>>>> On 05/15/2014 05:57 AM, Jan-Ivar Bruaroey wrote:
>>>> [snip]
>>>>> Here are a couple of mozilla team additions for consideration:
>>>>> dictionary RTCInboundRTPStreamStats : RTCRTPStreamStats {
>>>>>      ...
>>>>>      long avSyncDelay; // audio lag in ms (negative = video lag in ms)
>>>>>      unsigned long jitterBufferDelay; // in ms
>>>>>      unsigned long roundTripTime;     // in ms
>>>>> };
>>>>> We discussed a bit about delay internally too - there's half a dozen
>>>>> goog*
>>>>> related delay variables - but decided not to propose anything, because
>>>>> these
>>>>> seem to be quite implementation dependent, and not really thought through
>>>>> -
>>>>> we'd like to look at the delay of the whole pipeline and have a
>>>>> monitoring
>>>>> application figuring out exactly where the time went, and our current
>>>>> variables don't really accomplish that.
>>>>> Jitter buffer delay is also one of those pesky "varies with time"
>>>>> variables
>>>>> where it's not clear to me if we want instantaneous snapshots,
>>>>> time-smoothed
>>>>> values or max/min (or all three).
>>>>> By the time I got that far, my head hurt, so I didn't propose anything.
>>>> Yes, jitter buffer delay is pesky, but the instantaneous snapshot value
>>>> would not be sufficient; the monitoring application would additionally
>>>> need the nominal jitterBufferDelay to interpret the instantaneous value
>>>> (and both might vary with time). Lastly, to avoid conflating the
>>>> variables, max/min could be maintained by the JS.
>>>> The definitions for the dejitter buffer are in RFC 7005.
>>> If getStats is called every 10 seconds or so, max/min values maintained by
>>> JS would be different from max/min values maintained by the browser. So
>>> having browser-managed max/min has value.
>> Ok, I see the point, but this assumes that the measurement reported by
>> the browser is sampled over a different interval, which may be true for
>> the dejitter buffer size metric.
> The normal way to get min/max values for a jitter buffer is to update
> them every time you change the jitter buffer size - it's just a machine
> instruction or two.
> Sampling the size every 10 seconds will miss a lot of highs and lows.

Do the highs and lows correspond to the max and min? In RFC 7005,
the highs and lows are specific to the RTCP reporting interval, while
the max corresponds to the current maximum size of the dejitter buffer,
which is implementation-specific and may change during the lifetime
of the call.

>>> I note that the RFC 7005 definitions refer to "this reporting interval",
>>> implying that max/min gets reset every time a report is produced. This is
>> Not necessarily, the reported value is the current value at the time
>> generating the report.
>> The dejitter metric is a sampled value, which is calculated when each
>> packet arrives.
>> The definition of a sampled metric is in RFC6792,
>> copy pasting the relevant part below:
>>  Sampled metrics
>>       Metrics measured at a particular time instant and sampled from the
>>       values of a continuously measured or calculated metric within a
>>       reporting interval (generally, the value of some measurement as
>>       taken at the end of the reporting interval).  An example is the
>>       inter-arrival jitter reported in RTCP SR and RR packets, which is
>>       continually updated as each RTP data packet arrives but is only
>>       reported based on a snapshot of the value that is sampled at the
>>       instant the reporting interval ends.
> The reported value is the current value at the time generating the report.
> But what are the max and min values?
> For instance, if one generates two RTCP reports, and the buffer sizes
> occurring in the interval are:
> 1 2 4 9 4 2 -> 2 is reported as "current", 9 is reported as "max"
Using the terminology from RFC 7005:
the current is 2,
the high is 9,
the low is 1, and
the max is 9.
This assumes none of these frames were discarded; if, say, the 9 was
discarded, then the max would be 4.

> 4 2 1 7 2 1 -> 1 is reported as "current", 7 is reported as "max".
the current is 1,
the high is 7,
the low is 1, and
the max is 9 (assuming the 9 was not discarded earlier and the buffer has
not been adjusted since; this value depends on the underlying implementation
of the jitter buffer and how quickly or often it adjusts the buffer size).

> That's my understanding. Is it wrong?
> The way we currently do stats, if we call getStats at the end of both
> intervals, we would get "9" both times - because getStats doesn't reset
> any values - and I think it should not.

The highs and lows are interval-specific; the max is the current maximum.
It may well happen that the underlying implementation never reduces the
buffer size once it has expanded it.

To reiterate, I think getStats should return the current maximum size
configured by the jitter buffer. If it is 9 then it should report 9.

I suppose expressing the high and low watermarks as defined by RFC 7005
is potentially an issue with the way getStats() is implemented.
I could live with nominal, current, and max.



Received on Tuesday, 20 May 2014 11:46:55 UTC