
Re: [webrtc-stats] RTCMediaStreamTrackStats.audioLevel clarification

From: henbos via GitHub <sysbot+gh@w3.org>
Date: Wed, 10 May 2017 12:41:23 +0000
To: public-webrtc-logs@w3.org
Message-ID: <issue_comment.created-300469657-1494420081-sysbot+gh@w3.org>
The referenced code takes `length` samples in the range 0..MAX_VALUE (127 for 1 byte per sample), normalizes them to 0..1, and computes the RMS (root mean square), also in 0..1. It then calculates db = 20 * log10(rms), clamps that to the range [-127, 0], and rounds it to an integer.
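As a rough sketch of the computation described above (not the actual WebRTC implementation; the function name, the silence fallback, and the `max_value` default are assumptions for illustration):

```python
import math

def audio_level_db(samples, max_value=127):
    """Hypothetical sketch: normalize samples to 0..1, take the RMS,
    convert to dB via 20 * log10(rms), clamp to [-127, 0], and round."""
    if not samples:
        return -127  # assumption: no samples maps to the clamp floor
    normalized = [abs(s) / max_value for s in samples]
    rms = math.sqrt(sum(x * x for x in normalized) / len(normalized))
    if rms == 0:
        return -127  # avoid log10(0); silence maps to the clamp floor
    db = 20 * math.log10(rms)
    return round(max(-127.0, min(0.0, db)))
```

With this sketch, a full-scale constant signal gives 0 dB and silence gives -127, matching the clamped range described.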

So... our audioLevel is the RMS value? I get not wanting to use the raw sample value, 0..MAX_VALUE, so as to be agnostic about the number of bytes per sample, but why do we pick the RMS value for audioLevel? Why not the dB value? We don't have to round it to an integer if we're worried about loss of precision.

GitHub Notification of comment by henbos
Please view or discuss this issue at https://github.com/w3c/webrtc-stats/issues/193#issuecomment-300469657 using your GitHub account
Received on Wednesday, 10 May 2017 12:41:29 UTC

This archive was generated by hypermail 2.4.0 : Saturday, 6 May 2023 21:19:40 UTC