Re: [webrtc-stats] Add stat for inputAudioLevel, before the audio filter

```
1. Source: Camera resolution
->
2. Constraints: E.g. downscaled video
->
Entering the WebRTC pipeline: the track is attached to a sender.
3. Sender knows the input resolution (the video after constraint-based downscaling).
->
The encoder is not exposed, but the sender's encoder encodes the video.
4. Sender knows output resolution (encoder might decide to downscale even more).
->
Sender creates RTP packets
->
IceTransport
->
Receiver gets RTP packets
->
Jitter buffer, concealment, and whatever else happens to prepare the data for the decoder.
->
The decoder is not exposed, but the receiver's decoder decodes the video.
5. Receiver knows the resolution of the final track.
->
6. Possible post-processing. I don't know if this happens, but it's conceivable that the WebRTC implementation decides, for audio, "this is just silence" and mutes it; or this could have been part of the decoding step.
Exiting the WebRTC pipeline.
->
7. The application might do additional processing through canvas etc., but at this point we have left "WebRTC land".
->
Render on screen.
```
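For reference, steps 1 and 2 happen before the sender ever sees the track: the application captures at some source resolution and may constrain the track down. A minimal sketch of that, using only the standard getUserMedia()/applyConstraints() APIs (the 640x360 target is an arbitrary example, not anything from this issue):

```
// Sketch of steps 1-2: capture, then constrain the track to a smaller
// resolution before it is attached to an RTCRtpSender (step 3).
async function getConstrainedTrack(): Promise<MediaStreamTrack> {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const [track] = stream.getVideoTracks();
  // Constraint-based downscaling; the sender's "input resolution" at step 3
  // is whatever this produces, not the raw camera resolution from step 1.
  await track.applyConstraints({ width: 640, height: 360 });
  return track;
}
```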

Resolution may change at any of steps 1-7, but we can only provide getStats() for what is in the "WebRTC pipeline", i.e. steps 3-6.
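To make that concrete, here is a hedged sketch (illustrative only, not part of the issue) of reading the resolutions already observable at steps 3/4 and 5 via sender- and receiver-side getStats(). The frameWidth/frameHeight members have lived on the "track" stats and on the outbound-rtp/inbound-rtp stats at different points, so the sketch checks both; exact member placement varies by implementation:

```
// Poll sender- and receiver-side stats to log the resolutions that are
// visible inside the "WebRTC pipeline" (roughly steps 3-6 above).
async function logPipelineResolutions(
  sender: RTCRtpSender,
  receiver: RTCRtpReceiver
): Promise<void> {
  // Sender side (steps 3/4): what is actually being encoded and sent.
  const senderReport = await sender.getStats();
  senderReport.forEach((stats: any) => {
    if ((stats.type === 'outbound-rtp' || stats.type === 'track') &&
        stats.frameWidth !== undefined) {
      console.log(`sent: ${stats.frameWidth}x${stats.frameHeight}`);
    }
  });

  // Receiver side (step 5): what the decoder produced for the remote track.
  const receiverReport = await receiver.getStats();
  receiverReport.forEach((stats: any) => {
    if ((stats.type === 'inbound-rtp' || stats.type === 'track') &&
        stats.frameWidth !== undefined) {
      console.log(`received: ${stats.frameWidth}x${stats.frameHeight}`);
    }
  });
}
```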

Our current stats are for steps 3 and 6 (or, if we don't do anything at step 6, then the stats are for step 5).
We could choose to expose more of these, but we cannot expose things that happen outside of the "WebRTC pipeline" without getStats() or an equivalent on non-WebRTC primitives or on getUserMedia objects like MediaStreamTrack (note that the WebRTC getStats() "track" stats are not actually MediaStreamTrack stats, but are based on sender/receiver stats).
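To illustrate that last point (a sketch, not the spec text): passing a track to getStats() only selects the peer connection's sender/receiver-based stats objects associated with that track, so nothing that happens to the MediaStreamTrack outside the pipeline (getUserMedia processing, canvas post-processing, rendering) shows up in the report:

```
// "Track-filtered" getStats() still only reflects the WebRTC pipeline:
// pc.getStats(track) selects the stats associated with the RTCRtpSender /
// RTCRtpReceiver attached to that track, not MediaStreamTrack-level data,
// so steps 1-2 and 7 remain invisible here.
async function logTrackStats(
  pc: RTCPeerConnection,
  track: MediaStreamTrack
): Promise<void> {
  const report = await pc.getStats(track);
  report.forEach((stats: any) => {
    console.log(stats.type, stats.id, stats);
  });
}
```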

Am I missing anything?

-- 
GitHub Notification of comment by henbos
Please view or discuss this issue at https://github.com/w3c/webrtc-stats/issues/271#issuecomment-399038067 using your GitHub account

Received on Thursday, 21 June 2018 09:29:38 UTC