- From: Harald Alvestrand <harald@alvestrand.no>
- Date: Fri, 15 May 2015 12:33:38 +0200
- To: public-media-capture@w3.org
Interesting - I'd completely forgotten about sampleRate and sampleSize.
sampleSize is in bits and defined only on linear-sampling devices, so
it's likely to be 8, 16 or 24.
sampleRate is usually 8000, 16000, 44100 or 48000 (192000 at the extreme).
So both of these refer to a single audio sample; latency and sampleCount
would be completely equivalent:
latency = sampleCount / sampleRate
sampleCount = latency * sampleRate
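As a quick sketch of that equivalence (helper names are mine, not part of any spec):

```javascript
// latency is in seconds, sampleRate in Hz, sampleCount in samples.
function latencyFromSampleCount(sampleCount, sampleRate) {
  return sampleCount / sampleRate;
}

function sampleCountFromLatency(latency, sampleRate) {
  // Round, since a buffer holds a whole number of samples.
  return Math.round(latency * sampleRate);
}

// A 128-sample buffer at 48000 Hz is about 2.7 ms of latency;
// a 10 ms target at 48000 Hz needs a 480-sample buffer.
```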
But if the user specifies sampleCount without specifying sampleRate, he
might get a completely different latency from what he wanted; it seems
unlikely that the user's tolerance for latency would increase with worse
sound quality.
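Which is why, if sampleCount were adopted, I'd expect it to be pinned together with sampleRate. A hedged sketch (sampleCount is the hypothetical constraint under discussion here, not a standard MediaTrackConstraints property):

```javascript
// Fixing the rate first makes the latency implied by sampleCount
// predictable: latency = sampleCount / sampleRate.
const constraints = {
  audio: {
    sampleRate: { exact: 48000 },  // pin the rate...
    sampleCount: { max: 960 }      // ...so 960 samples = at most 20 ms
  }
};
// navigator.mediaDevices.getUserMedia(constraints) would then bound
// the capture latency regardless of the device's default rate.
```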
What does the user care about?
On 14 May 2015 03:41, Jonathan Raoult wrote:
> Sorry guys I just jumped in this thread.
>
> I'm very interested in this discussion, especially on the low-latency
> side. I recently hit the "optimum buffer size for everyone" wall with
> getUserMedia, and I would need something to adjust latency on capable
> platforms at least.
>
> What I noticed in music creation software (and other audio APIs as
> well) is the use of a frame count as the input for adjusting latency.
> The result in ms is then calculated, but only for display purposes.
> This would fit well with sampleRate and sampleSize from
> MediaTrackSettings, which are already low-level enough for the user to
> infer the latency in ms. It also has the advantage of being precise:
> there is no rounding or calculation for the implementation to make.
>
> So to come back to the example, something like this is another solution:
>
> { sampleCount: { max: 20 } }
>
> Jonathan
Received on Friday, 15 May 2015 10:34:09 UTC