Re: Specifying the audio buffer size

FYI, having read through this thread and having been assigned the task of
finding a resolution, I put in as the proposed resolution for LC-3022 (
https://www.w3.org/2006/02/lc-comments-tracker/47318/WD-mediacapture-streams-20150414/3022)
the following:


Add the following to the spec:

enum SupportedAudioConstraints {
  "latency"
};

dictionary MediaTrackConstraintSet {
  ConstrainDouble latency;  // seconds
};


I think this represents the consensus on this thread.
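
For illustration, an application could then request low capture latency along
these lines (just a sketch; the numbers are made up, and how closely a UA can
honor the constraint is implementation-dependent):

  navigator.mediaDevices.getUserMedia({
    audio: {
      latency: { ideal: 0.01, max: 0.05 }  // seconds, per the proposal above
    }
  }).then(function (stream) {
    // The UA picks the device/configuration whose latency best
    // satisfies the constraint.
  });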

On Fri, May 15, 2015 at 6:12 AM, Joe Berkovitz <joe@noteflight.com> wrote:

> A few points with respect to the recent additions to this thread:
>
> 1. Use seconds. Otherwise constraint queries will return results that are
> not comparable in terms of user experience, due to sample rate differences
> between devices. I already pointed out that sample rates in other parts of
> an audio application may not be equal to the native sample rate of a device.
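>
> To make that concrete (illustrative numbers): the same buffer size maps to
> very different latencies depending on the device's rate:
>
>   128 / 44100 = ~0.0029 s (about 2.9 ms)
>   128 / 8000  = 0.016 s (16 ms)
>
> A value in seconds is comparable across devices; a frame count is not.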
>
> 2. I agree that something like minimum or "typical" latency is of more
> interest than a maximum or guarantee. The idea is for applications to
> understand that a particular device may have high latency that is imposed
> by the stack, not that it might have some unknown potential for high
> latency.
>
> 3. Some devices may impose latency that has nothing to do with buffering
> in the UA stack or its host OS, but is external in nature. For instance,
> remote-cast audio output devices like Chromecast and AppleTV have a very
> long lag. I don't see that it makes sense to think of such latency in terms
> of sample frames. That isn't like the latency values one sees in, say, ASIO
> driver configuration (which are typically expressed in terms of buffer
> size, but only because they reflect a world entirely internal to the
> driver).
>
> ...Joe
>
>
> On Fri, May 15, 2015 at 6:33 AM, Harald Alvestrand <harald@alvestrand.no>
> wrote:
>
>> Interesting - I'd completely forgotten about sampleRate and sampleSize.
>>
>> sampleSize is in bits and defined only on linear-sampling devices, so
>> it's likely to be 8, 16 or 24.
>>
>> sampleRate is usually 8000, 16000, 44100 or 48000 (192000 at the extreme).
>>
>> So both these refer to a single audio sample; latency and sampleCount
>> would be completely equivalent:
>>
>>   latency = sampleCount / sampleRate
>>   sampleCount = latency * sampleRate
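>>
>> (For example, at 48000 Hz a 480-sample buffer is 480 / 48000 = 10 ms of
>> latency, while at 8000 Hz the same 480 samples come to 60 ms.)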
>>
>> But if the user specifies sampleCount without specifying sampleRate, he
>> might get a completely different latency from what he wanted; it seems
>> unlikely that the user's tolerance for latency would increase with worse
>> sound quality.
>>
>> What does the user care about?
>>
>> On 14 May 2015 at 03:41, Jonathan Raoult wrote:
>> > Sorry guys, I just jumped into this thread.
>> >
>> > I'm very interested in this discussion, especially on the low-latency
>> > side. I recently hit the "optimum buffer size for everyone" wall with
>> > getUserMedia, and I need some way to adjust latency, at least on capable
>> > platforms.
>> >
>> > What I noticed in music creation software (and in other audio APIs as
>> > well) is the use of a frame count as the input for adjusting latency.
>> > The result in ms is then calculated, but only for display purposes. This
>> > would fit well with sampleRate and sampleSize from MediaTrackSettings,
>> > which are already low-level enough for the user to infer the latency in
>> > ms. It also has the advantage of being precise: there is no rounding or
>> > calculation for the implementation to make.
>> >
>> > So, to come back to the example, something like this is another solution:
>> >
>> > { sampleCount: { max: 20 } }
>> >
>> > Jonathan
>> >
>> >
>> >
>>
>>
>>
>
>
> --
> .            .       .    .  . ...Joe
>
> *Joe Berkovitz*
> President
>
> *Noteflight LLC*
> 49R Day Street / Somerville, MA 02144 / USA
> phone: +1 978 314 6271
> www.noteflight.com
> "Your music, everywhere"
>

Received on Friday, 22 May 2015 21:03:35 UTC