Re: Resolution to republish MSP as a note

On Wed, Aug 8, 2012 at 11:21 AM, Jussi Kalliokoski <
jussi.kalliokoski@gmail.com> wrote:

> On Wed, Aug 8, 2012 at 8:21 PM, Chris Rogers <crogers@google.com> wrote:
>
>>
>>
>> On Wed, Aug 8, 2012 at 7:35 AM, Jussi Kalliokoski <
>> jussi.kalliokoski@gmail.com> wrote:
>>
>>> On Wed, Aug 8, 2012 at 4:25 PM, Stéphane Letz <letz@grame.fr> wrote:
>>>
>>>>  >
>>>> > I'm probably badly misinformed, but the value of high priority
>>>> threads seems a bit vague to me, since I'm not sure about what's the OS
>>>> support level for high-priority threads, I think for example in Linux you
>>>> still have to compile your own kernel to get real high priority thread
>>>> support.
>>>>
>>>> No. You would possibly need a special kernel for very *low latency*
>>>> thread scheduling, but not for RT scheduling and thread priority
>>>> management. A regular Linux kernel is now quite usable, assuming the audio
>>>> thread can take RT scheduling capability, which is granted using RealtimeKit
>>>> (as PulseAudio does, AFAICS) or by correctly setting up a special "realtime"
>>>> group with appropriate limits (see here for JACK:
>>>> http://jackaudio.org/realtime_vs_realtime_kernel and
>>>> http://jackaudio.org/linux_rt_config).
>>>>
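For reference, here is a minimal sketch of what "taking RT scheduling
capability" looks like at the pthread level on Linux, assuming the process has
already been granted the rtprio privilege via RealtimeKit or a "realtime"
group as described in the JACK links above; the priority value is an arbitrary
illustration, not a recommendation:

    /* Sketch: request SCHED_FIFO for the calling thread on Linux.
     * Succeeds only if the process has rtprio privileges, e.g. granted
     * through RealtimeKit or an rtprio limit for a "realtime" group. */
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>
    #include <string.h>

    static int make_thread_realtime(void)
    {
        struct sched_param param;
        memset(&param, 0, sizeof(param));
        param.sched_priority = 70;  /* arbitrary RT priority, for illustration */

        int err = pthread_setschedparam(pthread_self(), SCHED_FIFO, &param);
        if (err != 0) {
            fprintf(stderr, "SCHED_FIFO refused: %s\n", strerror(err));
            return -1;  /* fall back to normal scheduling; audio may glitch */
        }
        return 0;
    }

If the privilege hasn't been granted, pthread_setschedparam simply fails with
EPERM and the thread keeps running at its normal priority.
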
>>>
>>> Thought I'd be misinformed! Thanks for the clarification, and sorry for
>>> the mixup.
>>>
>>>
>>>> On OSX, real-time threads are actually "time constraint" threads, which
>>>> preempt any other non-RT thread and are "interleaved" with other RT
>>>> threads. The CoreAudio callback will run in a time constraint thread
>>>> started and configured by the CoreAudio framework for the audio
>>>> application.
>>>>
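For comparison, a rough sketch of how a "time constraint" thread is set up on
OSX through the Mach thread policy API; CoreAudio configures its own I/O
thread like this, so an application normally never has to, and the numbers
below are placeholders rather than recommended values:

    /* Sketch: promote the current thread to a Mach "time constraint"
     * (real-time) thread on OSX. The period/computation/constraint values
     * are placeholders; real code would derive them from the audio buffer
     * duration, converted to Mach absolute-time units. */
    #include <stdint.h>
    #include <mach/mach.h>
    #include <mach/mach_time.h>
    #include <mach/thread_policy.h>

    static kern_return_t make_time_constraint_thread(void)
    {
        mach_timebase_info_data_t tb;
        mach_timebase_info(&tb);  /* ns = abs_time * numer / denom */

        /* Example: ~2.9 ms period (128 frames @ 44.1 kHz) in abs units. */
        uint64_t period = 2900000ULL * tb.denom / tb.numer;

        thread_time_constraint_policy_data_t policy;
        policy.period      = (uint32_t)period;        /* how often we wake    */
        policy.computation = (uint32_t)(period / 4);  /* CPU time we expect   */
        policy.constraint  = (uint32_t)(period / 2);  /* deadline after start */
        policy.preemptible = 1;

        return thread_policy_set(mach_thread_self(),
                                 THREAD_TIME_CONSTRAINT_POLICY,
                                 (thread_policy_t)&policy,
                                 THREAD_TIME_CONSTRAINT_POLICY_COUNT);
    }
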
>>>>
>>>> > And using high-priority threads might not always even be desirable;
>>>> for example, on low-end devices it'd be horrible if the UI became completely
>>>> unusable because an audio thread was hogging the CPU.
>>>>
>>>> But if not RT, then the audio will "glitch"... Do we want reliable
>>>> audio? or not?
>>>>
>>>
>>> I think you mean to ask "do we want audio in RT threads", because even
>>> that doesn't always guarantee reliable audio, nor does not having it rule
>>> out reliable audio. The answer to that question would be sometimes yes,
>>> sometimes no. Glitchless audio isn't worth much if the application becomes
>>> otherwise completely unusable. Are high-priority audio threads a feature
>>> that justifies the complexity that comes with the native nodes?
>>> Especially given that we still have the possibility of RT thread workers
>>> open.
>>>
>>> I'm pretty sure that, for example, my Android phone doesn't run its audio
>>> in a real-time thread; even network connections can sometimes glitch the
>>> audio. But it's never bothered me. I'd actually rather have the UI in an RT
>>> thread, like iOS does, and have that always take precedence over the audio
>>> and anything else for that matter. I'm pretty sure I'm not the only one.
>>>
>>
>> But many people have asked for improvements to Android audio
>> performance and do not appreciate high latency and glitches.  I know that
>> iOS *does* use high-priority threads and it works great for them, so your
>> argument seems rather weak.  Believe it or not, I think there will
>> actually be many people who are interested in processing live audio in
>> real time in web applications, or in playing synthesizers using the MIDI API.
>> Just because we've had terrible performance on the web with Flash, etc.,
>> doesn't mean we have to stay in the stone age, lagging so far behind the
>> abilities of desktop audio applications.
>>
>
> It wasn't really an argument, it was just my personal opinion. And I'm not
> suggesting we have bad performance; I'm suggesting a different approach to
> tackling performance issues. I agree that RT threads offer benefits in some
> cases, but in other cases they don't, and it should be up to the developer to
> decide what takes priority in his/her application. Hence I'd rather we try
> to get RT thread support for workers, so that one can decide whether to
> use a real-time thread or not simply by choosing the type of worker. If we
> had that, what on earth would be lagging behind desktop audio applications'
> abilities?
>

But Jussi, I'm approaching the problem from the perspective of what is
possible to do today using well-known techniques, not wishful thinking
about something which might be possible five years from now.  We simply don't
have the level of technology in our JavaScript runtimes (garbage collection,
blocking calls, the taking of locks, threading issues, etc.) to deliver the
kind of performance people expect and will compare to desktop/native
applications.  In the meantime, people are asking for advanced audio features
now.

Because audio is deadline-driven, you always need to be concerned with
worst-case performance, not average-case performance (for GC, etc.).
Here's an interesting link which explores some of these issues:
http://www.rossbencina.com/code/real-time-audio-programming-101-time-waits-for-nothing
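
To make the "time waits for nothing" point concrete, here is a small sketch of
the discipline that article describes: the real-time audio callback never
allocates, blocks, or takes a lock, and exchanges data with the rest of the
program only through a pre-allocated single-producer/single-consumer FIFO.
The names and sizes are made up for illustration:

    /* Sketch: lock-free single-producer/single-consumer FIFO, so the audio
     * callback has a bounded worst case: no malloc, no locks, no syscalls. */
    #include <stdatomic.h>
    #include <stddef.h>

    #define FIFO_SIZE 1024  /* must be a power of two */

    typedef struct {
        float         data[FIFO_SIZE];
        atomic_size_t write_pos;  /* advanced only by the producer (UI thread)    */
        atomic_size_t read_pos;   /* advanced only by the consumer (audio thread) */
    } spsc_fifo;

    /* Producer side (main/UI thread): returns 0 if the FIFO is full. */
    static int fifo_push(spsc_fifo *f, float value)
    {
        size_t w = atomic_load_explicit(&f->write_pos, memory_order_relaxed);
        size_t r = atomic_load_explicit(&f->read_pos,  memory_order_acquire);
        if (w - r == FIFO_SIZE)
            return 0;                        /* full: drop or retry later */
        f->data[w & (FIFO_SIZE - 1)] = value;
        atomic_store_explicit(&f->write_pos, w + 1, memory_order_release);
        return 1;
    }

    /* Consumer side (audio callback): returns 0 if nothing is available. */
    static int fifo_pop(spsc_fifo *f, float *out)
    {
        size_t r = atomic_load_explicit(&f->read_pos,  memory_order_relaxed);
        size_t w = atomic_load_explicit(&f->write_pos, memory_order_acquire);
        if (r == w)
            return 0;                        /* empty */
        *out = f->data[r & (FIFO_SIZE - 1)];
        atomic_store_explicit(&f->read_pos, r + 1, memory_order_release);
        return 1;
    }

A missed pop here just means stale data for one buffer; a lock or an
allocation in the callback can mean an unbounded stall and an audible glitch.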

Chris

Received on Wednesday, 8 August 2012 19:01:17 UTC