- From: James Ingram <j.ingram@netcologne.de>
- Date: Tue, 05 Jun 2012 15:48:42 +0200
- To: Jussi Kalliokoski <jussi.kalliokoski@gmail.com>
- CC: Chris Wilson <cwilso@google.com>, public-audio@w3.org
Hi Jussi,

> Garbage collection isn't necessarily a problem, since
> implementations will probably just use JS wrappers for the
> messages and the data will actually be stored in an underlying
> struct, and MIDI isn't exactly one of the highest traffic
> protocols anyway.
>
> I was thinking of situations in which there have to be large
> numbers of messages in memory waiting to be sent (maybe tens or
> even hundreds of thousands of them). But there are probably
> strategies for minimizing the problem (see below).
>
> This is actually where the timestamps shine. You can have a clock
> interval, like 200 milliseconds, where you proceed reading a list of
> events and queue the events that are going to occur in the following
> 200ms, sending them to be played at their respective times, without
> needing an individual setTimeout for each event, which is very
> CPU-intensive; not to mention that events that are supposed to occur
> at the same time don't necessarily do so, perhaps because GC,
> rendering, or something else is blocking the next timeout.

I think the penny just dropped! Thanks for your patience.

Is this actually working in Midibridge? Perhaps there should be an
example in the docs...

> 35 seconds is quite a long time to read a score (our JS audio codecs
> can read complete songs in a matter of seconds), so I suggest reading
> the score in a worker thread using a JS-based DOM library to avoid the
> overhead that might come with the browser's built-in DOM.

Am looking into this too, of course.

Thanks again, all the best,
James
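[A sketch of the lookahead scheduling Jussi describes, for illustration only. `events`, `scheduleWindow`, and `send` are hypothetical names, not Midibridge API: `events` is assumed to be a list sorted by `.time` (ms), and `send` stands in for whatever actually dispatches a timestamped MIDI message.]

```javascript
// One coarse "clock interval" (the 200 ms from the mail) drives the
// whole queue instead of one setTimeout per event.
const LOOKAHEAD_MS = 200;

// Collect every event that falls inside [now, now + LOOKAHEAD_MS) and
// hand each one off with its own timestamp, so the receiving layer,
// not a per-event timer, takes care of the exact timing.
function scheduleWindow(events, cursor, now, send) {
  while (cursor < events.length &&
         events[cursor].time < now + LOOKAHEAD_MS) {
    send(events[cursor], events[cursor].time); // timestamped dispatch
    cursor++;
  }
  return cursor; // resume point for the next tick of the clock interval
}

// Usage sketch: a single repeating timer advances the cursor.
//   let cursor = 0;
//   setInterval(() => {
//     cursor = scheduleWindow(events, cursor, performance.now(), send);
//   }, LOOKAHEAD_MS);
```

Because each batch is handed over with explicit timestamps, simultaneous events stay simultaneous even if GC or rendering delays the next tick, and only one timer runs regardless of how many thousands of messages are queued.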
Received on Tuesday, 5 June 2012 13:51:57 UTC