- From: Chris Rogers <crogers@google.com>
- Date: Thu, 4 Apr 2013 11:00:37 -0700
- To: Joseph Berkovitz <joe@noteflight.com>
- Cc: "public-audio@w3.org WG" <public-audio@w3.org>
- Message-ID: <CA+EzO0ne-2b2DVJi=KNY3=3G9=9ig2dOCBB0JXCwXVhNbco5ow@mail.gmail.com>
On Thu, Apr 4, 2013 at 7:31 AM, Joseph Berkovitz <joe@noteflight.com> wrote:

> I didn't see any responses to my OfflineAudioContext proposals earlier
> this week, but I want to highlight one point in particular that is
> troubling to me as an implementor and may require some thought as this
> feature is specced.
>
> A synthesis graph for a piece of music may contain literally thousands of
> audio sources, scheduled to turn on and off for the many notes in the
> piece. Normally the graph doesn't contain all of these at once. Instead the
> graph is replenished on a timer-driven basis (examining
> AudioContext.currentTime) to ensure that there are always enough sources to
> cover the next N seconds of music to be played.
>
> In the offline case this isn't possible because there are no interim
> points in the rendering process at which to replenish the graph. The entire
> graph must be built in advance and then rendered in bulk.
>
> Question: Is the size of such a graph an inherent problem with the offline
> rendering API as currently constituted?

We've stressed it in moderate ways and it seems fine. I doubt it will be an
issue, but you could try some experiments...

> ...Joe
>
> *Joe Berkovitz*
> President
>
> *Noteflight LLC*
> Boston, Mass.
> phone: +1 978 314 6271
> www.noteflight.com
> "Your music, everywhere"
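For readers of the archive: the timer-driven replenishment pattern Joe describes might be sketched as below. This is only an illustration, not code from either author; the `noteEvents` shape, the `eventsToSchedule` helper, and the `scheduleNote` callback named in the comments are all hypothetical.

```javascript
// Minimal sketch of timer-driven graph replenishment for a real-time
// AudioContext (hypothetical names; not from the original thread).

// Pure helper: select the note events that fall inside the look-ahead
// window [now, now + horizon) and have not been scheduled yet.
function eventsToSchedule(noteEvents, now, horizon) {
  return noteEvents.filter(
    (e) => !e.scheduled && e.time >= now && e.time < now + horizon
  );
}

// Against a live AudioContext `ctx`, a timer would drive this helper:
//
//   setInterval(() => {
//     for (const e of eventsToSchedule(noteEvents, ctx.currentTime, 5)) {
//       scheduleNote(ctx, e); // create a source node, connect it,
//                             // then source.start(e.time)
//       e.scheduled = true;
//     }
//   }, 1000);
//
// With an OfflineAudioContext there are no timer callbacks while the
// render runs, which is exactly Joe's point: every source node would
// have to be created up front, before startRendering() is called.
```

The helper is deliberately pure so the windowing logic can be tested without an audio graph; only the timer callback touches the context.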
Received on Thursday, 4 April 2013 18:01:17 UTC