- From: Robert O'Callahan <robert@ocallahan.org>
- Date: Sat, 25 May 2013 15:21:55 +0800
- To: John Byrd <jbyrd@giganticsoftware.com>
- Cc: "public-audio@w3.org" <public-audio@w3.org>
- Message-ID: <CAOp6jLZRqvt4x_13_+AtFguKB6NKOir5yQJEXZMnsYXezc0H9A@mail.gmail.com>
On Sat, May 25, 2013 at 1:48 AM, John Byrd <jbyrd@giganticsoftware.com> wrote:

> In short, while the Web Audio API is great for JavaScript programmers, and
> the functionality I describe can certainly be implemented in JavaScript
> above the API layer, -all- audio applications will eventually need or
> desire the functionality I describe or some part of it, and therefore it
> might be of use to consider standardizing the process for serializing
> portions of and/or all the current Web Audio state.

Adding methods to AudioNodes to let you traverse the node graph would actually prevent us from making some very important optimizations (e.g., silently deleting nodes that have completely finished playing and are not otherwise referenced). So we shouldn't do that.

A Web app can explicitly keep track of the structure of the graph it has constructed (although it's likely to defeat the above optimizations by doing so). If you want to write a spec for how that is done, and/or write a spec for (de)serializing a node graph --- and implement a library that follows that spec --- go ahead. That can be completely separate from the Web Audio API spec itself, as long as it doesn't require changes to browsers.

Rob
--
"If you love those who love you, what credit is that to you? Even sinners love those who love them. And if you do good to those who are good to you, what credit is that to you? Even sinners do that."
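[Editor's note: as a sketch of the library-level approach Rob describes --- the app tracking its own graph rather than traversing AudioNodes --- something like the following could work. The `GraphTracker` class and all of its method names are hypothetical, not part of any spec; a real library would wrap actual AudioContext node creation.]

```javascript
// Hypothetical sketch: a registry that records graph structure as the
// app builds it, so the graph can be serialized later without any
// traversal methods on AudioNode itself.
class GraphTracker {
  constructor() {
    this.nodes = new Map(); // node object -> { id, type, params }
    this.edges = [];        // recorded { from, to } connections
    this.nextId = 0;
  }

  // Register a node the app created, remembering how to recreate it.
  register(node, type, params = {}) {
    this.nodes.set(node, { id: this.nextId++, type, params });
    return node;
  }

  // Record a connection, then perform the real connect() call.
  connect(source, target) {
    this.edges.push({
      from: this.nodes.get(source).id,
      to: this.nodes.get(target).id,
    });
    if (typeof source.connect === "function") source.connect(target);
  }

  // Serialize the recorded structure to JSON.
  serialize() {
    return JSON.stringify({
      nodes: [...this.nodes.values()],
      edges: this.edges,
    });
  }
}
```

Note that holding strong references to every node in a Map like this is exactly the trade-off Rob mentions: it would keep finished nodes alive and defeat the browser's ability to delete them silently, so a real library would need a way to release nodes it no longer needs.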
Received on Saturday, 25 May 2013 07:22:24 UTC