As a developer, I fully understand that the current version of the API is rather large and the group probably has their hands full fleshing out the specification for the existing components.
However, I do consider better support for spectral processing to be the biggest "hole" to fill in a version 2 (well, perhaps second to web worker script processors).

I don't like the idea of having spectral data flowing around the graph - it seems too complicated. What would be most convenient for me instead is a "spectral processor node" that behaves exactly like the script processor node, except that an FFT happens before the processing callback and an IFFT happens after it. There would be explicit controls for the analysis window size, the window shape, the resynthesis window shape, and the window overlap.

Any "spectral graph" style processing would have to be handled entirely by the client in JavaScript. I just don't see spectral graphs being a common enough case to warrant the giant increase in complexity and mental overhead.
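To make the proposal concrete, here is a minimal sketch of the windowed FFT -> process -> IFFT overlap-add loop such a node would encapsulate. Everything here is illustrative, not part of any spec: it uses a naive O(N^2) DFT for clarity (a real node would use an FFT), square-root Hann windows at 50% overlap, and a made-up `spectralFn` callback standing in for the client's spectral processing.

```javascript
// Forward/inverse DFT on split real/imaginary arrays.
// Naive O(N^2) for readability; a real implementation would use an FFT.
function dft(xre, xim, inverse) {
  const N = xre.length;
  const sign = inverse ? 1 : -1;
  const Re = new Float64Array(N);
  const Im = new Float64Array(N);
  for (let k = 0; k < N; k++) {
    for (let n = 0; n < N; n++) {
      const a = (sign * 2 * Math.PI * k * n) / N;
      const c = Math.cos(a);
      const s = Math.sin(a);
      Re[k] += xre[n] * c - xim[n] * s;
      Im[k] += xre[n] * s + xim[n] * c;
    }
    if (inverse) {
      Re[k] /= N;
      Im[k] /= N;
    }
  }
  return [Re, Im];
}

// Square root of a periodic Hann window. Applying it on both analysis and
// resynthesis gives perfect reconstruction at 50% overlap (hop = N/2).
function sqrtHann(N) {
  const w = new Float64Array(N);
  for (let n = 0; n < N; n++) {
    w[n] = Math.sqrt(0.5 * (1 - Math.cos((2 * Math.PI * n) / N)));
  }
  return w;
}

// Window each frame, transform, hand the spectrum to the client callback,
// inverse-transform, window again, and overlap-add into the output buffer.
function processSpectral(input, frameSize, hop, spectralFn) {
  const win = sqrtHann(frameSize);
  const out = new Float64Array(input.length);
  for (let pos = 0; pos + frameSize <= input.length; pos += hop) {
    const re = new Float64Array(frameSize);
    const im = new Float64Array(frameSize);
    for (let n = 0; n < frameSize; n++) re[n] = input[pos + n] * win[n];
    const [Re, Im] = dft(re, im, false);
    spectralFn(Re, Im); // client-side spectral processing, in place
    const [yre] = dft(Re, Im, true);
    for (let n = 0; n < frameSize; n++) out[pos + n] += yre[n] * win[n];
  }
  return out;
}
```

With an identity `spectralFn` the pipeline reconstructs the input (away from the buffer edges, where frames don't fully overlap), which is exactly the property the explicit window-size/shape/overlap controls would need to guarantee. Any "spectral graph" routing would then live entirely inside `spectralFn`, in client JavaScript.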
---
Reply to this email directly or view it on GitHub:
https://github.com/WebAudio/web-audio-api/issues/248#issuecomment-25979152