- From: Marcus Geelnard <mage@opera.com>
- Date: Thu, 12 Dec 2013 09:05:23 +0100
- To: "gaito@g200kg.com" <gaito@g200kg.com>, public-audio@w3.org
2013-12-11 19:44, gaito@g200kg.com wrote:
> I don't know the history but,
>
> As described in "1.3. API Overview",
> AnalyserNode is defined for "visualization applications".
>
> And getByteFrequencyData() has well-tuned default parameters
> (minDecibels, maxDecibels and smoothingTimeConstant)
> that fit the visualizers exactly.
> I think getByteTimeDomainData() is provided for easy switching
> of visualizer modes, Frequency/Waveform.
>
> Originally, getFloatFrequencyData() may have been uncalled-for.

I still don't see a reason for using byte data instead of float data,
though. With byte data you get a maximum graphical (typically vertical)
resolution of 256 levels. That's quite low, e.g. if you wish to draw a
full-screen oscilloscope on a 1920x1080 screen (a resolution degradation
of roughly 4x).

> # In addition,
> # I think the current function style is not suitable for much beyond
> # a visualizer. Callers can obtain only the latest block of freq/wave
> # data when called, so block-by-block processing cannot be guaranteed.
> #
> # I wish for a more reliable block-by-block processing method
> # (e.g. a callback) if the AnalyserNode is meant for broader
> # applications.

Entirely true. It should definitely not be used for audio processing.

/Marcus

> ----------------
> Tatsuya Shinyagaito
> gaito@g200kg.com

--
Marcus Geelnard
Technical Lead, Mobile Infrastructure
Opera Software
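For illustration, a minimal sketch of the byte-vs-float resolution point
(TypeScript against the Web Audio API; it assumes a running AudioContext
with the analyser already connected to some source, and the 1080-pixel
canvas height is just the figure from the mail):

    // Byte variant: each bin is quantized to 0..255 between minDecibels
    // and maxDecibels, so a 1080-pixel-tall plot gets at most 256
    // distinct levels (~4 screen pixels per step).
    const audioCtx = new AudioContext();
    const analyser = audioCtx.createAnalyser();
    const bins = analyser.frequencyBinCount;
    const canvasHeight = 1080;

    const byteData = new Uint8Array(bins);
    analyser.getByteFrequencyData(byteData);
    const yByte = canvasHeight - (byteData[0] / 255) * canvasHeight;

    // Float variant: bins are unquantized dB values, so vertical
    // resolution is limited only by the canvas, not by the data.
    const floatData = new Float32Array(bins);
    analyser.getFloatFrequencyData(floatData);
    const range = analyser.maxDecibels - analyser.minDecibels;
    const yFloat = canvasHeight -
      ((floatData[0] - analyser.minDecibels) / range) * canvasHeight;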
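On the block-by-block point: a callback-style path did exist at the time
in the form of ScriptProcessorNode, whose onaudioprocess event delivers
each block exactly once. A sketch of that alternative, not anything the
mail itself proposes:

    // Guaranteed block-by-block delivery via ScriptProcessorNode, in
    // contrast to polling an AnalyserNode, which can skip or re-read
    // blocks between calls. The source node is hypothetical.
    const ctx = new AudioContext();
    const processor = ctx.createScriptProcessor(2048, 1, 1);
    processor.onaudioprocess = (e: AudioProcessingEvent) => {
      const input = e.inputBuffer.getChannelData(0); // one full block
      // ... analyze or copy the block here ...
      e.outputBuffer.getChannelData(0).set(input);   // pass audio through
    };
    // someSource.connect(processor);  // hypothetical source
    processor.connect(ctx.destination);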
Received on Thursday, 12 December 2013 08:05:59 UTC