- From: Raymond Toy <rtoy@google.com>
- Date: Fri, 04 May 2018 20:06:27 +0000
- To: glengineered@gmail.com
- Cc: "public-audio@w3.org Group" <public-audio@w3.org>
- Message-ID: <CAE3TgXHPjo9Bo=-GBRGnrHPwXpg074e7MAA1y27T_GGAjOBDCQ@mail.gmail.com>
On Fri, May 4, 2018 at 9:24 AM Glen Pike <glengineered@gmail.com> wrote:

> Hi,
>
> I'm a 'user' of the WebAudio API, not a heavy user currently, but more of a tinkerer with a background in audio technology.
>
> I'd previously managed to re-create an experiment from many years ago where I was able to create my own wave-table based synth, but when I revisited this experiment recently, I noticed that the setWaveTable function had been deprecated in favour of setPeriodicWave.
>
> I'm emailing to ask for a bit of a background as to why this was done?

Are you sure setWaveTable did what you wanted? I think setWaveTable was just renamed to setPeriodicWave because it confused people into thinking this was some kind of wave-table synthesis when in fact it's really just the Fourier coefficients, as you point out.

An alternative approach would be to use an AudioWorkletNode to do what you want. Or an AudioBufferSourceNode, as pointed out in another message.

> To me, as a semi-lay-person with audio, I would best describe a sound as a set of samples - for example, I might want to make a sampler to trigger various snippets (can be done with other nodes, I know), or produce a wavetable that my browser doesn't have the capability to generate because it comes from lots of effects, etc.
>
> Now, I know it's possible to describe basic / fundamental wave-forms as polar co-ordinates, but I always struggled with this level of DSP - it was far easier for me to bang out a for-loop to generate a load of samples than to try and describe it mathematically. I know that I can still do this if I want to and pass my samples through an FFT to get the polar co-ordinates, but to me that seems a bit backwards, considering it's going to be passed through another IFFT in order to become samples again.
>
> I feel that I speak for many people who would visualise sound best as a series of samples over a mathematical function.
>
> I can only assume that one of the reasons may be to prevent 'incorrectly described' waveforms, i.e. glitchy stuff, from getting into the audio chain, but some of us like that sort of stuff...
>
> I hope you can enlighten me as I'd love to see that function reinstated.
>
> Thanks
>
> Glen Pike
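[Editor's note: a minimal sketch of the renamed API discussed above. createPeriodicWave takes arrays of cosine/sine Fourier coefficients, and the resulting PeriodicWave is handed to an OscillatorNode via setPeriodicWave, the method that replaced setWaveTable. The particular coefficient values and the 220 Hz frequency are illustrative, not taken from the thread.]

```js
const ctx = new AudioContext();

// real/imag hold the cosine/sine Fourier coefficients; index 0 is DC and is ignored.
// A few odd harmonics here, purely as an example.
const real = new Float32Array([0, 0, 0, 0, 0]);
const imag = new Float32Array([0, 1, 0, 1 / 3, 0]);

const wave = ctx.createPeriodicWave(real, imag);

const osc = ctx.createOscillator();
osc.setPeriodicWave(wave);        // replaces the deprecated setWaveTable(...)
osc.frequency.value = 220;        // arbitrary pitch for the example
osc.connect(ctx.destination);
osc.start();
```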
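[Editor's note: and a sketch of the sample-buffer alternative mentioned in the reply, for readers who prefer to "bang out a for-loop" of samples: fill an AudioBuffer and loop it with an AudioBufferSourceNode. The cycle length, the sine fill, and the 220 Hz target are assumptions for illustration.]

```js
const ctx = new AudioContext();

const cycleLength = 512;          // samples per cycle (arbitrary choice)
const buffer = ctx.createBuffer(1, cycleLength, ctx.sampleRate);
const data = buffer.getChannelData(0);

// Fill the buffer with any hand-rolled waveform; a plain sine here for brevity.
for (let i = 0; i < cycleLength; i++) {
  data[i] = Math.sin((2 * Math.PI * i) / cycleLength);
}

const src = ctx.createBufferSource();
src.buffer = buffer;
src.loop = true;                  // repeat the single stored cycle
// playbackRate sets the pitch: desired frequency / (sampleRate / cycleLength)
src.playbackRate.value = 220 / (ctx.sampleRate / cycleLength);
src.connect(ctx.destination);
src.start();
```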
Received on Friday, 4 May 2018 20:07:08 UTC