Re: Some thoughts

From: Chris Rogers <crogers@google.com>
Date: Fri, 3 Dec 2010 12:32:33 -0800
Message-ID: <AANLkTinC2tx2sqqtAD-YWFhfJsdiBPRJFG_xd8uw3jc8@mail.gmail.com>
To: Jussi Kalliokoski <jussi.kalliokoski@gmail.com>
Cc: public-xg-audio@w3.org
Hi Jussi, thanks for having such a detailed look.  I'll try my best to
address your comments.

On Fri, Dec 3, 2010 at 5:52 AM, Jussi Kalliokoski <
jussi.kalliokoski@gmail.com> wrote:

> Hey guys,
> Lately I've invested a great deal of time and thought in researching our
> Web Audio API, and some things are bothering me, so I thought I'd bring
> them up. But first I'd like to say I really appreciate the work that's
> been done here; it's awesome, and I actually hope you'll prove my points
> wrong.
> So, I'll just start listing things:
> First: synchronization. Say I have an AudioParam that is being modulated
> with an AudioCurve. Cool. What if I want to add a UI that controls it?
> According to the specification, it seems to me that the parameter would
> only change each time the AudioContext asks for a buffer, so if I, for
> example, move a slider, the value changes get stacked up until the next
> buffer, which would introduce audible edges if I change, say, a cutoff
> parameter slowly. Am I mistaken? This is also a big concern for MIDI
> events, which brings me to my next point:

I think you're asking about parameter smoothing (de-zippering), which is an
important implementation detail.
The current implementation *does* smooth parameter changes at the k-rate
(which is currently 128 sample-frames at 44.1 kHz).
Additionally, it smooths any volume changes at the audio rate, so these
changes will be very smooth.

I haven't implemented, or really written very much about, AudioCurve for
doing automation, although we've talked about it somewhat
on the mailing list and in the teleconferences.  The idea would be that the
implementation smooths these changes at least
at the k-rate, and possibly at the audio rate if that particular parameter
could benefit from such high precision.  I'm looking forward
to working more concretely on the actual AudioCurve API in the near future.
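The k-rate versus audio-rate distinction above can be sketched as follows. Since AudioCurve had no concrete API at the time, `curve` here is just a plain function of time, an assumption made purely for illustration:

```javascript
// Hypothetical sketch contrasting k-rate and a-rate evaluation of an
// automation curve over the same span of time.
const SAMPLE_RATE = 44100;
const BLOCK_SIZE = 128; // the k-rate granularity mentioned above

// A linear ramp from 0 to 1 over one second, standing in for an AudioCurve.
const curve = (t) => Math.min(1, t);

// k-rate: one curve value per 128-frame block, held for the whole block.
function kRateValues(numBlocks) {
  const out = [];
  for (let b = 0; b < numBlocks; b++) {
    out.push(curve((b * BLOCK_SIZE) / SAMPLE_RATE));
  }
  return out;
}

// a-rate: a fresh curve value for every sample-frame.
function aRateValues(numFrames) {
  const out = [];
  for (let i = 0; i < numFrames; i++) {
    out.push(curve(i / SAMPLE_RATE));
  }
  return out;
}

const k = kRateValues(2);              // 2 values covering 256 frames
const a = aRateValues(2 * BLOCK_SIZE); // 256 values for the same span
```

The trade-off is cost versus precision: a-rate evaluation does 128 times more work per block, which is only worth it for parameters (such as an oscillator frequency) where stepped values would be audible.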

> MIDI. Yes, we can always create VMKBDs or cool touch interfaces, but if
> this is to be used in music production, MIDI is a must. I understand we
> cannot achieve support for external MIDI devices yet, but it's going to
> come sooner or later. So I'm saying, let's not cripple the system from
> the beginning; we've seen too many examples of that in the audio area. I
> think we want this to be as ready for future possibilities as possible,
> and that, for me, means implementing built-in support for MIDI events,
> even though we can't yet receive them. This would help the in-browser
> VMKBD implementations and MIDI file readers that already exist, and the
> support would already be there when we have the actual devices too, not
> forcing developers to change the whole architecture they built on last
> time (like VST and DirectX have done).

I think it's easy to imagine a MIDI API where MIDI events are received very
much like key and mouse events.
Once these events are received, the application can then call into the
existing audio API to play notes, change parameter values, and so on.
So, although a MIDI API does not yet exist, I think it would integrate
fairly nicely into the existing architecture.
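As a concrete illustration of that event-style integration: no browser MIDI API existed at the time of this mail, so the decoder below works on raw 3-byte MIDI messages; the function name and the shape of the returned objects are assumptions for this sketch only.

```javascript
// Hypothetical sketch: decode a raw 3-byte MIDI message into an event
// object, much as the browser decodes raw input into key/mouse events.
// A handler receiving these would call into the audio API (play a note,
// set an AudioParam) exactly as a keydown handler would.
function decodeMidi([status, data1, data2]) {
  const command = status & 0xf0;
  const channel = status & 0x0f;
  if (command === 0x90 && data2 > 0) {
    // Note-on; velocity is normalized to 0..1 for use as a gain value.
    return { type: "noteon", channel, note: data1, velocity: data2 / 127 };
  }
  if (command === 0x80 || (command === 0x90 && data2 === 0)) {
    // Note-off (running-status note-on with velocity 0 also means note-off).
    return { type: "noteoff", channel, note: data1 };
  }
  if (command === 0xb0) {
    // Control change, e.g. a MIDI knob driving a filter cutoff parameter.
    return { type: "controlchange", channel, controller: data1, value: data2 / 127 };
  }
  return { type: "unknown", channel };
}

// Note-on, middle C (60), velocity 100, on channel 0:
const decoded = decodeMidi([0x90, 60, 100]);
```

The point is that the MIDI layer and the audio graph stay decoupled: the decoder produces ordinary events, and only the handler that consumes them needs to know about the audio API.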

> Third thing: we now have MODULES that are connectible, but the ideal
> situation, IMO, would be that we don't connect modules, we connect
> ports, just like in analog audio. Say there are three port types
> (Audio, MIDI and Param), and these all have outputs and inputs which
> can be connected. This, for me, is a much more flexible and modular
> environment, which I think is something we should achieve with our
> work. You can see what I mean by visiting my Modular Synth project at
> http://niiden.com/jstmodular/ (FF4 only).

I think somebody else has suggested something similar: that in addition to
supporting audio inputs and outputs for connections,
parameters could also be connected.  For the moment, in my current
implementation I'm trying to keep things
simpler and see how we can achieve the functionality in other ways.  As
things move along a bit further and we have a chance
to play with the APIs more, it may become clearer what the best
approach will be.

> I know all this seems a little bit late after all the hard and great
> work Chris has done and everyone here has agreed upon, but I really
> resist the idea of making a system that is already... outdated (sorry)
> on its release.
> Best Regards
> Jussi Kalliokoski
> P.S. Please don't hate me for this, I felt like I had to bring this
> up. :/ I would hope this is regarded as constructive criticism, and a
> place for further discussion.

Jussi, no problem.  I'm happy to get your feedback!  In the very near term,
it will be difficult for me to make any changes
to my current implementation at this stage of the development cycle.  I
think the API has been through quite a bit of review by
a number of people and will be a good first step, but we can hone and
perfect it as time goes on.

Received on Friday, 3 December 2010 20:33:06 UTC
