
[Minutes] Audio face-to-face meeting, 26/27 March 2013

From: Olivier Thereaux <Olivier.Thereaux@bbc.co.uk>
Date: Wed, 3 Apr 2013 13:45:50 +0000
To: "public-audio@w3.org" <public-audio@w3.org>
Message-ID: <CD81F11C.4C5E%olivier.thereaux@bbc.co.uk>
Dear all,

The raw minutes of our recent face-to-face meeting are available online at:



Many thanks to all the participants, especially those who travelled a long
way to attend. I can honestly say this was one of the most intense and
effective face-to-face meetings I've ever attended. I hope you found it
useful too.

Also, many thanks to Google for hosting us again. The location was
fantastic, and logistics near perfect.

I have written up a summary (plain text, below) of the meeting.
It is also on the wiki at:

Feel free to make small amendments if you notice anything wrongly minuted.
Discussions on issues should happen on individual issues pages in
bugzilla, or in separate threads on the mailing-list.


## Web Audio API V1 / Feature freeze

First meaty agenda item of the day was to discuss whether/how we could split
the Web Audio API spec in order to make it easier to implement. We looked
at a strawman split, drawn up by people at Mozilla, of features known to
be useful to game developers (one of our primary constituencies; see the
Use Cases & Requirements document).

The strawman list was
(Version 1)
- AudioContext
- OfflineAudioContext (at risk)
- GainNode
- AudioBuffer
- AudioBufferSourceNode
- MediaElementAudioSourceNode
- MediaStreamAudioSourceNode
- ScriptProcessorNode
- PannerNode
- DynamicsCompressorNode
- BiquadFilterNode
- DelayNode

(Not sure)
- ConvolverNode
- WaveShaperNode
- OscillatorNode

(Post v1)
- AnalyserNode
- ChannelSplitterNode
- ChannelMergerNode

The group could not reach consensus on whether this split would be
valuable. There was, however, consensus on three things:

- all the features currently in the spec are valuable and would be useful
to have sooner or later
- removing one or two node types will not dramatically ease
implementation
- our effort is better spent specifying all node types more precisely,
which will make implementation much easier
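For a sense of what the V1 subset covers, here is a minimal sketch of a playback chain built only from nodes in the V1 list (AudioBufferSourceNode, GainNode, destination). The `buildPlaybackChain` helper is hypothetical; the context object is assumed to expose the standard factory methods:

```javascript
// Sketch: a basic playback chain using only proposed-V1 nodes.
// ctx is assumed to be an AudioContext (or compatible object).
function buildPlaybackChain(ctx, buffer, volume) {
  const source = ctx.createBufferSource(); // AudioBufferSourceNode
  const gain = ctx.createGain();           // GainNode
  source.buffer = buffer;                  // decoded audio data
  gain.gain.value = volume;                // linear gain
  source.connect(gain);                    // source -> gain
  gain.connect(ctx.destination);           // gain -> speakers
  return source;
}
```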

## Web Audio API walk through and live edit


- Resolution: WebIDL should be construed as normative
- Resolution: Implementations are to omit functions from DOM bindings that
are not implemented (e.g. createXXXNode where XXX isn't supported)
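The second resolution makes feature detection straightforward: since unimplemented factory methods are omitted from the bindings entirely, checking for a method's presence on the context suffices. A hypothetical sketch (`supportsNodeType` is not a spec name):

```javascript
// Sketch: feature detection under the resolution above — if a node type
// is unsupported, its createXXX factory is simply absent from the context.
function supportsNodeType(ctx, factoryName) {
  return typeof ctx[factoryName] === 'function';
}

// e.g. supportsNodeType(ctx, 'createConvolver')
```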

Issues generated/modified :

- Remove empty AudioSourceNode interface

- Introduction: link "use cases" to the stable Use Cases document

- Features list needs updating to reflect current contents of spec

- API Overview is missing some interfaces

- Conformance section: need to note use of MUST that is "RFC-legal" [...]

- Remove "Terminology and Algorithms" section

- Remove AudioContext constructor code example

- Need way to determine "performance.now()" time of current audio output

- Deprecate AudioContext.createBuffer

- decodeAudioData: optional 4th argument

- decodeAudioData Prose: avoid video containers that have an audio track

- Specify all exception types

- Make AudioContext and AudioNode Lifetime sections informative

- AudioDestinationNode does not always talk to audio hardware

- Mandate a useful range of accepted sampling rates for buffers created
through AudioContext.createBuffer
(mentioned, TBD)

- Remove sentence: "The decodeAudioData() method is preferred over the [...]"

- Add normative reference to XHR spec

- Modifying the ArrayBuffer passed to decodeAudioData

- OfflineAudioContext should be event target

- how do multiple offline/online contexts interact

- Allow shared audio buffers between contexts

- OfflineAudioContext renders as quickly as possible (not real time)

- Proposed: recorderNode

- AudioNode Interface - text for Fan-In is out of date

- AudioNode - block size limits

- AudioNode Attributes - remove mention of AudioSourceNode

- Add detail of connecting audio node to non audio node

- Define the behaviour when disconnect called on an audio node connected
to an audio param

- Channel count missing in IDL for AudioNode

- Move information on multi channel to audio node definition

- Review 32 channel limitation on scriptProcessor, buffer and destination

- Specify how DelayNode deals with changes of inputs and buffers while live

- AudioParam - min/maxValue, intrinsic value, computedValue

- Clarify "dezippering" for AudioParam

- AudioParam - add explanation of a/k rate to cross reference in node

- Record all documentation that is considered developer documentation

- AudioProcessingEvent - remove node attribute

- PannerNode - include informative note on HRTF, point to reference/open

- PannerNode - add information on why the panner is hard coded to 2
channel only

- BiquadFilterNode is underdefined

- BiquadFilterNode - Missing default values

## Web MIDI

Chris Wilson walked us through the history of this spec and how he realised
that our work on web audio would be a great opportunity to connect
controllers (cheap and plentiful) and build synthesisers and music apps on
the web. There are concerns that the API will not be high on implementation
priority lists, but in the meantime there is a shim built on the Jazz MIDI
plugin: https://github.com/cwilso/WebMIDIAPIShim .
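As a flavour of what the API (and the shim) delivers: incoming MIDI messages arrive as byte arrays (a Uint8Array in MIDIMessageEvent.data), where a status byte of 0x9n is note-on and 0x8n is note-off on channel n, with note-on at velocity 0 conventionally treated as note-off. A hypothetical decoder:

```javascript
// Sketch: decoding a short MIDI channel message of the kind the
// Web MIDI API delivers in MIDIMessageEvent.data.
function parseMidiMessage(data) {
  const status = data[0] & 0xf0;  // upper nibble: message type
  const channel = data[0] & 0x0f; // lower nibble: channel 0-15
  if (status === 0x90 && data[2] > 0) {
    return { type: 'noteOn', channel, note: data[1], velocity: data[2] };
  }
  if (status === 0x80 || (status === 0x90 && data[2] === 0)) {
    return { type: 'noteOff', channel, note: data[1] };
  }
  return { type: 'other', channel };
}
```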

## Testing

We established that our goal for the time being was to focus on testing
the quality (and testability) of the spec and interoperability of
implementations. Testing "quality of implementation" (which is a very hard
question for audio) or testing how our APIs integrate with the rest of the
web platform is somewhat out of scope at the moment.

A short discussion was followed by demos:

- Chris Rogers gave a tour of the webkit tests and how they are organised
- Chris Lowis showed his current effort to create interface tests from IDL
- Ehsan showed the Mozilla test framework

It was noted that the Mozilla and Webkit test harnesses are somewhat
similar, and that it should be reasonably easy to translate these tests
into the w3c harness. The fact that some of the webkit tests require
OfflineAudioContext (a feature which Moz considers potentially at risk)
was raised but not solved.
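The IDL-driven interface tests Chris Lowis demoed boil down to checking that each member an IDL fragment declares is actually present on an instance. A hypothetical sketch of that idea (not the actual w3c harness API):

```javascript
// Sketch: given the member names an IDL fragment declares for an
// interface, report which are missing from a live instance.
function missingMembers(instance, expectedMembers) {
  return expectedMembers.filter(name => !(name in instance));
}
```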

## Guest session

We split our guest session in two:

- Jory from the html5audio blog/twitter came to talk about interactions
with the games dev community, and about games dev engines and
libraries which support web audio. Something similar to
http://threejs.org/ would be valuable.

- Erik and Neil from Khronos also came to talk about the relationship
between OpenSL and the Web Audio API. The group working on OpenSL has been
hearing interest in a web port. They also worked on use cases, which
largely overlapped with the ones our group built. There was rapid
agreement that it would be a bad thing to have a "webSL" and "web audio
API" compete in this space, but it could be a good thing if a web version
of OpenSL ES built upon the Web Audio API, abstracting away the node/graph
model.

## Vendor prefixes & "deprecated" interfaces

Back to discussion of "big issues" with the Web Audio API, we tackled the
question of vendor prefixes and "deprecated" interfaces (such as noteOn
and noteOff). The group first leaned towards changing the deprecation text
to remove the recommendation to implement the deprecated methods, but after
further discussion and a closer look at the current state of
implementation and usage, we reached the following resolution:

RESOLUTION: We'll support the two interfaces in the spec, with noteOn
etc. in a separate section. Change the deprecated methods to "alternate
names" and explain that they exist in the spec for historical reasons.
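In practice, the "alternate names" resolution means noteOn/noteOff survive as historical aliases of start/stop on source nodes. A hypothetical shim sketch for code that must run on engines exposing only the old names:

```javascript
// Sketch: bridge the historical noteOn/noteOff names to start/stop
// on a source node that only implements the old names.
function shimAlternateNames(sourceNode) {
  if (typeof sourceNode.start !== 'function' && typeof sourceNode.noteOn === 'function') {
    sourceNode.start = sourceNode.noteOn.bind(sourceNode);
  }
  if (typeof sourceNode.stop !== 'function' && typeof sourceNode.noteOff === 'function') {
    sourceNode.stop = sourceNode.noteOff.bind(sourceNode);
  }
  return sourceNode;
}
```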

There was also general consensus that vendor prefixing should be taken
out of documentation and articles as soon as implementations catch up.
Ditto for "new names" and interfaces, which will be prioritised in all
documentation.
## Web workers

We looked at the part of MSP dealing with processing in workers (
http://www.w3.org/TR/streamproc/#stream-mixing-and-processing ) and agreed
to add a way to use ScriptProcessorNode with workers, in a similar fashion.
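At its core, the ScriptProcessorNode callback the group wants to make worker-friendly is a per-block transform over the event's input and output buffers. A sketch of such a callback body (the hypothetical `processGain` helper assumes the AudioProcessingEvent shape: inputBuffer/outputBuffer with numberOfChannels and getChannelData()):

```javascript
// Sketch: apply a gain inside an onaudioprocess-style callback,
// channel by channel, sample by sample.
function processGain(event, gain) {
  for (let ch = 0; ch < event.outputBuffer.numberOfChannels; ch++) {
    const inData = event.inputBuffer.getChannelData(ch);   // Float32Array
    const outData = event.outputBuffer.getChannelData(ch); // Float32Array
    for (let i = 0; i < inData.length; i++) {
      outData[i] = inData[i] * gain;
    }
  }
}
```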

The discussion continues on bugzilla:

## Next call date

Given the time it will take to address and follow up all the issues raised
during the meeting, as well as a few other calendar considerations, we
agreed our next teleconference will be on April 25th, same time as usual.


Before adjourning, we had a short discussion about the upcoming TPAC in
Shenzhen, China. The group has not yet decided whether it will meet there,
but there was general agreement that, whether or not we meet, some presence
at the event for developer advocacy would be a good thing.



Received on Wednesday, 3 April 2013 13:46:29 UTC
