
Re: MIDI Tracks and Sequences (API Proposal)

From: James Ingram <j.ingram@netcologne.de>
Date: Sat, 15 Sep 2012 14:47:06 +0200
Message-ID: <505478CA.5000102@netcologne.de>
To: Chris Wilson <cwilso@google.com>
CC: Joseph Berkovitz <joe@noteflight.com>, public-audio@w3.org, Jussi Kalliokoski <jussi.kalliokoski@gmail.com>
Hi Chris, all,

(I've replied to Chris's other post separately)

On 14.09.2012 18:04, Chris Wilson wrote:
> On Fri, Sep 14, 2012 at 8:52 AM, James Ingram <j.ingram@netcologne.de 
> <mailto:j.ingram@netcologne.de>> wrote:
>     Yes, I understand that, but think that the data structures
>     underlying Tracks and Sequences are so basic that it would be hard
>     to see them becoming restrictive. 
> The real question would be "what is the value of codifying those data 
> structures into a standard?".

Well, they would not really need to be "coded into the standard"; their 
implementation could be hidden behind the relevant parts of the API. To 
recap a little more precisely, I think that the Web MIDI API should 
include support for the following abstractions:

Sequence: a collection of Tracks
Track: a collection of MIDIMoments
MIDIMoment: a collection of MIDIMessages having the same timestamp

I'm not sure, but maybe a MIDIMoment is what the current spec is calling 
a MIDIEvent.

These are simple, intuitively graspable concepts, which every MIDI 
programmer can understand, and I don't think they are going to change 
over time.
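To make the three abstractions concrete, here is a minimal sketch in JavaScript. All of the class and method names below are my own illustration, not part of any spec; it just shows a Sequence holding an array of Tracks, Tracks keeping their MIDIMoments sorted by timestamp, and a MIDIMoment rejecting messages whose timestamp doesn't match:

```javascript
// Hypothetical sketch of the proposed abstractions; none of these
// names come from the Web MIDI API itself.
class MIDIMoment {
  constructor(timestamp) {
    this.timestamp = timestamp;
    this.messages = [];          // MIDIMessages sharing this timestamp
  }
  appendMIDIMessage(message) {
    if (message.timestamp !== this.timestamp) {
      return false;              // reject a message with the wrong timestamp
    }
    this.messages.push(message);
    return true;
  }
}

class Track {
  constructor() {
    this.moments = [];           // kept sorted by timestamp
  }
  addMoment(moment) {
    // insert so that moments stay ordered by timestamp
    let i = this.moments.length;
    while (i > 0 && this.moments[i - 1].timestamp > moment.timestamp) i--;
    this.moments.splice(i, 0, moment);
  }
}

class Sequence {
  constructor() {
    this.tracks = [];            // a simple array of Tracks
  }
  addTrack(track) {
    this.tracks.push(track);
  }
}
```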

It would be up to the implementation to decide how to implement the 
collections. I could well imagine a Sequence containing a simple array 
of Tracks, and Tracks being kept as sorted, linked lists. MIDIMoments 
could be defined to play "as fast as possible" in order of the message 
index. Implementations are then in control of what "as fast as possible" 
means (apropos "throttling")... Hiding things like that from MIDI 
programmers is a bit of unexpected extra value... :-)

The main reason I want to nail these abstractions down is that I think 
that trying to play Sequences in the user thread is a mistake. Player 
threads really need to sleep between sending messages, and sleeping is 
something the user thread is not allowed to do.
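As an aside on how a player thread could genuinely sleep: Atomics.wait did not exist in 2012, so the following is purely an assumption about one way a modern implementation might block between messages without resorting to setTimeout or setInterval (browsers only allow Atomics.wait off the main thread, i.e. exactly in a Worker):

```javascript
// Sketch of a blocking sleep for a worker thread. The value at
// index 0 of the shared buffer never changes, so the wait always
// runs to its timeout, blocking the calling thread for ms millis.
const lock = new Int32Array(new SharedArrayBuffer(4));

function sleepMs(ms) {
  Atomics.wait(lock, 0, 0, ms);
}
```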
Also, MIDI programmers should always expect maximum accuracy, and should 
not have to fiddle with variables to get it. Accuracy should be the 
system's problem, not the MIDI programmers'.

A sequence.play() function would initiate one or more worker threads 
(I'd be inclined to have one thread per track), and play the tracks back 
using ordinary sleep() functions. That would be much less hassle, and 
much more accurate, than forcing MIDI programmers to use setInterval() 
or setTimeout() in the user thread.
The sequence.play() function needs to know exactly how the data 
structures are organised, but the MIDI programmers don't.
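To show that division of labour, here is a rough sketch (names and object shapes are my own, not from any spec) of what a playSection(from, to) implementation has to do with those data structures. Instead of sleeping and sending, this version just returns the messages in the order a player thread would send them, so the scheduling logic stays visible:

```javascript
// Hypothetical sketch: gather every message in [from, to) across all
// tracks, in timestamp order. A real player thread would sleep until
// each moment's timestamp and then send its messages "as fast as
// possible" in message-index order.
function collectSection(sequence, fromTimestamp, toNotIncludingTimestamp) {
  const selected = [];
  for (const track of sequence.tracks) {
    if (track.plays === false) continue;   // honour track filtering
    for (const moment of track.moments) {
      if (moment.timestamp < fromTimestamp) continue;
      if (moment.timestamp >= toNotIncludingTimestamp) break; // moments are sorted
      selected.push(moment);
    }
  }
  // merge the per-track moments into one timestamp-ordered stream;
  // messages inside a moment keep their index order
  selected.sort((a, b) => a.timestamp - b.timestamp);
  return selected.flatMap(m => m.messages);
}
```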

Perhaps it would help if I outline what I think ought to be in the API. 
These details would need discussing, of course, but I think the 
following list is pretty complete:

Constructors (empty objects):

midiMoment.appendMIDIMessage(MIDIMessage) // returns false if the timestamp is wrong

Player functions:
sequence.playSection(fromTimestamp, toNotIncludingTimestamp);
sequence.play() // shorthand for sequence.playSection(0, infinity);
sequence.stop() // stop and rewind to the current fromTimestamp
sequence.pause() // stops without rewinding
sequence.resume() // play from the current timestamp up to the current 
toNotIncludingTimestamp

I'd also like to have a way of filtering the tracks, so that not all of 
them are actually played.
Perhaps like this:
track.plays(boolean) // default is true
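A one-method sketch of that filter (again, hypothetical names), with the flag defaulting to true as suggested:

```javascript
// Hypothetical track-filtering sketch: the player skips any track
// whose plays flag is false. Called with no argument, the method
// doubles as a getter.
class Track {
  constructor() {
    this._plays = true;            // default: the track is played
  }
  plays(value) {
    if (value === undefined) return this._plays;
    this._plays = value;
  }
}
```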

That's all! :-)


Received on Saturday, 15 September 2012 12:47:46 UTC
