
Re: Draft Updated Charter adding Mouse Lock and Gamepad

From: Chris Wilson <cwilso@google.com>
Date: Wed, 5 Oct 2011 16:59:05 -0700
Message-ID: <CAJK2wqXCz8DXtLcFGKTDc9q-zsa2BVojPfPiR87mi2x7M-3ZBw@mail.gmail.com>
To: robert@ocallahan.org
Cc: Olli@pettay.fi, public-webevents@w3.org
More response tomorrow, but on "I know nothing about MIDI" - MIDI is a
series of short messages sent on a serial connection, originally designed to
be the communication protocol between a "keyboard" and a "synthesizer".
 Most messages are either 2 or 3 bytes long.  Each connection is
unidirectional (i.e. for bidirectional communication between two synths,
you need two cables).  There are 16 "channels" on each MIDI connection -
these are purely virtual; most messages carry a channel identifier.  To
understand more deeply, look at the first page or so of
http://www.midi.org/techspecs/midimessages.php - this lists the MIDI
messages.  Most of MIDI in practice is key-on, key-off, and controller
messages.  There's also a very simplistic time code for syncing sequencers
to the same place in a song, and a timing clock message - basically, a tick
every 1/24 of a quarter note.  However, it's important to understand that
these are instructional messages - the MIDI stream is still "live" (notes
will sound) regardless of whether the timing clock is active, or is ever
sent at all.
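
To make the framing concrete - this is just a sketch, and decodeStatus is my
name for the helper, not part of any proposed API - pulling the message type
and channel out of a channel-message status byte looks like:

```javascript
// Decode the status byte of a MIDI channel message.
// High nibble = message type, low nibble = channel (0-15 on the wire,
// though musicians refer to channels as 1-16).
function decodeStatus(statusByte) {
  return {
    type: statusByte >> 4,            // e.g. 0x9 = note on, 0x8 = note off
    channel: (statusByte & 0x0f) + 1, // 1-16, as channels are spoken of
  };
}

// 0x90 is a note-on on channel 1; 0xB3 is a controller message on channel 4.
console.log(decodeStatus(0x90)); // { type: 9, channel: 1 }
console.log(decodeStatus(0xb3)); // { type: 11, channel: 4 }
```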

There are also "system-exclusive" messages, or "sysex" - these are specific
to given manufacturers and frequently specific models, e.g. letting you
send/receive patch configuration data between identical synthesizers (or
sending/receiving to a computer for patch library storage).  The format of
these is defined to a large extent by the manufacturer.
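
For completeness, a sketch of the sysex framing: the 0xF0 start byte and 0xF7
end byte come from the MIDI spec, while the buildSysex helper and the example
payload below are mine, purely for illustration:

```javascript
// Frame a system-exclusive message: 0xF0, manufacturer ID, then
// manufacturer-defined 7-bit data bytes, terminated by 0xF7.
function buildSysex(manufacturerId, dataBytes) {
  // All data bytes must have the top bit clear (0-127).
  if (dataBytes.some((b) => b > 0x7f)) {
    throw new RangeError("sysex data bytes must be 7-bit");
  }
  return [0xf0, manufacturerId, ...dataBytes, 0xf7];
}

// A made-up two-byte payload sent to manufacturer ID 0x41:
buildSysex(0x41, [0x10, 0x42]); // → [0xF0, 0x41, 0x10, 0x42, 0xF7]
```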

If you think of MIDI as starting from the simplistic "send one three-byte
message when a key goes on, send another message when the key goes off",
you've got the skeleton of MIDI.  For example, if I press middle C on my
synth set to output on channel 1 (MIDI channels are referred to as 1-16,
even though of course they're represented as 0-15), with a moderate
velocity, it would send:

10010000 00111100 00111111 (binary of the three bytes, as per the message
spec I mentioned above).
First byte is note on (1001) and channel 1 (0000).  Second byte is note
number (0-127, top bit is always 0 - middle C is "60" decimal, so 00111100).
 Third byte is velocity (0-127; again, top bit is always zero, 0x3F is
moderate, so 00111111).

When I release the key, it would send:
10000000 00111100 00000000
First byte is note off (1000) and channel 1 (0000).  Second byte is note
number (you can't ever have two of the same note sounding on the same
channel in MIDI).  Third byte is velocity - but many keyboards don't support
release velocity, so just zero.

The short version is "I want to be able to easily write code so I can do
noteOn(channel, key #, velocity) and handleNoteOn(channel, key #,
velocity)."
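
A minimal sketch of that shape, assuming the raw bytes come from or go to
some transport that's out of scope here.  The byte layout follows the
examples above; handleMessage and its velocity-0 convention (note-on with
velocity 0 is conventionally treated as note-off) are my framing:

```javascript
// Build the three-byte messages from the examples above.
// channel is 1-16 as written; on the wire it becomes 0-15.
function noteOn(channel, key, velocity) {
  return [0x90 | (channel - 1), key & 0x7f, velocity & 0x7f];
}

function noteOff(channel, key, releaseVelocity = 0) {
  return [0x80 | (channel - 1), key & 0x7f, releaseVelocity & 0x7f];
}

// Dispatch an incoming three-byte message to the matching handler.
function handleMessage(bytes, handlers) {
  const type = bytes[0] >> 4;
  const channel = (bytes[0] & 0x0f) + 1;
  if (type === 0x9 && bytes[2] > 0) {
    handlers.handleNoteOn(channel, bytes[1], bytes[2]);
  } else if (type === 0x8 || type === 0x9) {
    // Note-on with velocity 0 is conventionally a note-off.
    handlers.handleNoteOff(channel, bytes[1], bytes[2]);
  }
}

// Middle C, channel 1, moderate velocity - the bytes from the example:
noteOn(1, 60, 0x3f); // → [0x90, 0x3C, 0x3F], i.e. 10010000 00111100 00111111
```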

Do ProcessedMediaStreams need to be in a worker?

On Wed, Oct 5, 2011 at 4:11 PM, Robert O'Callahan <robert@ocallahan.org> wrote:

> On Thu, Oct 6, 2011 at 11:30 AM, Chris Wilson <cwilso@google.com> wrote:
>
>> Hmm.  An intriguing idea, and I can see what you mean by thinking of MIDI
>> synthesizers in JS as being essentially just a stream transform.  I had a
>> few off-the-cuff concerns with that approach, based on my current
>> understanding of Media Streams - can you help me understand these?
>>
>>    1. I'm somewhat confused by how the selection of sources would work in
>>    practice, even for audio/video.  It seems from a quick review like it would
>>    be a very Flash-esque "you have one audio input, one video input, one audio
>>    output (although that's really just an <audio> element)" - i.e., there's no
>>    multiple-input/multiple-output stream handling.  Or am I missing something?
>>     Sorry, Media Streams is rather complex to grok, but I'm working on it.
>>
>>
> MediaStreams will be able to support multiple tracks, including multiple
> tracks of the same type.
>
> My ProcessedMediaStream proposal doesn't currently support direct access to
> multiple input tracks of the same type in a single input stream, for
> simplicity. However we could add API to split out individual tracks into
> separate streams, and feed those into a ProcessedMediaStream as separate
> input streams. Likewise ProcessedMediaStream can't produce multiple output
> tracks of the same type, but you could use multiple ProcessedMediaStreams
> (sharing the same worker state, even) and merge their results using another
> API to merge tracks from separate streams into a single stream. Or, we could
> add support for processing multiple tracks directly. It depends on the
> use-cases for multi-track processing; I don't understand those yet.
>
>
>>    2. On that same topic, it would be quite rare to only have one MIDI
>>    interface (i.e., one 16-channel source/sink) in a studio environment.  I
>>    think I have around 16 at home (i.e. 16 MIDI ins, 16 MIDI outs - each with
>>    MIDI's 16 channels).  I really don't want to pop a "please select which MIDI
>>    interface to use" a la Flash's "this app wants to use your microphone and
>>    camera" as implied by
>>    http://www.whatwg.org/specs/web-apps/current-work/multipage/video-conferencing-and-peer-to-peer-communication.html#obtaining-local-multimedia-content
>>    every time I load my sequencer app.
>>
>>
> The UA would want some kind of "remember this decision" (on a per-app
> basis) UI.
>
>
>>
>>    3. Similarly, on the audio side I don't think a single input stream is
>>    going to cut it for DAW purposes; I have around 20 channels of audio on my
>>    workstation at home (and it's very old and decrepit).
>>
>>
> Probably the simplest approach would be to allow getUserMedia to return a
> single MediaStream with 20 audio tracks, and make it easy to split those out
> into separate streams if needed.
>
>
>>    4. I'm concerned by the content model of different streams - i.e. how
>>    easy it would be to make the MIDI stream object model expose typical short
>>    messages and sysex messages, such that you can easily fire the key on/off
>>    "events" that they represent.
>>
>>
> I don't understand this one (I know nothing about MIDI).
>
>
>>
>>    5. There doesn't seem to be symmetry between input and output of audio
>>    streams - or, really, the output object model is left to <audio>.  With
>>    MIDI, output and input are the same kinds of messages, and (more
>>    importantly) they will likely need to multiplex to the same number of
>>    different places (i.e. a single-output-sink model would not work at all for
>>    anything other than a General MIDI player).  At the very least, this seems
>>    like it would only solve the input problem for MIDI - because the local
>>    output models in Streams are currently just "sink it to an <audio> or
>>    <video>."  Or am I misunderstanding?
>>
>>
> I'm not sure what alternative outputs you need. Existing
> MediaStreams-related proposals support recording to a (possibly compressed)
> binary blob and streaming over the network via PeerConnection. We can add
> new APIs that consume MediaStreams as needed.
>
>
>>    6. I'm just slightly nervous by the general idea of treating
>>    processing of MIDI like processing of audio, given that it's not a
>>    consistent stream of temporal data in the same way as audio; it's
>>    instructions.  (From http://www.midi.org/aboutmidi/intromidi.pdf: "MIDI...is
>>    a system that allows electronic musical instruments and computers to send
>>    instructions to each other.")  Maybe that's okay, but other than the single
>>    scenario of implementing a JS synthesizer (an important one, obviously), I'd
>>    suggest you could similarly apply the same logic and say game controllers
>>    are incoming streams of instructions too.
>>
>>
> I think any stream of real-time timestamped data could theoretically be
> added as a MediaStream track type. I'm not sure it would make sense to
> include game controller input streams though. In my view, the critical
> things MediaStreams provide are synchronization and real-time processing
> that's immune from main-thread (HTML event loop) latency. I think generally
> it won't hurt to simply deliver game controller input to the main thread as
> regular DOM events.
>
> Rob
> --
> "If we claim to be without sin, we deceive ourselves and the truth is not
> in us. If we confess our sins, he is faithful and just and will forgive us
> our sins and purify us from all unrighteousness. If we claim we have not
> sinned, we make him out to be a liar and his word is not in us." [1 John
> 1:8-10]
>
Received on Thursday, 6 October 2011 00:06:37 UTC
