W3C home > Mailing lists > Public > public-webevents@w3.org > October to December 2011

Re: Draft Updated Charter adding Mouse Lock and Gamepad

From: Chris Wilson <cwilso@google.com>
Date: Thu, 6 Oct 2011 15:02:49 -0700
Message-ID: <CAJK2wqUGa+Sqnf7bYkJ0a9cJXmKP7WdSZFaLUA_gWGg4V54GeA@mail.gmail.com>
To: robert@ocallahan.org
Cc: Olli@pettay.fi, public-webevents@w3.org
On Wed, Oct 5, 2011 at 4:11 PM, Robert O'Callahan <robert@ocallahan.org>wrote:

> MediaStreams will be able to support multiple tracks, including multiple
> tracks of the same type.
> My ProcessedMediaStream proposal doesn't currently support direct access to
> multiple input tracks of the same type in a single input stream, for
> simplicity. However we could add API to split out individual tracks into
> separate streams, and feed those into a ProcessedMediaStream as separate
> input streams. Likewise ProcessedMediaStream can't produce multiple output
> tracks of the same type, but you could use multiple ProcessedMediaStreams
> (sharing the same worker state, even) and merge their results using another
> API to merge tracks from separate streams into a single stream. Or, we could
> add support for processing multiple tracks directly. It depends on the
> use-cases for multi-track processing; I don't understand those yet.

I confess, the multiple tracks topology eluded me.  I'm trying to understand
the common case of "interface supports many audio track i/os, how do I
select them, and what does that turn into in terms of Media Streams?"  Is
there an example that uses the DAW scenario?

>>    1. Probably the simplest approach would be to allow getUserMedia to
>>    return a single MediaStream with 20 audio tracks, and make it easy to split
>>    those out into separate streams if needed.
>> I think you're going to want the same model of "default audio device" that
Windows has here, but yeah.

>>    1. There doesn't seem to be symmetry between input and output of audio
>>    streams - or, really, the output object model is left to <audio>.  With
>>    MIDI, output and input are the same kinds of messages, and (more
>>    importantly) they will likely need to multiplex to the same number of
>>    different places (i.e. a single-output-sink model would not work at all for
>>    anything other than a General MIDI player).  At the very least, this seems
>>    like it would only solve the input problem for MIDI - because the local
>>    output models in Streams are currently just "sink it to an <audio> or
>>    <video>."  Or am I misunderstanding?
>> I'm not sure what alternative outputs you need. Existing
> MediaStreams-related proposals support recording to a (possibly compressed)
> binary blob and streaming over the network via PeerConnection. We can add
> new APIs that consume MediaStreams as needed.

I mean if I write an algorithmic music generator that just wants to spit out
a MIDI message stream, how do I create the output device, and what does the
programming model for that look like?  I think if I'm doing this to output
an audio stream, I write out to a binary blob (less ideal, but marginally
workable for MIDI data) and then hook up the stream to an <audio> (which
then routes it to the default audio device today).  But I don't have a
<midi> element to route the output to (and that has the same
interface-selection needs as input).

>>    1. I'm just slightly nervous by the general idea of treating
>>    processing of MIDI like processing of audio, given that it's not a
>>    consistent stream of temporal data in the same way as audio; it's
>>    instructions. (From http://www.midi.org/aboutmidi/intromidi.pdf: "MIDI...is
>>    a system that allows electronic musical instruments and computers to send
>>    instructions to each other."  Maybe that's okay, but other than the single
>>    scenario of implementing a JS synthesizer (an important one, obviously), I'd
>>    suggest you could similarly apply the same logic and say game controllers
>>    are incoming streams of instructions too.
>> I think any stream of real-time timestamped data could theoretically be
> added as a MediaStream track type. I'm not sure it would make sense to
> include game controller input streams though. In my view, *the critical
> things MediaStreams provide are synchronization and real-time processing
> that's immune from main-thread (HTML event loop) latency*. I think
> generally it won't hurt to simply deliver game controller input to the main
> thread as regular DOM events.

(Emphasis mine.)  I agree, I think a significant value MediaStreams provide
is synchronization and real-time processing immune from main-thread latency.
 My argument has been that that is less important with MIDI, and I'm
concerned about the complexity of the programming model that arises from
this model - enough so that I was thinking along the same lines - generally
it won't hurt to simply deliver MIDI controller input to the main thread as
regular DOM events.  (In Windows, MIDI messages are pumped through a message
loop, frequently on the same thread.)
Received on Thursday, 6 October 2011 22:03:17 UTC

This archive was generated by hypermail 2.3.1 : Tuesday, 6 January 2015 20:09:34 UTC