- From: <bugzilla@jessica.w3.org>
- Date: Fri, 16 Nov 2012 22:58:54 +0000
- To: public-audio@w3.org
https://www.w3.org/Bugs/Public/show_bug.cgi?id=19975

--- Comment #2 from Chris Wilson <cwilso@gmail.com> ---

(In reply to comment #1)
> (In reply to comment #0)
> > It occurs to me that it would be much simpler to get rid of the
> > MIDIMessage object altogether, and promote timestamp and data to be
> > on the MIDIEvent.
>
> Sounds like a great idea, at least in theory! I'd very much like the
> simplicity of this. However, the idea of having multiple messages with
> different timestamps in the same event was designed with performance as
> the main concern. While I'm not terribly concerned with the performance
> of sending messages, as it's not a proven problem, this is.
>
> Firing events in the browser's main thread is not just susceptible to
> latency (setTimeout(..., 0) anyone? or rendering tasks blocking the
> event), but also quite expensive. On a medium-spec desktop, a mousemove
> event listener that updates a text box with the coordinates can take
> 50% of the CPU while the user is moving the mouse. And browsers
> throttle mousemove events.

Actually, that's not really the case. (The latency, of course, is why we
have timestamps at all, and I don't question that.) The problem in your
mouse case is typically that the text box updates are taking a ton of
time, not the event firing. (At least, that's my experience - I'd
welcome further insight. My Chrome eng friends assure me the event
firing is quite fast; IPC can be expensive, but in this particular case,
if you get the events batched up (e.g. as a packet list from CoreMIDI),
you could do the IPC once and then dispatch JS events multiple times.)

> A sequencer might easily be receiving messages a lot more often than
> mousemove events fire, and might even do more expensive things, and I
> don't even know how one would start throttling the MIDI events in JS
> (like you can do for mousemove) without incurring even more cycles.

You couldn't, really. But that's a general concern: if you take so long
processing MIDI events that you start growing a backlog, that's going to
happen in any case. The real question is whether adding more event
callbacks is going to contribute significantly to the problem or not.

Note that on Windows you're likely not going to be batching these anyway
- short messages are delivered individually, one at a time, to the MIDI
input callback. On CoreMIDI, a MIDIPacket carries a single timestamp,
but a MIDIPacketList can hold packets with multiple timestamps; of
course, realistically, you're not likely to see THAT much incoming data
batched together. On USB-MIDI it's the same thing: you're getting an
incoming stream of MIDI message bytes. You (the Web MIDI implementer)
can batch messages, but there's no timestamping on the incoming data, so
if you do that you're likely to batch it with a single timestamp.
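To make the batching point concrete, here is a rough TypeScript sketch;
the names MidiShortMessage, FlatMidiEvent and dispatchBatch are invented
for the example and are not the spec's IDL. The batch crosses the process
boundary once, and each message is then dispatched as its own event that
carries its own timestamp and data (the flattened shape from comment #0).

```typescript
// Rough sketch only: MidiShortMessage, FlatMidiEvent and dispatchBatch
// are illustrative names for this example, not the Web MIDI API's IDL.

// One short MIDI message as it might arrive from the OS layer: a
// timestamp plus the raw status/data bytes.
interface MidiShortMessage {
  timestamp: number;  // whatever clock the UA exposes, e.g. milliseconds
  data: Uint8Array;   // status byte + data bytes, e.g. [0x90, 0x3c, 0x7f]
}

// The "flattened" shape from comment #0: timestamp and data live
// directly on the event, with no intermediate MIDIMessage list.
class FlatMidiEvent extends Event {
  constructor(public readonly timestamp: number,
              public readonly data: Uint8Array) {
    super("midimessage");
  }
}

// One delivery from the backend (a CoreMIDI packet list, a Windows input
// callback, a USB-MIDI read, ...) may carry several short messages. The
// expensive hop (IPC) happens once per batch; the cheap part, firing a
// JS event per message, happens N times, each with its own timestamp.
function dispatchBatch(port: EventTarget, batch: MidiShortMessage[]): void {
  for (const msg of batch) {
    port.dispatchEvent(new FlatMidiEvent(msg.timestamp, msg.data));
  }
}

// Usage: two note-ons that arrived in the same batch but with different
// timestamps come out as two separate, individually timestamped events.
const port = new EventTarget();
port.addEventListener("midimessage", (e) => {
  const { timestamp, data } = e as FlatMidiEvent;
  console.log(timestamp, Array.from(data));
});
dispatchBatch(port, [
  { timestamp: 1000.0, data: new Uint8Array([0x90, 0x3c, 0x7f]) },
  { timestamp: 1000.5, data: new Uint8Array([0x90, 0x40, 0x7f]) },
]);
```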
> The current design, however, allows the UA to be aware of upcoming
> events ahead of time and use that information to optimize the number of
> events fired and when they're fired, since the current design doesn't
> even dictate that the MIDI messages attached to the event have the same
> timestamp. This might be crucial to allow reliable scheduling of, for
> example, Web Audio API events related to the MIDI messages, especially
> in graphics-heavy sequencer applications.

I'm confused by this set of statements, starting with "upcoming events
ahead of time". Do you intend future events to be pushed through
somehow? How would you ever be able to get a lookahead? That would only
seem possible with virtual ports (which we dropped as a v1 goal), or am
I missing something?

--
You are receiving this mail because:
You are the QA Contact for the bug.
Received on Friday, 16 November 2012 22:58:55 UTC