- From: <bugzilla@jessica.w3.org>
- Date: Sat, 17 Nov 2012 16:49:16 +0000
- To: public-audio@w3.org
https://www.w3.org/Bugs/Public/show_bug.cgi?id=19975

--- Comment #3 from Jussi Kalliokoski <jussi.kalliokoski@gmail.com> ---

(In reply to comment #2)
> (In reply to comment #1)
> > (In reply to comment #0)
> > > It occurs to me that it would be much simpler to get rid of the
> > > MIDIMessage object altogether, and promote timestamp and data to be
> > > on the MIDIEvent.
> >
> > Sounds like a great idea, at least in theory! I'd very much like the
> > simplicity of this. However, the idea of having multiple messages with
> > different timestamps in the same event was designed with performance
> > as the main concern. While I'm not deeply concerned with the
> > performance of sending messages, as it's not a proven problem, this is.
> >
> > Firing events in the browser's main thread is not just susceptible to
> > latency (setTimeout(..., 0) anyone? or rendering tasks blocking the
> > event), but also quite expensive. On a medium-spec desktop, having an
> > event listener for mousemove update a text box with the coordinates
> > can take 50% of the CPU while the user is moving the mouse. And
> > browsers throttle mousemove events.
>
> Actually, that's not really the case. (The latency, of course, is why
> we have timestamps at all, and I don't question that.) The problem, in
> your mouse case, is typically that the text box updates are taking a
> ton of time, not the event firing. (At least, that's my experience -
> I'd welcome further insight. My Chrome eng friends assure me the event
> firing is quite fast; IPC can be expensive, but in this particular
> case, if you get the events batched up (e.g. as a packet list from
> CoreMIDI), you could do IPC once and then JS event dispatching
> multiple times.)

Good to know! My statement was based on my own empirical experience of
every quick-firing event being very expensive, but that expense probably
came from what the handlers were doing. Still, I doubt that firing a lot
of events is less expensive than batching them into one.
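To make the two dispatch models being compared concrete, here is a minimal sketch. The event and port shapes (`makePort`, `dispatchPerMessage`, `dispatchBatched`, the `messages` array) are hypothetical illustrations, not the Web MIDI API itself; the point is only the difference in dispatch counts when several messages pile up.

```javascript
// A minimal event target that just invokes registered listeners.
function makePort() {
  return {
    listeners: [],
    addEventListener(fn) { this.listeners.push(fn); },
    dispatch(event) { for (const fn of this.listeners) fn(event); },
  };
}

// Messages that piled up while the main thread was busy (e.g. during a reflow).
const pending = [
  { timestamp: 100.0, data: new Uint8Array([0x90, 60, 100]) }, // note on
  { timestamp: 100.5, data: new Uint8Array([0x90, 64, 100]) }, // note on
  { timestamp: 101.0, data: new Uint8Array([0x80, 60, 0]) },   // note off
];

// Model A: data/timestamp promoted onto the event; one dispatch per message.
function dispatchPerMessage(port, messages) {
  let dispatches = 0;
  for (const m of messages) {
    port.dispatch({ type: 'midimessage', timestamp: m.timestamp, data: m.data });
    dispatches++;
  }
  return dispatches;
}

// Model B: one event whose `messages` array batches everything that piled up,
// each message keeping its own timestamp.
function dispatchBatched(port, messages) {
  port.dispatch({ type: 'midimessage', messages: messages.slice() });
  return 1;
}
```

With three queued messages, Model A invokes every listener three times while Model B invokes it once; the per-message timestamps survive either way, which is the crux of the batching argument above.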
> > A sequencer might easily be receiving messages a lot more often than
> > mousemove events fire, and might even do more expensive things, and I
> > don't even know how one would start throttling MIDI events in JS (like
> > you can for mousemove) without incurring even more cycles.
>
> You couldn't, really. But that's of general concern; if you take so
> long processing MIDI events that you start growing a backlog, that's
> going to happen in any case. The real question is whether adding more
> event callbacks is going to significantly contribute to the problem or
> not.
>
> Note that on Windows, you're likely not going to be batching these
> anyway - short messages are delivered individually, one at a time, as
> messages to the message proc. On CoreMIDI, a packet has a single
> timestamp, but the packet list has multiple timestamps. Of course,
> realistically, you're not likely to see THAT much incoming data batched
> together.
>
> On USB-MIDI, same thing: you're getting an incoming stream of MIDI
> message bytes. You (the Web MIDI implementer) can batch messages, but
> there's no timestamping on the incoming data, so if you do this you're
> likely to batch it with a single timestamp.
>
> > Snip.
>
> I'm confused by this set of statements, starting with "upcoming events
> ahead of time". Do you intend future events to be pushed through
> somehow? How would you ever be able to get a lookahead? That would only
> seem possible with virtual ports (which we dropped as a v1 goal), or am
> I missing something?

Heheh, haven't you heard of my new AI algorithm that predicts any future
MIDI event from the first two? ;)

Jokes aside, I was referring to the UA having heuristics to determine
when the main thread is going to be blocked for a serious amount of time
(e.g. requestAnimationFrame). For example, there's a drawing operation
that triggers a reflow.
This can take a serious amount of time, and during that time MIDI
messages may have piled up; they can then be batched into a single JS
array and sent as one event. The memory footprint is smaller, and the
messages are more likely to be processed sooner rather than later.
(What if, while all the individual events are being scheduled to fire,
another task, such as another drawing callback, gets added to the event
queue? Then the processing of the MIDI messages will be delayed by the
drawing operation interleaved in between.)

Anyway, if you have cases where the current model would perform worse
than the one you suggested, please let me know. Actually, the model you
suggested was also on my mind when I stripped down the MIDIMessage
interface to just data and timestamp, but I thought the current model
gives more breathing room for potential optimizations.

This discussion actually reminds me... we should give some thought to
exposing the MIDIAccess interface to a Web Worker. Outside the main
thread, none of my concerns would be pressing enough to justify making
the behavior more complex.

-- 
You are receiving this mail because:
You are the QA Contact for the bug.
Received on Saturday, 17 November 2012 16:49:17 UTC