- From: James Ingram <j.ingram@netcologne.de>
- Date: Mon, 30 Nov 2015 11:52:35 +0100
- To: Chris Wilson <cwilso@google.com>
- Cc: "public-audio-dev@w3.org" <public-audio-dev@w3.org>
On 28.11.2015 at 15:07, Chris Wilson wrote:
> I'm not really sure what you mean by "standard" and "non-standard".
Sorry, I was trying too hard to be succinct. :-) Here's what I really
meant:
Synth controls are either defined in the MIDI standard (e.g. "set pan
position" [CC10/42]) or they are not (e.g. "set the waveform of
oscillator 2"), and the API for Web MIDI Synths needs to allow for both
categories.
Gree's SoundFont synthesizer only implements standard MIDI controllers,
including the changing of GM presets. But it by no means implements them
all.
Most of the controls that your (Chris') synth implements are not in
the MIDI standard.
> The prime need for this, imo, is to resolve what we need to expose as
> an API...
Host applications need to know which controls are going to react to
which messages, so I added a controls declaration to the synths' API.
I think that's a MUST. If the host sends a control message to a synth
that hasn't implemented it, then the synth should throw an exception.
Exactly *how* the declaration is formulated needs standardizing: Should
there be separate attributes for "controls" and "customControls"? How,
exactly, should the custom control attributes be named?
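To make that concrete, here's a minimal sketch of what such a
declaration might look like. The property names, the entry shapes and
the CC index 115 are all just my assumptions, not a settled API:

    // A possible shape for a synth's controls declaration (all names
    // and indices here are assumptions, pending the standardization
    // discussed above).
    var mySynth = {
        name: "mySynth",
        // controls defined in the MIDI standard that are implemented:
        controls: [
            { name: "set pan position", ccIndex: 10 }
        ],
        // non-standard controls, declared directly:
        customControls: [
            { name: "set oscillator 2 waveform", ccIndex: 115 }
        ],
        send: function(message, timestamp) {
            // should throw if the message addresses an undeclared control
        }
    };

A host could then check the declaration before sending anything,
instead of finding out at run-time.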
Custom Controls:
I'm rather sceptical of the standard MIDI controls API. It was designed
in the 1980s for hardware devices. We now have 30 years more experience
designing interfaces, and are talking about software. That's a different
ball game.
The standard includes the general "non-registered parameter" control
[CC99/98] that is supposed to allow for non-standard controls, but why
should software have to implement that (and everything it entails -- the
Data Button controls etc.) rather than just telling the host directly
which controls it has implemented? It would be much more work, at
programming-, load- and run-times, than just implementing the control,
declaring it and using it.
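For comparison, here's roughly what a host has to send to change a
single non-standard parameter via NRPNs, using the Web MIDI API's
output.send(). The parameter number and value are invented:

    // The NRPN route: four messages, with the synth tracking the
    // NRPN/Data Entry state between them. 0xB0 = control change on
    // channel 0; the parameter number (0x01/0x23) and value are invented.
    output.send([0xB0, 99, 0x01]); // CC99: NRPN MSB (parameter number)
    output.send([0xB0, 98, 0x23]); // CC98: NRPN LSB (parameter number)
    output.send([0xB0,  6, 0x40]); // CC6:  Data Entry MSB (the new value)
    output.send([0xB0, 38, 0x00]); // CC38: Data Entry LSB

    // Versus a synth that has simply declared a custom control on a
    // single CC index (115 as in the sketch above):
    output.send([0xB0, 115, 0x40]);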
A similar situation exists with setting pitch wheel deviation. I think
this was an oversight in the original standard, which (inefficiently)
requires the host to send a sequence of "registered parameter" controls.
Why should a software synth have to implement the "registered parameter"
control? In fact Gree decided not to, and I think they were right. They
just implemented a "set pitch wheel deviation" control. Much simpler for
everyone, and much more efficient.
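To spell that out: the standard route selects registered parameter 0,0
(pitch bend sensitivity) and then sends the value via Data Entry, while
a declared control needs just one message (the CC index 102 is
invented):

    // The standard's RPN route: four messages for a 2-semitone deviation.
    output.send([0xB0, 101, 0]); // CC101: RPN MSB
    output.send([0xB0, 100, 0]); // CC100: RPN LSB (0,0 = pitch bend sensitivity)
    output.send([0xB0,   6, 2]); // CC6:  Data Entry MSB (semitones)
    output.send([0xB0,  38, 0]); // CC38: Data Entry LSB (cents)

    // A declared "set pitch wheel deviation" control: one message.
    output.send([0xB0, 102, 2]);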
Hardware may be stuck with the 1980s standard, but software has to look
out for itself. As in JavaScript, I think we should use MIDI's good
parts, and deprecate the not-so-good parts. For 21st century software,
I'd start by deprecating all the controls that are unnecessary and/or
lead to inefficiency (e.g. "non-registered parameter") and anything that
has no precise meaning (e.g. "general purpose button 1"). Open to
discussion, of course! :-)
---
I've taken another look at the Web MIDI API issues you mention:
The important thing is to differentiate very clearly between browser and
Web Audio implementations of the Web MIDI Output Device API.
Issue #110 <https://github.com/WebAudio/web-midi-api/issues/110>
originally asks about hardware in the browser's implementation of the
Web MIDI API. The question may not be solvable there, but as it stands
in my API for Web Audio implementations, software synths that support GM
instruments declare a setSoundFont function, and those that don't, don't
(see the sketch after this paragraph).
Note that supporting GM instruments does not mean that the whole MIDI
standard is implemented. An interesting question remains: Should
software synths be removable? There could well be advantages in being
able to garbage collect a disused software synth, but can't that be done
just by setting the synth to null? What browsers do is off-topic for
this forum. { Sorry, I couldn't resist... :-)) }
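Coming back to the setSoundFont declaration, here's the kind of feature
detection I mean (synth and soundFont are placeholder variables):

    // Hosts can detect GM support by looking for the declared function.
    if (typeof synth.setSoundFont === "function") {
        synth.setSoundFont(soundFont); // GM presets are available
    } else {
        // a fixed-sound synth; just use its declared controls
    }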
(Apropos: This is supposed to be a forum for developers. Where are they
all? Maybe we should be talking on the other list. After all, we *are*
talking about *implementing* (part of) the Web MIDI API. I think I'm
going to dare a cross-posting... :-)
Issue #45 <https://github.com/WebAudio/web-midi-api/issues/45> doesn't
seem to be a problem for Web Audio synths. They are just ordinary Web
Audio applications that don't need any special ports. Looks to me as if
browsers could close this issue too...
Yes, it's very important for synths to run in Workers. If synths run in
their own threads, then hosts can create multiple instances and run them
in parallel. Hosts can then create banks of single-channel synths (like
yours) and do the channel handling themselves. They can also use
completely different synths simultaneously, if they want to.
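For example, a host could do the channel handling itself over a bank of
single-channel synth Workers, along these lines (the file name and the
message format are invented):

    // Hypothetical bank of 16 single-channel synths, one Worker each.
    var bank = [];
    for (var channel = 0; channel < 16; channel++) {
        bank[channel] = new Worker("monoSynth.js"); // file name invented
    }

    // Route each incoming MIDI message to the synth for its channel.
    function send(midiMessage) {
        var channel = midiMessage[0] & 0x0F; // low nibble = channel
        bank[channel].postMessage(midiMessage);
    }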
> As for Chrome's "decision" to ban the GS synth in Windows - that
> wasn't really a decision. It was crashing the browser process,
> without user intervention. I expect it will get re-enabled (issue
> #150 <https://github.com/WebAudio/web-midi-api/issues/150>) if the
> user wants it, but we can't let external code be run without the user
> being asked. That said, I expect a Service-Workered virtual synth is
> going to be the best pre-installed synth we can hope for.
That's really off-topic for this forum too. :-)) But to wrap that topic
up properly:
1. I have to confess publicly that I was the one who ran into the fatal
bug [1].
2. The Microsoft GS Synth is still working on Firefox+Jazz+WebMIDIAPIShim.
3. I have to say that I don't regret the Synth's passing too much. It
was great to have around a couple of years ago, but it had a fixed set
of sounds and awful latency problems. And there's no guarantee that MS
won't axe it themselves at some point. Solving the basic problem for all
operating systems is much more important than getting it back.
All the best,
James
[1] https://code.google.com/p/chromium/issues/detail?id=499279