
Re: Comments on draft MIDI API

From: Dominique Hazael-Massieux <dom@w3.org>
Date: Tue, 26 Jun 2012 16:51:44 +0200
Message-ID: <1340722304.3217.775.camel@altostratustier>
To: Jussi Kalliokoski <jussi.kalliokoski@gmail.com>
Cc: public-audio@w3.org
On Tuesday, 26 June 2012 at 17:36 +0300, Jussi Kalliokoski wrote:

>         * sequence<> shouldn't be used as attributes (but instead
>         arrays should
>         be used) in MIDIEvent
> 
> I'm not sure I understand the difference, I thought sequence will be
> implemented as an array? 

"Sequences are always passed by value" hence "Sequences MUST NOT be used
as the type of an attribute", cf
http://dev.w3.org/2006/webapi/WebIDL/#idl-sequence
 
>         * I'm surprised that the enumerate*/get* methods on MIDIAccess
>         seem to be synchronous; I would expect that these operations
>         would require enough time that they shouldn't block the main
>         thread; also, it doesn't look like the API allows dealing with
>         plugging/unplugging MIDI devices
>  
> I don't see how these operations could possibly block the main thread
> for long enough to warrant asynchronous behaviour; it's highly unusual
> for an operation like this to take even one millisecond.

OK; so that means at any point in time, the browser (and thus the OS)
always know which devices are (or can be) plugged?

> And the API does allow for (un)plugging. [1]

But only for devices that the app already knows about and keeps track of
— or am I missing something? To be more concrete, how would a Web app
know that the new MIDI keyboard I just bought just got plugged in?

>         * Uint8Array is not defined in WebIDL; I guess it comes from
>         WebGL, but
>         then that spec should probably be referred
> 
> The WebGL Typed Arrays spec is already referred to. [2] 

Oh, indeed, I had missed that. But both Typed Array and High Resolution
Time should appear in the normative references as well.

>         * MIDIMessage should probably be a dictionary rather than an
>         interface
> 
> Why & how? 

Hmm... Actually, I see that MIDIMessage appears as the type of an
attribute, so maybe that's not a good idea after all...

(why? because for a plain collection of properties, a dictionary is a
better fit; how? by replacing "[NoInterfaceObject] interface" with
"dictionary" and removing the "attribute" keyword from the definition)
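For illustration, the conversion would look like this (the member names are made up, not copied from the draft):

```webidl
// Before (illustrative members):
[NoInterfaceObject]
interface MIDIMessage {
  attribute DOMHighResTimeStamp timestamp;
  attribute Uint8Array data;
};

// After — a dictionary; the "attribute" keyword goes away:
dictionary MIDIMessage {
  DOMHighResTimeStamp timestamp;
  Uint8Array data;
};
```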

>         At a higher level, it seems like this API has a very strong
>         fingerprinting risk (due to enumeration of devices, esp. as
>         they
>         themselves have a fingerprint property).
> 
> Indeed, hence the security model of asking user permission via
> getMIDIAccess.

But that seems a bit weak: just because I'm ready to share one MIDI
device with a Web app doesn't mean I'm ready for that app to know about
all the devices available on my computer. Or is the intent that the list
of enumerable devices be filtered by the user as part of the permission
request?

>         Also, I idly wonder if getting input from MIDI devices
>         shouldn't be done
>         via getUserMedia rather than through its own API; I'm not
>         really sure
>         how that would work, but I thought I would still share the
>         question.
> 
> This has been discussed a bit already, and the original security model
> I proposed used getUserMedia, but the consensus (for now) is that MIDI
> doesn't make that much sense as streams (by default). Instead there
> are plans to integrate more with the MediaStreams API in a later
> version, for example giving all MIDI input devices a .stream
> attribute, but for now there isn't much added value. We'll revisit
> this when the MediaStreams API is more widely adopted and we can
> evaluate benefits such as P2P MIDI communication, cross-context
> transfer of MIDI events, etc.

I guess there are two aspects: dealing with MediaStreams (which, I can
see, doesn't bring a lot of value for MIDI devices) is the one you're
responding to; the other is having a separate API call (getMIDIAccess)
for something that, at least in theory, is part of the semantics of
getUserMedia. The first aspect may well settle the second for sure, but
it might be useful to bring this up for discussion with the Media
Capture Task Force?

(I guess one reason might be that getUserMedia should really be
getUserMediaStreams :)
 

Dom

>         
>         1.
>         https://dvcs.w3.org/hg/audio/raw-file/tip/midi/specification.html#event-midiinputdevice-midimessage
>         
> 
> Cheers,
> Jussi 
> 
> 
> [1]
> https://dvcs.w3.org/hg/audio/raw-file/tip/midi/specification.html#idl-def-MIDIDevice
> [2]
> https://dvcs.w3.org/hg/audio/raw-file/tip/midi/specification.html#terminology 
> 
> 
Received on Tuesday, 26 June 2012 14:52:04 GMT
