- From: John Lin <jolin@mozilla.com>
- Date: Thu, 28 Aug 2014 11:10:14 +0800
- To: robert@ocallahan.org
- Cc: "public-media-capture@w3.org" <public-media-capture@w3.org>
- Message-Id: <F973FC08-7B3E-4A77-9BC3-0EA6468D19EB@mozilla.com>
Robert O'Callahan <robert@ocallahan.org> wrote on 2014/8/28 10:10 AM:

> On Thu, Aug 28, 2014 at 1:49 PM, Robert O'Callahan <robert@ocallahan.org> wrote:
>
>> With the above "definition", ChannelSplitterNode is not a problem; the 0th output (the first channel) of audioNode gets recorded and the others are ignored. We'd just need to say that in the spec.
>
> Let me clarify that:
>
> "new MediaRecorder(audioNode)" records the first output of audioNode (where "output" is defined in the Web Audio spec).
>
> For those unfamiliar with Web Audio: currently only ChannelSplitterNode has more than one output. Each output can have any number of channels.
>
> BTW AudioDestinationNode has 0 outputs, so I think if someone creates a MediaRecorder for that node we should just throw an exception. Alternatively, we could change the Web Audio spec so it actually has an output (the mix of its inputs). I actually prefer the latter since it's more DWIM; I'll post to public-audio.

Is it necessary to give AudioDestinationNode an output? Couldn't we just say that MediaRecorder records the mix of inputs of the destination node for AudioContext, and OfflineAudioCompletionEvent.renderedBuffer for OfflineAudioContext?

> Rob
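For readers following the thread, here is a minimal sketch of what the usage under discussion could look like. It assumes a MediaRecorder constructor that accepts an AudioNode directly, as proposed above; the MediaRecorder constructor that was eventually standardized takes a MediaStream, so this exact form may not run in current browsers.

```js
// Sketch only: assumes a MediaRecorder constructor that accepts an AudioNode,
// as proposed in this thread.
const ctx = new AudioContext();
const osc = ctx.createOscillator();
const splitter = ctx.createChannelSplitter(2);

osc.connect(splitter);
osc.start();

// Under the proposed definition, recording the splitter captures only its
// 0th output (the first channel); its other outputs are ignored.
const recorder = new MediaRecorder(splitter);
recorder.ondataavailable = (event) => {
  // event.data is a Blob of encoded audio.
};
recorder.start();

// AudioDestinationNode currently has zero outputs, so under the first option
// above "new MediaRecorder(ctx.destination)" would simply throw; the second
// option would instead give the destination an output carrying the mix of
// its inputs, which could then be recorded the same way.
```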
Received on Thursday, 28 August 2014 03:10:41 UTC