
RE: approaches to recording

From: Mandyam, Giridhar <mandyam@quicinc.com>
Date: Wed, 10 Oct 2012 23:23:29 +0000
To: "robert@ocallahan.org" <robert@ocallahan.org>, Jim Barnett <Jim.Barnett@genesyslab.com>
CC: "public-media-capture@w3.org" <public-media-capture@w3.org>
Message-ID: <CAC8DBE4E9704C41BCB290C2F3CC921A162FBD00@nasanexd01h.na.qualcomm.com>

From: rocallahan@gmail.com [mailto:rocallahan@gmail.com] On Behalf Of Robert O'Callahan
Sent: Wednesday, October 10, 2012 2:51 PM
To: Jim Barnett
Cc: public-media-capture@w3.org
Subject: Re: approaches to recording

On Thu, Oct 11, 2012 at 4:28 AM, Jim Barnett <Jim.Barnett@genesyslab.com> wrote:
The upshot of yesterday’s discussion is that there is interest in two different approaches to recording, so I’d like to start a discussion of them. If we can reach consensus on one of them, we can start to write things up in more detail.

There could also be a configuration item specifying whether multiple audio tracks were to be merged or recorded separately. Most of these options should probably be provided as a dictionary at construction time, since we would not want them changed while recording was going on.
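A minimal sketch of what such a construction-time dictionary might look like. All of the option names here are hypothetical (no recording API had been settled at the time of this thread); the point is only that the options are fixed when the recorder is created and cannot change mid-recording.

```javascript
// Hypothetical recording options, fixed at construction time.
// None of these names are from a settled spec; they are illustrative only.
const recorderOptions = {
  mimeType: "audio/webm",   // desired container/codec for the recording
  mergeAudioTracks: true,   // mix all audio tracks vs. record them separately
  timeSliceMs: 1000         // how often recorded data is delivered, if sliced
};

// Sketch of a recorder that snapshots and freezes its options, so they
// cannot be mutated while recording is in progress.
function createRecorder(stream, options) {
  const frozen = Object.freeze({ ...options });
  return { stream, options: frozen };
}
```

Freezing a copy of the dictionary is one simple way to get the "no changes while recording" behaviour the paragraph above asks for.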

> We might want to do this in a more general way, such as a MediaStream constructor that mixes together the audio tracks of an incoming MediaStream, because this functionality would be useful for other MediaStream consumers.
Why can’t this be done via WebAudio?  I thought there was already a decision (at least by the WebRTC WG) to leverage WebAudio for mixing (see http://www.w3.org/2011/04/webrtc/wiki/Santa_Clara_F2F_Summary#Audio_WG).
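Both proposals reduce to routing audio through a mixing point. A browser-only sketch of the WebAudio route being suggested here, using the Web Audio API nodes that existed at the time (`createMediaStreamSource`, `createMediaStreamDestination`); this is an illustration of the approach, not a normative recipe:

```javascript
// Browser-only sketch: produce a MediaStream whose audio is the mix-down
// of an incoming stream's audio, via the Web Audio graph instead of a
// dedicated mixing MediaStream constructor.
function mixToSingleTrack(inputStream) {
  const ctx = new AudioContext();
  // A MediaStreamAudioSourceNode feeds the stream's audio into the graph;
  // everything connected to the same destination node is summed (mixed).
  const source = ctx.createMediaStreamSource(inputStream);
  const dest = ctx.createMediaStreamDestination();
  source.connect(dest);
  return dest.stream; // MediaStream carrying the mixed audio
}
```

To mix several separate streams, one would create one source node per stream and connect all of them to the same destination node; the graph's summing behaviour does the merge.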
Received on Wednesday, 10 October 2012 23:24:10 UTC

This archive was generated by hypermail 2.4.0 : Friday, 17 January 2020 16:26:12 UTC