Re: Thoughts and questions on the API from a modular synth point of view

On Fri, Aug 3, 2012 at 9:57 PM, Robert O'Callahan <robert@ocallahan.org> wrote:

> 2) When a getUserMedia stream feeds into a paused node, drop data. For
>>> authors who actually want DVR-like buffering we probably should invent an
>>> entirely new kind of MediaStream that specifically does that.
>>>
>>
>> How does it drop data, exactly?  Because that's probably going to sound
>> like a glitch.
>>
>
> Can you describe an example you're concerned about in more detail?
>

Well, the gUM stream may then have a very hard-edged transition in it -
because it picks up mid-stream - while the other data will not.  Since they
may all be feeding through the same processing node graph, the resulting
data may be confusing (e.g. a gUM stream and a bufferSource both feeding
into a convolution node that gets paused - what's the convolver's state
when it's resumed?).
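
Concretely, the kind of graph I mean is something like this (a rough
sketch only - webkit prefixes, 2012 method names, createMediaStreamSource
per the MediaStream integration proposal; impulseResponse and drumLoop
are assumed to be already-decoded AudioBuffers):

    var ctx = new webkitAudioContext();
    var convolver = ctx.createConvolver();
    convolver.buffer = impulseResponse;   // assume an IR AudioBuffer already decoded

    // Live input - on resume it picks up mid-stream, hard edge and all
    navigator.webkitGetUserMedia({ audio: true }, function (stream) {
      var mic = ctx.createMediaStreamSource(stream);
      mic.connect(convolver);
    }, function (err) {});

    // Buffered source - has a well-defined position to resume from
    var source = ctx.createBufferSource();
    source.buffer = drumLoop;             // assume an AudioBuffer already decoded
    source.connect(convolver);
    source.noteOn(0);                     // 2012-era name for start(0)

    convolver.connect(ctx.destination);

    // If this subgraph is paused and later resumed, the bufferSource
    // resumes where it left off, but the mic stream has moved on -
    // what is the convolver's internal state supposed to be?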

>> My intent was not to say "this is impossible" - just "this makes a lot of
>> things complex, just to make a single scenario very easy - and it's not
>> impossible now."
>>
>
> I don't think there needs to be much API complexity. With the defaults I
> described, I think an additional currentTime attribute on AudioNodes and a
> "paused" boolean attribute on AudioNodes would handle a lot of use-cases.
> The underlying model's a bit more complex but for authors who don't use
> pausing, there is no additional complexity.
>

I wasn't talking about API complexity - I was talking about implementation
complexity.
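
To be concrete about that distinction: the author-facing side of the
proposal would indeed be tiny - something like the hypothetical sketch
below (neither .paused nor a per-node .currentTime exists in the spec;
this is just roc's proposal spelled out) - it's the machinery underneath
that concerns me.

    var delay = ctx.createDelayNode();   // 2012-era name for createDelay()
    source.connect(delay);
    delay.connect(ctx.destination);

    // Hypothetical attributes from roc's proposal - not in the spec:
    delay.paused = true;            // freeze the node; incoming gUM data gets dropped
    console.log(delay.currentTime); // per-node clock, stops advancing while paused
    delay.paused = false;           // resume with the node's internal state intact

    // Simple for authors - but the implementation has to buffer or drop
    // data at every paused boundary in the graph.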


> Note that some things are impossible now. For example, right now you can't
> have a source with an echo effect applied to it, pause the source+echo, and
> later resume the source+echo in such a way that the echo is paused
> concurrently with the source.


Not true; you can always insert a JSNode and record (buffer) the audio.
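
For example, something along these lines (a rough mono sketch only - real
code would also need to pause the source itself, cap the buffer, and
handle resume glitches):

    // Tap the output of the echo chain with a JSNode; buffer the audio
    // (including the echo tail) while "paused", play it back on resume.
    var buffered = [];   // queued Float32Arrays captured while paused
    var paused = false;

    var tap = ctx.createJavaScriptNode(4096, 1, 1);
    tap.onaudioprocess = function (e) {
      var input  = e.inputBuffer.getChannelData(0);
      var output = e.outputBuffer.getChannelData(0);
      if (paused) {
        // Keep recording the incoming echo tail, but emit silence.
        buffered.push(new Float32Array(input));
        for (var i = 0; i < output.length; i++) output[i] = 0;
      } else if (buffered.length) {
        // Resumed: drain the backlog first, staying behind realtime.
        output.set(buffered.shift());
        buffered.push(new Float32Array(input));
      } else {
        output.set(input);  // normal passthrough
      }
    };

    source.connect(echo);   // echo: e.g. a DelayNode feedback loop
    echo.connect(tap);
    tap.connect(ctx.destination);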

-C

Received on Monday, 6 August 2012 20:29:06 UTC