Re: Proposal for fixing race conditions

> Your hypothetical test case merely demonstrates the difference; my point is that it is silly to optimize for imaginary edge cases at the cost of real-world use cases where developers will get unexpected results due to leaving race conditions in this API. I should also note that it has come up in past discussions that we could always introduce new no-copy APIs that don't contain races, if the cost of memcpy is so severe.

It is not hard to imagine an audio editor that plays an audio file from a specific sample onwards by assigning the buffer to an AudioBufferSourceNode and calling start(t, offset, duration) ... possibly followed by effects. Large files (even ~5 minutes?) would be unusable in such an editor if every assignment involved a copy, and developers would be forced into crazy optimizations just to get it to work. Now shift that situation to an iPad with limited memory and it gets worse. DAWs are a real use case for this API.

With Jer's example code (quoted below), it would be possible to simulate such a (reasonable) case; a rough sketch of what that might look like follows.
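To make the comparison concrete, here is a sketch of the editor-style playback described above, with an explicit per-channel copy standing in for the copy a copying implementation would make. The function names and the copy helper are mine, purely for illustration:

    // Play `audioBuffer` from `offset` seconds onwards, as an editor would.
    function playFrom(context, audioBuffer, offset, duration) {
      var source = context.createBufferSource();
      source.buffer = audioBuffer;          // the no-copy assignment we have today
      source.connect(context.destination);
      source.start(context.currentTime, offset, duration);
      return source;
    }

    // Simulate the cost of a copying implementation by duplicating the
    // channel data before assignment (an AudioBuffer has no slice(), so
    // each channel is copied explicitly).
    function copyAudioBuffer(context, audioBuffer) {
      var copy = context.createBuffer(audioBuffer.numberOfChannels,
                                      audioBuffer.length,
                                      audioBuffer.sampleRate);
      for (var ch = 0; ch < audioBuffer.numberOfChannels; ch++) {
        copy.getChannelData(ch).set(audioBuffer.getChannelData(ch));
      }
      return copy;
    }

    // e.g. compare playFrom(ctx, buf, 30) against
    // playFrom(ctx, copyAudioBuffer(ctx, buf), 30) on a ~5 minute file.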

What might be acceptable, I think, is a one-time copy, provided that copy can then be reused without additional cost. As far as I can see, immutable data structures are the best candidates for eliminating the race conditions.
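As a rough illustration only (the caching helper and its names are hypothetical, not a spec proposal), a one-time copy that is then treated as immutable and reused might look like this, building on the copy helper sketched above:

    // Hypothetical sketch: pay the copy cost once per decoded buffer, then
    // treat the copy as immutable and share it across many source nodes.
    var bufferCache = [];   // entries: { original: AudioBuffer, copy: AudioBuffer }

    function getShareableBuffer(context, audioBuffer) {
      for (var i = 0; i < bufferCache.length; i++) {
        if (bufferCache[i].original === audioBuffer)
          return bufferCache[i].copy;             // reuse; no further copies
      }
      var copy = copyAudioBuffer(context, audioBuffer);  // from the sketch above
      bufferCache.push({ original: audioBuffer, copy: copy });
      return copy;
    }

    // Every playback then reuses the same (conceptually immutable) copy:
    //   source.buffer = getShareableBuffer(ctx, buf);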

That said, I do find the argument (Rogers', I think) that the worst these race conditions can produce is unexpected audio output, and that they are therefore not very important, an interesting stance.

-Kumar

On 17 Jul, 2013, at 7:13 AM, "K. Gadd" <kg@luminance.org> wrote:

> Of course you can claim hypothetical performance benefits for any particular optimization; my point is that in this case we're considering whether or not to leave *race conditions* in a new Web API because we think it might make it faster. We *think* it *might*. Making that sort of sacrifice in favor of 'performance' without doing any reproducible, remotely scientific testing to see whether it's actually faster, let alone fast enough to justify the consequences, seems rash to me.
> 
> It should be quite easy to test the performance benefits of the racy version of the API, as based on my understanding the Firefox implementation currently makes copies. You need only run your test cases in Firefox with SPS and see how much time is spent making calls to memcpy to get a rough picture of the actual overhead. And once you know that, you can look at how your test cases actually perform and see if the cost of that memcpy makes it impossible to ship an implementation that makes those copies.
> 
> I am literally unable to imagine a use case where the cost of the copies would add up to the point where it would remotely be considered a bottleneck. It is the case that the copies probably have to be synchronous, so I could see this hurting the ability to trigger tons and tons of sounds in a single 'frame' from JS, or set tons and tons of curves, etc. But still, memcpy isn't that slow, especially for small numbers of bytes.
> 
> Your hypothetical test case merely demonstrates the difference; my point is that it is silly to optimize for imaginary edge cases at the cost of real-world use cases where developers will get unexpected results due to leaving race conditions in this API. I should also note that it has come up in past discussions that we could always introduce new no-copy APIs that don't contain races, if the cost of memcpy is so severe.
> 
> 
> On Tue, Jul 16, 2013 at 6:27 PM, Jer Noble <jer.noble@apple.com> wrote:
> 
> On Jul 16, 2013, at 1:18 PM, K. Gadd <kg@luminance.org> wrote:
> 
>> This claim has been made dozens of times now on the list and I've seen multiple requests for even a single test case that demonstrates the performance impact. Is there one? I haven't seen one, nor a comment to the effect that one exists, or an explanation of why there isn't one.
> 
> Isn't this self-evident?  Any solution which involves additional memcpy calls during the normal use of the API will have an inherent and known performance cost at the point of the memcpy.  Additionally, there is the ongoing performance cost of holding duplicate, in-memory copies of audio data, as well as the additional GC cost of those extra copies.
> 
> That said, it would be very easy to demonstrate: in the hypothetical test case, create a new ArrayBuffer from source data before passing it into the API.  I.e.,
> 
>> sourceNode.buffer = buffer
> 
> becomes:
> 
>> sourceNode.buffer = buffer.slice(0)
> 
> -Jer
> 

Received on Wednesday, 17 July 2013 02:25:12 UTC