- From: Noah Mendelsohn <nrm@arcanedomain.com>
- Date: Thu, 08 Aug 2013 08:12:37 -0400
- To: Srikumar Karaikudi Subramanian <srikumarks@gmail.com>
- CC: robert@ocallahan.org, Jer Noble <jer.noble@apple.com>, "K. Gadd" <kg@luminance.org>, Chris Wilson <cwilso@google.com>, Marcus Geelnard <mage@opera.com>, Alex Russell <slightlyoff@google.com>, Anne van Kesteren <annevk@annevk.nl>, Olivier Thereaux <Olivier.Thereaux@bbc.co.uk>, "public-audio@w3.org" <public-audio@w3.org>, "www-tag@w3.org List" <www-tag@w3.org>
On 8/8/2013 12:04 AM, Srikumar Karaikudi Subramanian wrote:
> You're right that I'm not measuring what you suggested we measure. I read your above paragraph to mean that you intended to measure something close to the *upper* bound on copying performance.

It's been suggested elsewhere that we not pursue this too much on this thread, but yes, exactly: I'm proposing that one measure the bound on copying performance, because that directly gives you the minimum overhead the copies will introduce. If that overhead is unacceptable, you can stop right there. If context switching is causing problems, then you'll want to process data in larger chunks per switch, regardless of whether the copying itself is acceptable.

Anyway, I suggest we put this down. I had raised the issue, but it's been claimed elsewhere that for the audio case in particular the analysis has already been done sufficiently well. Assuming that's true, I don't want to tie up this thread.

Noah
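[Editor's sketch, not part of the original message: the measurement Noah describes can be approximated with a microbenchmark like the one below. The block size (128 frames, stereo, float32) and 44.1 kHz rate are assumptions chosen to match typical Web Audio callback sizes; the thread itself does not specify them.]

```python
import time

# Assumed parameters: 128-frame blocks, 2 channels, 4-byte float samples.
FRAMES, CHANNELS, BYTES_PER_SAMPLE = 128, 2, 4
block = bytearray(FRAMES * CHANNELS * BYTES_PER_SAMPLE)  # 1024 bytes per block

def seconds_per_copy(iterations=100_000):
    """Time full-block copies; the average is a rough upper bound on
    the per-callback overhead that one extra copy would introduce."""
    start = time.perf_counter()
    for _ in range(iterations):
        dst = bytes(block)  # one complete copy of the audio block
    return (time.perf_counter() - start) / iterations

per_copy = seconds_per_copy()
# Real-time budget: a 128-frame block at 44.1 kHz represents ~2.9 ms of audio.
budget = FRAMES / 44100
print(f"copy: {per_copy * 1e6:.2f} us/block, budget: {budget * 1e3:.2f} ms/block")
```

If the measured per-copy cost is a tiny fraction of the real-time budget, copying is unlikely to be the bottleneck; if it is not, that is the "stop right there" signal the message refers to.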
Received on Thursday, 8 August 2013 12:12:59 UTC