Re: [streams] rsReader.readBatch(numberOfChunks)? rbsReader.read(view, { waitUntilDone: true })? (#320)

> Based on the benchmark, reading these 32 message objects out of MessageParserTransform will take over 7ms on a mobile device. This is 7ms of time added on top of the I/O.

>  In this case the async scheduling required by read() is adding unavoidable latency.

This is a good scenario, and I appreciate you taking the time to outline it. However, I think we need to significantly improve our benchmarking techniques before we can make this sort of claim. For example, we need to test a ready-plus-read() scenario instead of a simple sync read(). I'd also want to see something more realistic, like the scenario you outline, where we do I/O for a message and then split it into several objects. And we need to eliminate the scheduler artifacts.
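For concreteness, here is a rough sketch of the kind of ready-plus-read() drain loop I mean, against a mock reader. The `ready`/`done`/`read()` shape is illustrative of the design under discussion, not the shipped API, and the mock buffers everything up front:

```javascript
// Mock reader: `ready` resolves when chunks are buffered; read() is synchronous.
// This models the hypothetical ready-plus-read() design, not the current spec.
function makeMockReader(chunks) {
  let i = 0;
  return {
    get ready() { return Promise.resolve(); }, // pretend data is always buffered
    get done() { return i >= chunks.length; },
    read() { return chunks[i++]; }             // synchronous read, no promise per chunk
  };
}

// Drain loop: await `ready` once per buffered run, then read() synchronously,
// paying one async hop per batch instead of one per chunk.
async function drain(reader) {
  const out = [];
  while (!reader.done) {
    await reader.ready;
    while (!reader.done) out.push(reader.read());
  }
  return out;
}

drain(makeMockReader([1, 2, 3])).then(r => console.log(r.join(',')));
```

A benchmark built on this shape would measure one microtask turn per batch of 32 message objects, rather than 32 turns, which is exactly the difference the latency claim hinges on.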

All this talk about GC reminds me that when I ask the VM people about this sort of thing, they say "generational GC will solve this." Apparently promises currently always outlive the young generation, whereas they probably should not if they're immediately `.then`ed or if the result of `.then` is unused. But who knows when that will get fixed...

---
Reply to this email directly or view it on GitHub:
https://github.com/whatwg/streams/issues/320#issuecomment-91602706

Received on Friday, 10 April 2015 16:06:43 UTC