[whatwg/streams] Backpressure from tee-ing and slow/pending consumer (#506)

This is likely related to #401, but I don't want to ping everyone in that thread if it turns out to be unrelated, so here is a separate issue specifically for my problem:

Does the current definition assume an infinite (or at least very large) internal buffer when a stream is teed? i.e. **If a user decides to read just one of the branches to completion, and only then reads the remaining branch, should they expect to run into backpressure for significantly large data sizes?** Or should the implementation handle this internally as much as possible?
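To make the scenario concrete, here is roughly what I mean (just a sketch; the URL and the lack of error handling are placeholders):

```js
// Tee a fetch body, drain one branch to completion, and only then start
// reading the other branch.
async function readBranchesSequentially() {
  const response = await fetch('https://example.com/large-file');
  const [branchA, branchB] = response.body.tee();

  // Drain branch A first. Every chunk pulled here must also be queued for
  // branch B, because branch B has not been read from yet.
  const readerA = branchA.getReader();
  while (true) {
    const { done } = await readerA.read();
    if (done) break;
  }

  // Only now read branch B, which has been accumulating chunks all along.
  // The question is whether that queue is effectively unbounded, and whether
  // the underlying source ever sees backpressure while branch B lags behind.
  const readerB = branchB.getReader();
  while (true) {
    const { done, value } = await readerB.read();
    if (done) break;
    // ...consume value...
  }
}
```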

I ask this because Node.js streams' internal buffer size seems to be smaller than Chrome's, which means that if the user is using an isomorphic `fetch()`, they might run into backpressure on the server side but not on the client side.

Maybe there should be a recommended buffer size, or an algorithm for figuring out an appropriate one? Apologies if I'm missing something obvious.
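For reference, the main place a buffer size is expressed explicitly today is the queuing strategy's `highWaterMark` when constructing a stream by hand; I could imagine a recommendation for tee's internal queue taking a similar shape (the 64 KiB figure and the dummy source below are arbitrary, purely for illustration):

```js
// Purely illustrative, not something from the spec.
let chunksLeft = 10;
const stream = new ReadableStream(
  {
    pull(controller) {
      // Dummy underlying source: emit a few 1 KiB chunks, then close.
      if (chunksLeft-- > 0) {
        controller.enqueue(new Uint8Array(1024));
      } else {
        controller.close();
      }
    }
  },
  new ByteLengthQueuingStrategy({ highWaterMark: 64 * 1024 })
);
```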
