Re: Overlap between StreamReader and FileReader

On Thu, Aug 8, 2013 at 6:42 AM, Domenic Denicola
<domenic@domenicdenicola.com> wrote:
> From: Takeshi Yoshino [mailto:tyoshino@google.com]
>
>> On Thu, Aug 1, 2013 at 12:54 AM, Domenic Denicola <domenic@domenicdenicola.com> wrote:
>>> Hey all, I was directed here by Anne helpfully posting to public-script-coord and es-discuss. I would love a summary of what proposal is currently under discussion: is it [1]? Or maybe some form of [2]?
>>>
>>> [1]: https://rawgithub.com/tyoshino/stream/master/streams.html
>>> [2]: http://lists.w3.org/Archives/Public/public-webapps/2013AprJun/0727.html
>>
>> I'm drafting [1] based on [2], summarizing comments on this list, in order to build up a concrete algorithm and get consensus on it.
>
> Great! Can you explain why this needs to return an AbortableProgressPromise, instead of simply a Promise? All existing stream APIs (as prototyped in Node.js and in other environments, such as js-git's multi-platform implementation) do not signal progress or allow aborting at the mid-chunk level; instead they expect you to track progress yourself based on what you've seen come in so far, and to abort on your own between chunks. This allows better pipelining and backpressure down to the network and file-descriptor layer, from what I understand.

Can you explain what you mean by "This allows better pipelining and
backpressure down to the network and file descriptor layer"?
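For concreteness, my understanding of the model you describe — a plain per-chunk Promise, with the caller tracking progress and aborting only between chunks — is something like the sketch below. (reader.read() and reader.abort() are purely illustrative names, not from any draft.)

```javascript
// Sketch only: "reader" is a hypothetical object whose read() returns a
// plain Promise for the next chunk (null at end of stream), and whose
// abort() cancels the underlying source. Progress lives in the caller.
async function consume(reader, maxBytes) {
  let bytesRead = 0;                    // progress tracked by the caller
  for (;;) {
    const chunk = await reader.read();  // plain Promise, no progress events
    if (chunk === null) break;          // end of stream
    bytesRead += chunk.length;
    if (bytesRead > maxBytes) {         // abort *between* chunks only
      reader.abort();
      break;
    }
  }
  return bytesRead;
}
```

If that matches what you mean, then nothing here needs progress or abort on the returned Promise itself.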

I definitely agree that we don't want to impose too much performance
overhead. But it's not obvious to me how performance is affected by
putting progress and/or aborting functionality on the returned Promise
instance rather than on a separate object (which you suggested in
another thread).
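To make the comparison concrete, here are the two shapes side by side. Both are hypothetical and the names are illustrative, not from any draft: (a) puts progressed()/abort() on the returned promise itself, while (b) keeps the promise plain and hangs abort() on a separate controller object.

```javascript
// (a) AbortableProgressPromise-style (pseudocode, shown as comments):
//   const p = reader.read();
//   p.progressed(bytesSoFar => updateUI(bytesSoFar));  // per-read progress
//   p.abort();                                         // cancel mid-read
//
// (b) Plain Promise plus a separate controller object — a minimal sketch:
function abortableRead(start) {
  let cancelled = false;
  const promise = new Promise((resolve, reject) => {
    // start() is a stand-in for kicking off the underlying async read
    start(
      value => { if (!cancelled) resolve(value); },
      err   => { if (!cancelled) reject(err); },
    );
  });
  const controller = {
    abort() { cancelled = true; },  // the promise then simply never settles
  };
  return { promise, controller };
}
```

With (b) the returned value stays a plain, composable Promise; the cost is one extra object per read (or per stream, if the controller is shared). Whether either shape matters for performance is exactly the question above.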

We should absolutely learn from Node.js and other environments. Do you
have any pointers to discussions about why they didn't end up with
progress in their "read a chunk" API?

/ Jonas

Received on Thursday, 8 August 2013 21:57:05 UTC