Re: Overlap between StreamReader and FileReader

On Mon, Jul 1, 2013 at 9:03 AM, Takeshi Yoshino <tyoshino@google.com> wrote:
> Moved to github.
> https://github.com/tyoshino/stream/blob/master/streams.html
> http://htmlpreview.github.io/?https://github.com/tyoshino/stream/blob/master/streams.html
>
>> Why would it be neutered if size is not given?
>
> When size is not given, we need to mark it "fully read" by using something
> else. I changed to use read position == -1.

I'm not sure I follow. Isn't the maxSize argument optional so that you
can read all the data queued up thus far? It seems that should just
work and not prevent data queued later from being read from the
stream. (Later on in the algorithm this seems to be acknowledged, but
at that point the stream has already been neutered.)
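To make the semantics I have in mind concrete, here is a rough
TypeScript sketch. The ByteStream name and the write() method are just
shorthand for illustration, not the draft's actual API: read() with
maxSize omitted drains whatever is queued at that point, and the
stream stays usable for data queued afterwards.

  class ByteStream {
    private queue: number[] = [];   // conceptual FIFO byte buffer

    write(chunk: Uint8Array): void {
      this.queue.push(...chunk);    // append at the tail
    }

    // maxSize omitted means "everything queued right now".
    read(maxSize?: number): ArrayBuffer {
      const n = maxSize === undefined
        ? this.queue.length
        : Math.min(maxSize, this.queue.length);
      const bytes = this.queue.splice(0, n);  // take from the head (FIFO)
      const buf = new ArrayBuffer(bytes.length);
      new Uint8Array(buf).set(bytes);
      return buf;                   // note: the stream is NOT neutered
    }
  }

  const s = new ByteStream();
  s.write(new Uint8Array([1, 2, 3]));
  s.read();                         // ArrayBuffer containing 1, 2, 3
  s.write(new Uint8Array([4, 5]));  // data queued later...
  s.read();                         // ...can still be read: 4, 5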


>> I think you need to define the stream buffer somewhat more explicitly
>> so that only what you decide to read from the buffer ends up in the
>> ArrayBuffer and newly queued data while that is happening is not.
>
> Do you want the FIFO model to be emphasized?

It doesn't need emphasis, it just needs to be clear.


>> Probably defining Stream conceptually and defining read() (I don't
>> think we should call it readAsArrayBuffer) in terms of those concepts
>
> You mean that something similar to XHR's responseType is preferred?

Do we even need that? It seems just passing ArrayBuffer in and out
could be sufficient for now?
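For what it's worth, I am picturing something as simple as this (a
hypothetical sketch only; the ByteSource name and the promise-returning
read() are my own shorthand, not the draft's API):

  interface ByteSource {
    write(data: ArrayBuffer): void;               // bytes in
    read(maxSize?: number): Promise<ArrayBuffer>; // bytes out
  }

Consumers that want text, a Blob, etc. can convert the bytes
themselves, rather than the stream growing an XHR-style responseType
knob.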


What's "pending read resolvers"?


--
http://annevankesteren.nl/

Received on Monday, 1 July 2013 14:32:01 UTC