- From: Isaac Schlueter <i@izs.me>
- Date: Fri, 9 Aug 2013 12:47:36 -0700
- To: Jonas Sicking <jonas@sicking.cc>
- Cc: Austin William Wright <aaa@bzfx.net>, Domenic Denicola <domenic@domenicdenicola.com>, Takeshi Yoshino <tyoshino@google.com>, "public-webapps@w3.org" <public-webapps@w3.org>
Jonas,

What does *progress* mean here?

So, you do something like this:

    var p = stream.read()

to get a promise (of some sort). That read() operation is (if we're
talking about TCP or FS) a single operation. There's no "50% of the way
done reading" moment that you'd care to tap into.

Even from a conceptual point of view, the data is either:

a) available (and the promise is now fulfilled),
b) not yet available (and the promise is not yet fulfilled), or
c) known to *never* be available, because:
   i) we've reached the end of the stream (and the promise is fulfilled
      with some sort of EOF sentinel), or
   ii) an error happened (and the promise is broken).

So... where's the "progress"?

A single read() operation seems like it ought to be atomic to me, and
indeed, the read(2) function either returns some data (a), returns no
data (c-i), raises EWOULDBLOCK (b), or raises some other error (c-ii).
But whichever of those it does, it does right away. We only get woken
up again (via epoll/kqueue/IOCP/etc.) once we know that the file
descriptor (or HANDLE on Windows) is readable again (and thus, it's
worthwhile to attempt another read(2) operation).

Now, it *might* make sense to say that the entire Stream as a whole is
a ProgressPromise of sorts. But since you often don't know the eventual
size of the data ahead of time (and indeed, it will often be
unbounded), "progress" is an odd concept in this context.

Are you proposing that every step in the TCP dance is somehow exposed
on the promise returned by read()? That seems rather inconvenient and
unnecessary, not to mention difficult to implement, since the TCP stack
typically lives in kernel space.

On Fri, Aug 9, 2013 at 11:45 AM, Jonas Sicking <jonas@sicking.cc> wrote:
> On Thu, Aug 8, 2013 at 7:40 PM, Austin William Wright <aaa@bzfx.net> wrote:
>> On Thu, Aug 8, 2013 at 2:56 PM, Jonas Sicking <jonas@sicking.cc> wrote:
>>>
>>> On Thu, Aug 8, 2013 at 6:42 AM, Domenic Denicola
>>> <domenic@domenicdenicola.com> wrote:
>>> > From: Takeshi Yoshino [mailto:tyoshino@google.com]
>>> >
>>> >> On Thu, Aug 1, 2013 at 12:54 AM, Domenic Denicola
>>> >> <domenic@domenicdenicola.com> wrote:
>>> >>> Hey all, I was directed here by Anne helpfully posting to
>>> >>> public-script-coord and es-discuss. I would love a summary of which
>>> >>> proposal is currently under discussion: is it [1]? Or maybe some
>>> >>> form of [2]?
>>> >>>
>>> >>> [1]: https://rawgithub.com/tyoshino/stream/master/streams.html
>>> >>> [2]: http://lists.w3.org/Archives/Public/public-webapps/2013AprJun/0727.html
>>> >>
>>> >> I'm drafting [1] based on [2] and summarizing comments on this list
>>> >> in order to build up a concrete algorithm and get consensus on it.
>>> >
>>> > Great! Can you explain why this needs to return an
>>> > AbortableProgressPromise, instead of simply a Promise? All existing
>>> > stream APIs (as prototyped in Node.js and in other environments, such
>>> > as in js-git's multi-platform implementation) do not signal progress
>>> > or allow aborting at the "during a chunk" level, but instead count on
>>> > you recording progress yourself based on what you've seen come in so
>>> > far, and aborting on your own between chunks. This allows better
>>> > pipelining and backpressure down to the network and file descriptor
>>> > layer, from what I understand.
>>>
>>> Can you explain what you mean by "This allows better pipelining and
>>> backpressure down to the network and file descriptor layer"?
>>
>> I believe the term is "congestion control", as in the TCP congestion
>> control algorithm. That is, don't send data to the application faster
>> than it can parse it or pass it off, or otherwise provide some
>> mechanism to allow the application to throttle down the incoming
>> "flow", which is essential to any networked application like the Web.
>
> I don't think that "congestion control" is affected by progress
> notifications at all. And it is certainly not affected by whether the
> progress notifications fire from the Promise object or from another
> object.
>
> Progress notifications don't affect when or how data is being read.
> They only tell you about the reads that other APIs are doing.
>
>> I think there's some confusion as to what the abort() call is going
>> to do exactly.
>
> This is a good question. I.e., does calling abort() on a Promise
> returned from Stream.read() only cancel that read, or does it also
> cancel the whole Stream?
>
> I could definitely see that as an argument for returning
> ProgressPromise rather than AbortableProgressPromise from
> Stream.read() and instead sticking an abort() function on Stream.
>
> In any case, this seems like an issue orthogonal to whether progress
> notifications exist at all.
>
> / Jonas
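
To make the (a)/(b)/(c-i)/(c-ii) taxonomy above concrete, here is a
minimal consumer sketch. It assumes a hypothetical stream whose read()
returns a plain Promise that fulfills with a chunk when data is
available, fulfills with a null EOF sentinel at end of stream, and
rejects on error; the names here are illustrative assumptions, not a
specified API.

    // Minimal sketch only: "stream" and the null-as-EOF convention are
    // assumptions for illustration, not part of any proposal in this thread.
    function consume(stream, onChunk) {
      return stream.read().then(function (chunk) {
        if (chunk === null) {
          // (c-i) end of stream: the promise fulfilled with the EOF sentinel.
          return;
        }
        // (a) data was available: the promise fulfilled with a chunk.
        onChunk(chunk);
        // (b) "not yet available" never appears as a value; it is simply
        // the time during which the promise stays unsettled.
        return consume(stream, onChunk);
      }, function (err) {
        // (c-ii) an error happened: the promise was rejected ("broken").
        throw err;
      });
    }

Note that nothing in this loop needs per-read progress events: each
read() either settles with data, settles with EOF, rejects, or simply
has not settled yet.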
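
On the abort() question in Jonas's reply, the two API shapes being
weighed differ roughly as sketched below. This is illustrative code
with assumed names; neither shape is specified anywhere in this thread.

    // Shape 1: read() returns an AbortableProgressPromise; it is ambiguous
    // whether abort() cancels only this read or tears down the whole stream.
    var p = stream.read();
    p.abort();

    // Shape 2: read() returns a plain Promise and abort() lives on the
    // Stream itself, so aborting unambiguously means "cancel the stream".
    stream.read().then(function (chunk) { /* use chunk */ });
    stream.abort();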
Received on Friday, 9 August 2013 19:48:06 UTC