- From: Domenic Denicola via GitHub <sysbot+gh@w3.org>
- Date: Wed, 26 Aug 2015 21:27:59 +0000
- To: public-secondscreen@w3.org
> Reader Loop
There are a couple of major ways in which you get benefits:
1. You can call read() at any time in your program, without fear of
losing messages, whereas if you attach an onmessage handler too late,
you lose data.
2. The lack of a read() call, e.g. if the client is overloaded and
cannot process data immediately, can be used as a backpressure signal
to stop producing so much data. I'm not sure this is applicable in
the cases you mention, but it's part of the generic framework. (Maybe
it would be useful to let the other side of the presentation know that
its commands are not being processed in a timely fashion? It's
generally applicable to any async processing of commands.) A minimal
reader loop illustrating both points is sketched just after this list.
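Here's that sketch, assuming (per the discussion above) that
session.readable is a ReadableStream and handleMessage is your app's
message handler:
```js
// Minimal reader-loop sketch. `session.readable` and `handleMessage`
// are assumed from the surrounding discussion.
async function readLoop(session) {
  const reader = session.readable.getReader();
  while (true) {
    const { value, done } = await reader.read();
    if (done) {
      // The other side closed the stream; no more messages will arrive.
      break;
    }
    // Not calling read() again until handleMessage finishes is itself
    // the backpressure signal described in point 2.
    await handleMessage(value);
  }
}
```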
More relevantly for your question though, you can just do
```js
session.readable.pipeTo(new WritableStream({ write: handleMessage }));
```
as a starting point, with the potential for further customization
(e.g. processing close signals or applying custom backpressure
strategies) by adding more options to the writable stream constructor.
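For example (a sketch only; the close/abort handlers and the queuing
strategy here are hypothetical choices, not something the Presentation
API dictates):
```js
// Sketch of a more customized sink: react to close signals from the
// other side and apply a custom backpressure strategy.
session.readable.pipeTo(new WritableStream(
  {
    write(chunk) {
      // Returning handleMessage's promise lets slow processing
      // propagate back as backpressure.
      return handleMessage(chunk);
    },
    close() {
      console.log("presentation session's incoming stream closed");
    },
    abort(reason) {
      console.error("incoming stream errored:", reason);
    }
  },
  // Allow up to 16 chunks to queue before signaling backpressure.
  new CountQueuingStrategy({ highWaterMark: 16 })
));
```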
> Chunk types
> How are pipes and chunks typed, i.e. how do I know that a
> reader/writer will accept the type of data produced by the other?
A writable stream will usually error if fed an incompatible chunk
type.
> Specifically, the types accepted by the PresentationSession are
> chosen to be serializable. How do we limit the readers and writers
> similarly?
For the readable side, it's easy: just produce only serializable chunk
types.
For the writable side, you would error if given an incompatible chunk
type.
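A sketch of that writable-side behavior (the accepted types and
sendToOtherSide() are hypothetical stand-ins for whatever the session
actually supports):
```js
// Hypothetical sink that errors when given a chunk it cannot serialize.
const writable = new WritableStream({
  write(chunk) {
    const ok =
      typeof chunk === "string" ||
      chunk instanceof ArrayBuffer ||
      ArrayBuffer.isView(chunk) ||
      chunk instanceof Blob;
    if (!ok) {
      // Throwing (or returning a rejected promise) errors the stream.
      throw new TypeError("chunk type is not serializable for this session");
    }
    return sendToOtherSide(chunk); // hypothetical transport call
  }
});
```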
> Also, how does the reader recover the type of data sent by the
> writer?
I don't think I fully understand the question...
> Promise semantics
The semantics of the write() promise are entirely up to the creator of
the writable stream. In general it does not signal a guarantee of
delivery, but it may be useful for communicating immediately-known
errors (e.g., the file handle has been closed). Or you could just have
it always fulfill immediately.
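Two quick sketches of those options (queueForTransport() and
connectionClosed are hypothetical):
```js
// (a) Always fulfill immediately: returning undefined from write()
//     resolves the write() promise right away.
const fireAndForget = new WritableStream({
  write(chunk) { queueForTransport(chunk); }
});

// (b) Reject for immediately-known errors, without promising delivery.
const checked = new WritableStream({
  write(chunk) {
    if (connectionClosed) {
      return Promise.reject(new Error("session already closed"));
    }
    return queueForTransport(chunk);
  }
});
```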
> Must the writer wait until the previous promise has resolved before
> sending another chunk?
Nope! It automatically gets queued. You can call write() many times in
quick succession.
> It looks like queueing is part of the definition so writes can be
> pipelined. So are there N pending promises for N chunks in the queue?
Yes, although if you decide to implement your writable stream so that
it processes all writes immediately, the queue won't really
materialize.
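Here's roughly what that looks like from the caller's side, assuming
session.writable is a WritableStream and a writer along the lines of
the current design:
```js
// Each write() call returns its own promise; the chunks queue up and
// the promises settle as the underlying sink processes them.
async function sendBatch(session) {
  const writer = session.writable.getWriter();
  const pending = [
    writer.write("first"),
    writer.write("second"),
    writer.write("third")
  ];
  // N pending promises for N queued chunks.
  await Promise.all(pending);
}
```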
> Queueing
So, yeah, the idea of streams is to provide an interface that exposes
more directly the backpressure signals and queuing that is already
presumably happening in your implementation. Either automatically, as
happens with pipeTo(), or manually, if the developer does a manual
read() loop or consults the writable stream's backpressure signals. As
such I don't think you'd want to provide another layer---you'd just
more directly expose the layer you already have. I'd be interested in
digging more into your thoughts here, especially as the design of
writable streams is still shaping up.
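As an illustration, consulting those backpressure signals manually
might look something like this (a hedged sketch; writer.ready follows
the direction of the current draft, which could still change):
```js
// Producer that pauses whenever the writable stream's queue is full.
async function produce(session, nextChunk) {
  const writer = session.writable.getWriter();
  let chunk;
  while ((chunk = nextChunk()) !== null) {
    await writer.ready;   // resolves when the queue has capacity again
    writer.write(chunk);  // no need to await; ready already gated us
  }
  await writer.close();
}
```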
--
GitHub Notif of comment by domenic
See
https://github.com/w3c/presentation-api/issues/163#issuecomment-135175246