Re: [presentation-api] PresentationSession should have stream interfaces!

Hi @domenic, thanks for taking the time to look at our spec and 
analyze it in relation to Streams.  I recall we looked at Streams a 
while ago, but the spec seemed to be in its formative stages then; it 
looks like a lot of progress has been made!

The main benefit of adopting Streams is interoperability with pipes, 
which could greatly simplify common actions like reading the content 
of a URL (e.g., a photo) and sending it to the presentation.
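To make the pipe benefit concrete, here is a minimal runnable sketch. It assumes the session exposes a writable stream (`session.writable` is hypothetical); a plain `WritableStream` stands in for the session's message sink, and a `ReadableStream` stands in for `fetch(url).body` (Node 18+ Web Streams globals):

```javascript
// Hypothetical: a WritableStream stands in for session.writable, and a
// ReadableStream stands in for the body of a fetched photo URL.
const received = [];
const sessionWritable = new WritableStream({
  write(chunk) { received.push(chunk); }   // would forward to the display
});
const photoBody = new ReadableStream({
  start(controller) {
    controller.enqueue('photo-bytes');     // placeholder for real image data
    controller.close();
  }
});
await photoBody.pipeTo(sessionWritable);
console.log(received);                     // → ['photo-bytes']
```

With real streams on both ends, the whole "read a URL and send it" action collapses to one `pipeTo` call, with backpressure handled by the streams machinery.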

This is my first time diving into the current Streams document, and I 
came up with several questions after reading it and your proposal.  I 
hope you'll bear with me and can shed some light.

*Reader Loop*
The common use case for either side is to read messages in a loop and 
dispatch them to various application logic (e.g., advancing a slide or
 taking a turn in a game).  With a reader, each developer would have 
to write their own read loop:

```
const reader = session.readable.getReader();
while (true) {
  const { value, done } = await reader.read();
  if (done) break;
  handleMessage(value);
}
```

How is this different/better than `session.onmessage = handleMessage`?
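For reference, the loop above runs as written against any `ReadableStream`. Here is a self-contained version with a mock stream standing in for `session.readable` (Node 18+ globals; the message strings are made up for illustration):

```javascript
// Mock of session.readable: two queued messages, then end-of-stream.
const mockReadable = new ReadableStream({
  start(c) { c.enqueue('next-slide'); c.enqueue('take-turn'); c.close(); }
});
const handled = [];
const handleMessage = msg => handled.push(msg);

// The per-developer read loop in question.
const reader = mockReadable.getReader();
while (true) {
  const { value, done } = await reader.read();
  if (done) break;
  handleMessage(value);
}
console.log(handled);  // → ['next-slide', 'take-turn']
```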

*Chunk types*
How are pipes and chunks typed?  That is, how do I know that a 
reader/writer will accept the type of data produced by the other?

Specifically, the types accepted by the PresentationSession are chosen
 to be serializable.  How do we constrain readers and writers 
similarly?

Also, how does the reader recover the type of data sent by the writer?
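As far as I can tell, Web Streams themselves do not type-check chunks: the reader gets back exactly the value the writer wrote, with no serialization in-process. A sketch using an identity `TransformStream` (Node 18+ globals) shows this; across two UAs the PresentationSession would have to serialize, so the type-recovery question remains open:

```javascript
// Identity transform: whatever is written appears unchanged on the
// readable side. No chunk-type constraint is enforced by the stream.
const { readable, writable } = new TransformStream();
const writer = writable.getWriter();
writer.write({ slide: 3 });   // an object, not a string or ArrayBuffer
writer.close();
const { value } = await readable.getReader().read();
console.log(typeof value, value.slide);  // → object 3
```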

*Promise semantics*
When does the Promise returned by writer.write() resolve, and what 
guarantees does resolution make?  The actual message delivery to the 
display may be handled by a component that is far removed from the 
content, and the sending UA may not be able to guarantee that the 
message was actually received by the other UA.

Must the writer wait until the previous promise has resolved before 
sending another chunk?  It looks like queueing is part of the 
definition, so writes can be pipelined.  Are there then N pending 
promises for N chunks in the queue?
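My reading of the current Streams spec is that yes, each write() returns its own promise and several can be in flight at once, each resolving when the sink has accepted its chunk. A sketch under that assumption (Node 18+ globals, toy sink):

```javascript
// Toy sink that records each chunk it accepts.
const accepted = [];
const writable = new WritableStream({
  async write(chunk) { accepted.push(chunk); }
});
const writer = writable.getWriter();

// Three writes issued without waiting: three pending promises at once.
const pending = [1, 2, 3].map(n => writer.write(n));
await Promise.all(pending);   // each resolves as its chunk is accepted
console.log(accepted);        // → [1, 2, 3]
```

Note this only shows acceptance by the local sink; it says nothing about receipt by the remote UA, which is the guarantee in question.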

*Queueing*
I worry a bit about adding another layer of queueing under the control
 of the developer.  The implementation in Chromium already has layers 
of queueing to manage backpressure between the processes that 
implement the Presentation API.  The more queues in use, the higher 
the latency, which will affect applications like gaming that need 
low-latency communication.  Perhaps we can use the backpressure 
parameter as a signal to manage queue lengths throughout the system.
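If I understand the spec, `writer.desiredSize` and `writer.ready` could serve as that shared signal: when `desiredSize` reaches 0 the app should stop producing rather than let another queue grow. A sketch under that assumption (Node 18+ globals; the high-water mark and sink delay are illustrative):

```javascript
// Slow sink (5 ms per chunk) behind a queue with high-water mark 2.
const writable = new WritableStream(
  { write(chunk) { return new Promise(r => setTimeout(r, 5)); } },
  new CountQueuingStrategy({ highWaterMark: 2 })
);
const writer = writable.getWriter();
writer.write('a');
writer.write('b');
const room = writer.desiredSize;  // 0: queue is at the high-water mark
await writer.ready;               // resolves once the queue drains below it
console.log(room);                // → 0
```

Keeping the developer-visible queue short and deferring to `ready` would let backpressure propagate instead of stacking latency across queues.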




-- 
GitHub Notification of comment by mfoltzgoogle
See 
https://github.com/w3c/presentation-api/issues/163#issuecomment-134781050

Received on Wednesday, 26 August 2015 00:56:09 UTC