
To Stream or not to Stream

From: Harald Alvestrand <harald@alvestrand.no>
Date: Thu, 14 Jun 2018 13:55:26 +0200
To: "public-webrtc@w3.org" <public-webrtc@w3.org>
Message-ID: <be668e3f-a43d-70cc-7cdc-18ae122d393f@alvestrand.no>

Part of my frustration with the streams discussion is that the people
saying "use Streams" haven't been able to tell me exactly what they mean
when they say that.

Part of it is my lack of understanding - until a month or two ago, I
thought streams were still byte-streams only, but now it seems that they
have finally gotten around to passing objects between them, and with the
advent of the TransformStream, there's explicit acknowledgement that
processing using a stream model can cause different things to come out
than what comes in.
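
To make that concrete, here is a minimal sketch of an object-passing
TransformStream (assuming Node 18+ or a browser where ReadableStream and
TransformStream are globals; the "frame"/"packet" shapes are invented
for illustration):

```javascript
// A TransformStream that consumes "frame" objects and emits different
// objects than it receives -- here, one "packet" per frame. What comes
// out is not what went in: objects, not bytes.
const toPackets = new TransformStream({
  transform(frame, controller) {
    controller.enqueue({ seq: frame.id, payload: frame.data });
  },
});

// A toy source of frame objects.
const frames = new ReadableStream({
  start(controller) {
    controller.enqueue({ id: 1, data: "a" });
    controller.enqueue({ id: 2, data: "b" });
    controller.close();
  },
});

// Drain a ReadableStream into an array.
async function collect(readable) {
  const reader = readable.getReader();
  const out = [];
  for (;;) {
    const { value, done } = await reader.read();
    if (done) return out;
    out.push(value);
  }
}
```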

But when Sergio says something like this:

> Using the whatwg-like api, it could be possible to do
> source.pipeThrough(funnyHatsWebWorker)
>             .pipeTo(encoder)
>             .pipeThrough(rtpPacketizer)
>             .pipeTo(rtpSender)
>             .pipeTo(rtpTransport) 

I don't know what I'm seeing, and I have dozens of questions with no
idea where to go for answers.

Back in the Dawn of Time, we had two possible models of how we wired
things together: Explicit links (like MediaStream{track}) or implicit
links (like source-to-sink connections in WebAudio). We chose the
explicit-link model, and made the links into control surfaces, with
functions like ApplyConstraints.
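
A toy contrast of the two wiring models (all names here are invented
for illustration; this is not any real API):

```javascript
// Implicit-link model: connecting leaves no object behind, so any
// controls must live on the source or the sink (WebAudio-style).
class ImplicitSource {
  connect(sink) { this.sink = sink; }  // no link object to hold on to
}

// Explicit-link model: the connection is itself an object and a control
// surface, in the spirit of MediaStreamTrack's applyConstraints().
class Link {
  constructor(sink) { this.sink = sink; this.constraints = {}; }
  applyConstraints(c) { Object.assign(this.constraints, c); }
}
class ExplicitSource {
  constructor() { this.links = []; }
  connect(sink) {
    const link = new Link(sink);  // the link itself can be controlled
    this.links.push(link);
    return link;
  }
}
```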

Now, with Streams, I'm not sure if I'm looking at source-to-sink
couplings (where all the controls are on the sources and the sinks) or
explicit-link objects (where there are controls on the connections). So
before I can understand that, I need a proposal in front of me that
actually calls out these things - and so far, none of the comments I've
seen from people who claim to like streams have contained enough
information for me to build one.

In the seemingly simple example above, I can assume that each object
that is mentioned in "pipeThrough()" implements the TransformStream
interface, which consists (effectively) of getting a WritableStream and
a ReadableStream. (But the inline .pipeTo confuses me, since .pipeTo
seems to return a promise that resolves when the stream terminates -
should they have been .pipeThrough also?)
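
My reading of the spec on this point, as a sketch (again assuming the
Streams globals of Node 18+ or a browser): pipeThrough() returns the
transform's ReadableStream, so it chains, while pipeTo() returns a
Promise that resolves when piping finishes, so it can only end a chain.

```javascript
const source = new ReadableStream({
  start(c) { c.enqueue("x"); c.close(); },
});
const identity = new TransformStream();  // passes chunks through unchanged

// pipeThrough() hands back a ReadableStream you can keep piping.
const chained = source.pipeThrough(identity);

// pipeTo() hands back a Promise -- nothing left to pipe from.
const sunk = chained.pipeTo(new WritableStream({
  write(chunk) { /* consume */ },
}));
```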

So there's backpressure travelling up the chain - how is this handled?
Just using "available buffer size", which is what
WritableStreamDefaultWriterGetDesiredSize seems to be describing in the
spec, isn't appropriate for video, because we want the rate of the
encoder (4 steps back the chain) to be adjusted to a lower number, not
just doing a "stop/go" signal. We could imagine lots of solutions,
including having the encoder take the transport as a parameter so that
it knows what it's encoding for - but if intermediate steps of the chain
take actions that invalidate the assumptions (like throwing away frames)
- what happens?
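
A sketch of what the built-in signal actually gives you (Node 18+ or
browser globals; the chunk names are invented): desiredSize on a writer
is just high-water mark minus queued chunks under a CountQueuingStrategy
- room/no-room, with no way to express "encode at a lower bitrate".

```javascript
// A sink whose write() blocks until released, so chunks queue up and
// backpressure becomes observable through desiredSize.
let release;
const gate = new Promise((r) => { release = r; });

const sink = new WritableStream(
  {
    async write(chunk) { await gate; }  // hold every chunk in flight
  },
  new CountQueuingStrategy({ highWaterMark: 2 })
);

const writer = sink.getWriter();
const before = writer.desiredSize;  // room for two chunks
writer.write("frame-1");            // not awaited: let it queue
const after = writer.desiredSize;   // one slot consumed
```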

I would like to see a proposal for using streams. But:

a) I know I haven't seen one

b) like Peter, I think we can make a lot of decisions without answering
this one

c) I don't know how to make one.

That's the trouble I have with Streams at the moment.

Surveillance is pervasive. Go Dark.
Received on Thursday, 14 June 2018 11:55:54 UTC
