
Re: To Stream or not to Stream

From: Peter Thatcher <pthatcher@google.com>
Date: Fri, 15 Jun 2018 14:02:43 -0700
Message-ID: <CAJrXDUEc9FU245vZdbbV=KQ0eANLhzG=NrV8g5rTq1Z693d7Og@mail.gmail.com>
To: Bernard Aboba <Bernard.Aboba@microsoft.com>
Cc: youenn fablet <yfablet@apple.com>, Sergio Garcia Murillo <sergio.garcia.murillo@gmail.com>, "public-webrtc@w3.org" <public-webrtc@w3.org>
On Fri, Jun 15, 2018 at 1:37 AM Bernard Aboba <Bernard.Aboba@microsoft.com> wrote:

> Youenn said:
> "I agree with the use cases and the idea to move away from SDP.
> It is as yet unclear how that would translate into APIs, probably lower level
> than the current WebRTC APIs for some of the use cases."
> [BA] An important step toward understanding the low-level/high-level issue
> is to think critically about the use cases.
> One of the reasons to go into the use cases in detail is to enable
> participants to imagine themselves arguing with management for the
> resources necessary to implement the APIs that enable those use cases.
> If, when imagining that situation, there is a lack of confidence that the
> case is convincing (for some company you can imagine yourself working for),
> then that is an indication that the use case has not cleared the bar.
> In practice, that bar is pretty high at most companies - functionality is
> not free, particularly with today's increasing emphasis on reliability,
> security, privacy and performance.
> So unless a use case can enable a quantum leap in the user experience (for
> improvements) or an entirely new class of applications (for new use cases),
> the argument for resources may not go well.
> Youenn also said:
> "I do not think we reached consensus on the idea of splitting
> senders/receivers in smaller bricks.
> There are some use cases that would benefit from this.
> There have been concerns about the cost, complexity and feasibility of this
> approach.
> We should also investigate alternatives for fulfilling these use cases other
> than going the splitting way."
> [BA] The Funny Hats use case requires access to raw media, but as far as I
> can tell it does *not* require splitting the sender and receiver into
> smaller bricks, nor does it require codecs implemented in Javascript.
> I'm not convinced that the entertainment/sports use case requires splitting
> or JS codecs either, since that use case could potentially be handled by a
> QuicSender/QuicReceiver.

But then we'd have multiple objects with encoders embedded into them, and
we'd have to specify the mapping of media to QUIC streams.  I'm not too
excited about either one of those.
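The mapping cost is easy to underestimate: someone has to specify how encoded media frames are delimited and annotated on the wire, since QUIC streams are just byte pipes. As a purely illustrative sketch (every name and field here is hypothetical, not taken from any proposed QuicSender/QuicReceiver API), here is one length-prefixed framing an application might layer over a raw QUIC stream:

```typescript
// Hypothetical sketch only: a minimal wire format for sending one encoded
// media frame over a byte-oriented QUIC stream. None of these names come
// from a proposed spec; they illustrate that the media-to-stream mapping
// is a real piece of specification work.

interface EncodedFrame {
  timestamp: number;   // RTP-style 32-bit timestamp (assumed convention)
  keyFrame: boolean;   // whether this is a decoder refresh point
  data: Uint8Array;    // opaque codec output
}

// Serialize: 4-byte big-endian payload length, 4-byte timestamp,
// 1-byte flags, then the payload.
function frameToWire(frame: EncodedFrame): Uint8Array {
  const headerLen = 9;
  const out = new Uint8Array(headerLen + frame.data.length);
  const view = new DataView(out.buffer);
  view.setUint32(0, frame.data.length);
  view.setUint32(4, frame.timestamp >>> 0);
  out[8] = frame.keyFrame ? 1 : 0;
  out.set(frame.data, headerLen);
  return out;
}

// Parse the same layout on the receiving side.
function wireToFrame(wire: Uint8Array): EncodedFrame {
  const view = new DataView(wire.buffer, wire.byteOffset, wire.byteLength);
  const len = view.getUint32(0);
  return {
    timestamp: view.getUint32(4),
    keyFrame: wire[8] === 1,
    data: wire.subarray(9, 9 + len),
  };
}
```

Every design choice above (one frame per message, which metadata travels with it, one stream per frame vs. one stream per track) would have to be pinned down in a spec before two implementations could interoperate, which is the cost being weighed here.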

> There is also the basic question of whether the entertainment/sports use
> case is compelling at all.
> IMHO, that argument rests on the desirability of convergence of the
> technology used for streaming and real-time communications.
> Since streaming media is typically transported over HTTP, and HTTP/2 over
> QUIC is likely to be widely deployed, there is an argument that HTTP/2 over
> QUIC will eventually be widely used in streaming such as HLS.
> If media over QUIC is likely to be popular in streaming, the argument is
> that there would be engineering benefit to converging streaming and RTC by
> supporting RTC over QUIC as well.
> I have little experience in streaming media, so I cannot evaluate that
> claim, but it would be good to hear from people who understand that
> business and whether the benefits would be sufficient to motivate
> the technical work.

The place where convergence is more likely to be important is with live
broadcasts, since those are closer to needing "real-time".
Received on Friday, 15 June 2018 21:03:25 UTC
