Re: QUIC use cases

On 1/15/2018 9:13 AM, Lennart Grahl wrote:
> On 15.01.2018 06:03, Bernard Aboba wrote:
>> 1. Competition between SCTP data channels and audio/video, due to queue buildup.  Currently,
>> data channel implementations use the default SCTP congestion control (loss-based, defined in
>> RFC 4960) and don't expose a way of selecting an alternative algorithm.  One developer I talked
>> to expressed an interest in using an existing algorithm like LEDBAT for background file
>> transfers, so it isn't clear to me that a new CC algorithm needs to be standardized in the IETF.
> I'm passing this on to Michael: Would it be feasible to do this with the
> existing API of usrsctp?

This was expected - we had a number of discussions about how we could
split bandwidth between media and data, e.g. limiting data rates to
something proportional to the media rates (to avoid data overwhelming
media).  There are delay-based SCTP congestion-control algorithms
(not necessarily in usrsctp today); some of my early proposals drew
on them, as I recall.
One issue would be what the relative priority/split "should" be, but
the browser can certainly make a default decision.  The assumption
was that we'd do "something" here: either a forced split, a
compatible CC algorithm, or some variation on the shared-CC
proposals.
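
To Lennart's question: IIRC usrsctp inherits the pluggable-CC socket
option from the FreeBSD stack, so selecting another algorithm may
just be a setsockopt away.  Untested sketch below - whether the
delay-sensitive SCTP_CC_RTCC module is compiled into a given usrsctp
build is an assumption on my part, and RTCC isn't LEDBAT (that would
still have to be written as a new CC module):

/* Untested sketch: pick a delay-sensitive CC algorithm on a usrsctp
 * socket via the FreeBSD-derived pluggable-CC socket option. */
#include <usrsctp.h>

static int use_delay_based_cc(struct socket *so)
{
    struct sctp_assoc_value av;

    av.assoc_id = SCTP_FUTURE_ASSOC;  /* apply to new associations */
    av.assoc_value = SCTP_CC_RTCC;    /* delay-sensitive; default is
                                       * SCTP_CC_RFC2581 (loss-based) */
    return usrsctp_setsockopt(so, IPPROTO_SCTP, SCTP_PLUGGABLE_CC,
                              &av, sizeof(av));
}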

If you don't want to give (say) half the bandwidth to data during a
transfer, we'd need an API point anyway (QUIC or no QUIC), though I
suppose you could layer more semantics onto priorities.
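
For illustration, a forced split could be as crude as a token bucket
on the data-channel send path, refilled at some fraction of the
measured media send rate.  Everything here is made up for the sketch;
no real implementation works this way:

/* Hypothetical forced-split sketch: cap data-channel sends at a
 * fixed fraction of the current media send rate. */
#include <stddef.h>
#include <stdint.h>

struct split_bucket {
    double   tokens;    /* bytes the data channel may still send */
    double   fraction;  /* data's share, e.g. 0.5 for a 1/2 split */
    uint64_t last_us;   /* last refill time, microseconds */
};

/* Refill from the media rate, then ask whether `len` bytes may go. */
static int may_send_data(struct split_bucket *b, size_t len,
                         double media_bytes_per_sec, uint64_t now_us)
{
    double share = media_bytes_per_sec * b->fraction;

    b->tokens += ((now_us - b->last_us) / 1e6) * share;
    b->last_us = now_us;
    if (b->tokens > share)      /* clamp burst credit to ~1 second */
        b->tokens = share;
    if (b->tokens < (double)len)
        return 0;               /* defer; media keeps its share */
    b->tokens -= (double)len;
    return 1;
}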

>> 2. Complexity of doing large file transfers on top of RTCDataChannel message implementations.
>> You've mentioned a number of the issues with this, and most of them seem solvable by fixing issues
>> in the existing specification, and perhaps adding some tests to make sure that implementations conform
>> to the new guidance.
> Indeed, I believe one of the most pressing issues is the impact of
> userland fragmentation/reassembly.

Yeah... the original plan was for WebSocket-sized transfers for
files, with fragmentation/reassembly in the DC impl (PPID-based
fragmentation).  Application-level chunking forces a memory spike
(and a processing-time spike) when recombining the data for delivery
or writing it to disk.  Direct large-Blob transfer allows (or would
have allowed) the impl to append the data to a temporary disk file as
it was received, if it wanted to, or to move it there after some
threshold.
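
Something like this on the receive side, purely illustrative (the
callback shape, names, and threshold are invented for the example):

/* Sketch of what a DC impl could do with a large-Blob transfer:
 * spool fragments to a temp file once the message passes a
 * threshold, instead of reassembling everything in memory. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define SPILL_THRESHOLD (4 * 1024 * 1024)   /* 4 MB - arbitrary */

struct blob_rx {
    char  *mem;      /* in-memory buffer while the message is small */
    size_t mem_len;
    FILE  *spill;    /* temp file once we pass the threshold */
};

static int on_fragment(struct blob_rx *rx, const void *data, size_t len)
{
    if (!rx->spill && rx->mem_len + len > SPILL_THRESHOLD) {
        rx->spill = tmpfile();              /* move to disk */
        if (!rx->spill)
            return -1;
        if (rx->mem_len &&
            fwrite(rx->mem, 1, rx->mem_len, rx->spill) != rx->mem_len)
            return -1;
        free(rx->mem);
        rx->mem = NULL;
        rx->mem_len = 0;
    }
    if (rx->spill)
        return fwrite(data, 1, len, rx->spill) == len ? 0 : -1;

    char *grown = realloc(rx->mem, rx->mem_len + len);
    if (!grown)
        return -1;
    memcpy(grown + rx->mem_len, data, len);
    rx->mem = grown;
    rx->mem_len += len;
    return 0;
}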

Note this isn't really a protocol issue; it's an API issue.  The
mechanism the bytes use to cross the wire doesn't matter for this.

-- 
Randell Jesup -- rjesup a t mozilla d o t com
Please please please don't email randell-ietf@jesup.org!  Way too much spam

Received on Monday, 15 January 2018 19:52:47 UTC