Re: Sending very large chunks over the data channel

On 28 May 2014, at 10:18, Harald Alvestrand <harald@alvestrand.no> wrote:

> On 05/28/2014 10:52 AM, Wolfgang Beck wrote:
>> Adding another data transfer protocol on top of SCTP will not solve your problem.
>> 
>> The Websocket-style API is the problem.
>> It does not allow the JS to delay reception and does not tell you when it is appropriate to send more data.
>> 
>> Sending a chunk and waiting for an ACK? That means you will spend most of the time waiting for ACKs instead of
>> transmitting data. Of course you can negotiate how many chunks you may send without having to wait for an
>> ACK, but then you have re-implemented a substantial part of SCTP, probably with more errors and less sophistication.
>> 
>> What's wrong with the Streams API?
> 
> The first thing wrong about the Streams API as described in the link below is that it does not preserve message boundaries; a Stream is a sequence of bytes.
> 
> Our chosen abstraction is a sequence of messages.
> 
> Something like the Streams API may be a Good Thing (and applicable to websockets too), but the current proposal just has the wrong model for our purposes.
> 
> If you have a suggestion to bridge the gap, please bring it forward.

It might be worth doing some protocol archaeology and looking at the file system protocol from Plan 9 (9P, aka Styx):
http://en.wikipedia.org/wiki/9P

9P is a simple file system protocol, which we could use for moving large blobs between peers.

As I recall, 9P was built to run over IL, which provides reliable transmission of sequenced messages, just like SCTP
in reliable mode, so 9P should work pretty well over the data channel. It is simple enough to implement in JavaScript.
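
For what it's worth, here is a very rough sketch of the framing in JavaScript. It only builds a single Tversion
message in the 9P2000 layout (size[4] type[1] tag[2] ..., little-endian) and pushes it down a reliable, ordered
data channel; the helper name is made up, so treat it as a sketch rather than a real client:

var TVERSION = 100;     // 9P2000 message type for Tversion
var NOTAG = 0xffff;     // version messages carry the "no tag" value

function encodeTversion(msize, version) {
  var verBytes = new TextEncoder().encode(version);
  // size[4] type[1] tag[2] msize[4] version[s] (2-byte length + bytes)
  var size = 4 + 1 + 2 + 4 + 2 + verBytes.length;
  var buf = new ArrayBuffer(size);
  var view = new DataView(buf);
  var off = 0;
  view.setUint32(off, size, true);            off += 4; // size[4]
  view.setUint8(off, TVERSION);               off += 1; // type[1]
  view.setUint16(off, NOTAG, true);           off += 2; // tag[2]
  view.setUint32(off, msize, true);           off += 4; // msize[4]
  view.setUint16(off, verBytes.length, true); off += 2; // version[s] length
  new Uint8Array(buf, off).set(verBytes);               // version[s] bytes
  return buf;
}

// channel is assumed to be a reliable, ordered RTCDataChannel
channel.binaryType = 'arraybuffer';
channel.send(encodeTversion(8192, '9P2000'));

Each 9P request and response is one self-contained message, which is why it maps so naturally onto the
message-oriented data channel.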

Tim.



> 
>> 
>> Wolfgang
>> 
>> On 05/27/14 09:37, Stefan Håkansson LK wrote:
>>> This was discussed at the f2f, and the Streams API was mentioned, but as
>>> Harald pointed out yesterday the applicability of Streams with the data
>>> channel is not clear yet.
>>> 
>>> But there is another option that is supported right now. The Blob
>>> (defined in http://dev.w3.org/2006/webapi/FileAPI/) supports slicing
>>> data up into smaller chunks (and of course re-assembling them later).
>>> So, it is quite simple to split a large chunk into smaller slices and
>>> then add some simple acking at the app layer (hold back sending the next
>>> slice until the previous one has been acked).
>>> 
>>> This is not elegant, but should work.
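
For illustration, a minimal sketch of that slice-and-ack idea (not Stefan's code, just a guess at its shape):
it assumes a reliable, ordered data channel whose send() accepts Blob slices (otherwise read each slice into
an ArrayBuffer with a FileReader first), and the 'ack' message format and helper names are made up.

var CHUNK_SIZE = 16 * 1024; // 16 KiB slices

function sendBlob(channel, blob) {
  var offset = 0;

  function sendNextSlice() {
    if (offset >= blob.size) return;                     // done
    var slice = blob.slice(offset, offset + CHUNK_SIZE);
    offset += CHUNK_SIZE;
    channel.send(slice);                                 // one slice per data channel message
  }

  // Stop-and-wait: only send the next slice once the peer has acked the previous one.
  channel.onmessage = function (event) {
    if (event.data === 'ack') sendNextSlice();
  };

  sendNextSlice();
}

A strict stop-and-wait like this wastes a round trip per slice, which is exactly Wolfgang's objection; allowing
a small window of unacked slices would recover most of the throughput at the cost of a bit more bookkeeping.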
>>> 
>>> The quota API (https://dvcs.w3.org/hg/quota/raw-file/tip/Overview.html)
>>> allows for a bit more sophistication, but it seems to be supported by
>>> Chrome only (and then only an older version of the API).
>>> 
>>> Stefan
>>> 
>> 
>> 
> 
> 

Received on Wednesday, 28 May 2014 09:39:41 UTC