
Re: [tsvwg] The List (of application-layer desired features)

From: Michael Tuexen <Michael.Tuexen@lurchi.franken.de>
Date: Wed, 4 Sep 2013 23:58:02 +0200
Cc: Yuchung Cheng <ycheng@google.com>, Joe Touch <touch@isi.edu>, HTTP Working Group <ietf-http-wg@w3.org>, tsvwg <tsvwg@ietf.org>
Message-Id: <10D1B35A-0CFA-49BD-A13B-2E0A0F3D8B22@lurchi.franken.de>
To: Roberto Peon <grmocg@gmail.com>
On Sep 4, 2013, at 9:43 PM, Roberto Peon <grmocg@gmail.com> wrote:

> I suspect that Yuchung meant 'widely deployed and available', when he says 'always'. That should certainly be true for TCP.
That is definitely true for TCP.
However, why does an alternative solution need to be "widely deployed"? Can't it
be deployed within the browser?
To me it makes sense to require "availability" and "a sufficient level of connectivity".
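To make the "deploy it within the browser" point concrete, here is a minimal Python sketch of UDP encapsulation in the style of SCTP/UDP (RFC 6951): the transport packet simply becomes the UDP payload, so an ordinary userspace socket is all that's needed, with no kernel or middlebox changes. The ports, tag, and payload are illustrative assumptions, not a working SCTP handshake.

```python
import socket
import struct

SCTP_UDP_PORT = 9899  # IANA-registered SCTP-over-UDP encapsulation port

def sctp_common_header(src_port: int, dst_port: int,
                       vtag: int = 0, checksum: int = 0) -> bytes:
    """SCTP common header: ports, verification tag, CRC32c checksum.
    A real stack computes CRC32c over the whole packet; zeroed here."""
    return struct.pack("!HHII", src_port, dst_port, vtag, checksum)

def encapsulate(sctp_packet: bytes) -> bytes:
    # UDP encapsulation is trivial: the SCTP packet *is* the UDP payload.
    return sctp_packet

header = sctp_common_header(5000, 80)
datagram = encapsulate(header + b"...chunks would follow...")

# Sending needs nothing beyond an ordinary userspace UDP socket.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(datagram, ("peer.example", SCTP_UDP_PORT))  # hypothetical peer
sock.close()
```

Whether such a datagram actually gets through NATs and firewalls is exactly the "level of connectivity" question raised above.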
> 
> Personally, I think that UDP encapsulated stuff has a fighting chance. Deployment is a matter of addressing any new features and protocol technical issues of the current non end-to-end internet, and sticking a binary on participating machines.
> This is a reasonably bounded problem, thus, has a fighting chance.
> 
> I suspect that, without kernel support for interpacket delay measurement and interpacket gap enforcement, any such protocols will do worse than they would with such support, but hopefully that is about enhancement and not required to make something new work well.
This sounds to me like a performance optimisation. I guess it might be one of a
larger set of features that improve overall performance. However, the
connectivity must be there in the first place. My expectation is that anything
running over UDP might not have the same level of connectivity as TCP, but it
should be acceptable.
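For what it's worth, the inter-packet gap enforcement mentioned above can be approximated in userspace, at reduced accuracy, without kernel support. A minimal sketch, assuming a simple rate-based pacer (the class name and rate are illustrative; kernel-side pacing, e.g. Linux's SO_TXTIME, would be more precise):

```python
import time

class Pacer:
    """Enforce a minimum gap between sends so that the outgoing
    byte rate stays at or below a target rate."""

    def __init__(self, rate_bytes_per_sec: float):
        self.rate = rate_bytes_per_sec
        self.next_send = time.monotonic()

    def wait_turn(self, packet_len: int) -> None:
        now = time.monotonic()
        if self.next_send > now:
            time.sleep(self.next_send - now)  # enforce the inter-packet gap
        # Schedule the next slot from the later of "now" and the old slot,
        # so a late sender does not burst to catch up.
        self.next_send = max(now, self.next_send) + packet_len / self.rate

pacer = Pacer(rate_bytes_per_sec=1_250_000)  # ~10 Mbit/s, illustrative
for packet in (b"x" * 1200,) * 5:
    pacer.wait_turn(len(packet))
    # sock.sendto(packet, peer)  # actual send elided
```

The accuracy is bounded by the OS scheduler's sleep granularity, which is the enhancement-versus-requirement trade-off being discussed.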

Best regards
Michael
> 
> -=R
> 
> 
> On Wed, Sep 4, 2013 at 11:40 AM, Michael Tuexen <Michael.Tuexen@lurchi.franken.de> wrote:
> 
> On Sep 4, 2013, at 7:52 PM, Yuchung Cheng <ycheng@google.com> wrote:
> 
>> On Wed, Sep 4, 2013 at 10:12 AM, Joe Touch <touch@isi.edu> wrote:
>>> 
>>> 
>>> On 9/4/2013 8:21 AM, Yuchung Cheng wrote:
>>> ...
>>> 
>>>> Here is a problem for which I don't think there is a good practical
>>>> solution: multi-flows. Currently browsers use heuristics to determine
>>>> the number of parallel connections to trade off latency and congestion,
>>>> because the transport does not provide a good service for that.
>>> 
>>> 
>>> Transports don't read minds.
>>> 
>>> 
>>>> HTTP/2 reduces one factor by limiting #connections per host to 1, but
>>>> that's not enough.
>>> 
>>> 
>>> That's not an appropriate solution - and it's the sort of "mis-use" I was
>>> referring to. It only serves to push muxing up the stack.
>> The transport(s) that can keep muxing down the stack don't always run
>> on the Internet. This is what Roberto's argument is about.
> I think "running always" is hard... Not sure if TCP does.
> So what about UDP encapsulated stuff like SCTP/UDP?
> 
> Best regards
> Michael
>> 
>>> 
>>> 
>>>> IMHO the transport (tcp, sctp, quic, or anything you
>>>> prefer) should just take connection priorities dynamically from the
>>>> app, and schedule connections more intelligently at the receiver. It's
>>>> not the app's job, nor can it do a good job at a higher layer.
>>>> 
>>>> There is an old work called congestion manager but it's not useful b/c
>>>> it's sender based.
>>> 
>>> 
>>> RFC2140 avoids sets of connections from both getting more than their
>>> steady-state fair-share, and reduces the amount they step on each other.
>>> It's already deployed, but might benefit from some app-layer hints.
>>> 
>>> IMO, this isn't a "transport" problem, though - it's more like a missing
>>> coordination layer (whether implemented with headers and state or just an
>>> API to the OS).
>> Any name is fine with me as long as the solution works.
>> 
>>> 
>>> Joe
>> 
>> 
> 
> 
Received on Wednesday, 4 September 2013 21:59:37 UTC
