
Re: p1-message-07 S 7.1.4

From: Jamie Lokier <jamie@shareable.org>
Date: Tue, 21 Jul 2009 13:28:34 +0100
To: Mark Nottingham <mnot@mnot.net>
Cc: Adrien de Croy <adrien@qbik.com>, Henrik Nordstrom <henrik@henriknordstrom.net>, HTTP Working Group <ietf-http-wg@w3.org>
Message-ID: <20090721122834.GE20756@shareable.org>
Mark Nottingham wrote:
> The result is that it's now common practice to deploy assets on  
> multiple hosts just to avoid this limitation, and JavaScript library  
> developers are starting to look at ways of bundling multiple responses  
> into one, thereby tunnelling through HTTP and making the messages  
> opaque. I'd say both are signs that there needs to be a change.

I agree with raising the connection limit a little.  It's being worked
around already, with multiple domains.

Some of us "long-polling" AJAX folks even use an effectively unlimited
number of domains, using wildcard DNS and a per-client-instance unique
ID, to work around it.
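A minimal sketch of that workaround, assuming a hypothetical wildcard DNS record (`*.poll.example.com`) that resolves every such name to the same server: each client instance derives its own hostname, so the browser's per-host connection limit never lumps two long-poll connections together.

```python
import uuid

def per_client_host(base_domain: str) -> str:
    """Derive a unique hostname for this client instance.

    Relies on a wildcard DNS record (*.base_domain) resolving every
    such name to the same server, so each client gets its own
    per-host connection-limit bucket in the browser.
    """
    return f"{uuid.uuid4().hex}.{base_domain}"

# a unique hostname like "<32 hex chars>.poll.example.com"
host = per_client_host("poll.example.com")
```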

Raising the connection limit will help eventually, but it won't remove
the need for those workarounds for some years, as browsers with the
limit remain in use.

It shouldn't be raised too much, as 50 connections for 50 resources on
a page is not a good thing to encourage.

However, I'd say the really worthwhile change, one that's been a long
time coming, is to fix pipelining so that it can actually be deployed,
and so that it's actually efficient and reliable when used.

Usable pipelining is likely to be kinder to the network than many
connections, and it's also likely to perform better, even over fast
networks.  I'm not sure how to achieve widely-compatible pipelining,
but I have the impression it's not been seriously attempted.
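For illustration, pipelining just means writing the next request before the previous response has arrived, all on one connection; a sketch of building that byte stream (hostname and paths are made up):

```python
def pipelined_requests(host: str, paths: list[str]) -> bytes:
    """Serialize several GET requests back-to-back for a single TCP
    connection, without waiting for any response in between.
    An HTTP/1.1 server must answer them in the same order."""
    reqs = []
    for path in paths:
        reqs.append(
            f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "\r\n"
        )
    return "".join(reqs).encode("ascii")

wire = pipelined_requests("www.example.com", ["/a.css", "/b.js", "/c.png"])
```

The catch, of course, is not building the bytes but surviving the intermediaries that mishandle them, which is why it has never been reliably deployable.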

HTTP over SCTP is effectively this, with the slight disadvantage that
SCTP cannot be expected to work over the whole internet for quite a
long time.  Firewalls, NATs and even operating systems still need to
acquire support for it.  (Maybe HTTP over SCTP over UDP will be more
successful at that, though it would be very difficult for applications
to implement well.)

> OTOH, I also think completely removing limitations isn't good practice  
> either, because there are still networks out there where congestion is  
> a problem, and having an app open multiple TCP connections (as many  
> "download accelerators" do) to hog resources isn't good for the long- 
> term health of the Internet either.

I agree.  Opening lots of connections also fails with some routes on
the internet, due to connection-tracking firewalls and NAT routers
with a limit on the number of connections they'll support.  They tend
to drop existing connections when the limit is exceeded.

> My personal preference would be to:
>   - raise the recommended limit to something like 6 or 8 simultaneous  
> connections (I believe there's been some research referenced  
> previously that shows these numbers to be reasonable), and
>   - explain a bit more of the motivation/tradeoffs in the text, and

>   - allow servers to explicitly relax this requirement to clients for  
> particular uses (perhaps in an extension; this has been discussed a  
> bit on the hybi list and elsewhere), and

Yes.  Right now, I use "Connection: close" for slow responses to tell
a client that it should not queue further requests on the same
connection.  It would be better to have "Connection: no-pipeline".

Without it, multiple domain workarounds are still needed.
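Concretely, the current workaround is just a response header; here is a sketch, where the `no-pipeline` token is the hypothetical extension being proposed, not any registered Connection option:

```python
def long_poll_headers(no_pipeline_extension: bool = False) -> dict[str, str]:
    """Headers for a slow (long-poll) response.

    'Connection: close' tells the client not to queue further requests
    behind this one, at the cost of tearing down the connection.
    'Connection: no-pipeline' is the hypothetical alternative that
    would keep the connection reusable while preventing pipelining.
    """
    token = "no-pipeline" if no_pipeline_extension else "close"
    return {
        "Cache-Control": "no-cache",
        "Connection": token,
    }
```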

>   - long-term, look at ways to work around this problem in a better  
> way (e.g., the effort to run HTTP over SCTP).

That's a good one.  But I think there's a lot of scope for improving
HTTP over TCP too, in ways very similar to how HTTP over SCTP works.
Particularly in the area of non-blocking, multiplexed requests and
responses, which SCTP provides naturally and which could be done in
several ways over HTTP.  Perhaps they could share some design.
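One way to get that multiplexing over a single TCP connection (a hypothetical framing sketch, not any existing protocol) is to tag each chunk with a stream id and a length, so chunks of several responses can interleave without blocking each other:

```python
import struct

def frame(stream_id: int, payload: bytes) -> bytes:
    """Prefix a payload chunk with (stream id, length) so chunks
    belonging to different responses can share one connection."""
    return struct.pack("!HI", stream_id, len(payload)) + payload

def deframe(buf: bytes) -> list[tuple[int, bytes]]:
    """Split an interleaved byte stream back into (stream_id, payload)
    chunks, in arrival order."""
    out = []
    i = 0
    while i < len(buf):
        sid, n = struct.unpack_from("!HI", buf, i)
        i += 6  # 2-byte stream id + 4-byte length
        out.append((sid, buf[i:i + n]))
        i += n
    return out

# stream 2's chunk arrives between two chunks of stream 1
wire = frame(1, b"<html>") + frame(2, b"GIF89a") + frame(1, b"</html>")
```

This is essentially what SCTP's streams give you for free, which is why the two designs could plausibly share some structure.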

-- Jamie
Received on Tuesday, 21 July 2009 12:45:08 GMT
