
Re: Multi-GET, extreme compression?

From: Willy Tarreau <w@1wt.eu>
Date: Mon, 18 Feb 2013 08:06:35 +0100
To: "William Chan (陈智昌)" <willchan@chromium.org>
Cc: Helge Hess <helge.hess@opengroupware.org>, James M Snell <jasnell@gmail.com>, Roberto Peon <grmocg@gmail.com>, HTTP Working Group <ietf-http-wg@w3.org>, Phillip Hallam-Baker <hallam@gmail.com>, Cyrus Daboo <cyrus@daboo.name>
Message-ID: <20130218070635.GH13100@1wt.eu>
Hi William,

On Sun, Feb 17, 2013 at 07:00:20PM -0800, William Chan (陈智昌) wrote:
> > Yes, you might want to wait n (3?) milliseconds before sending out
> > additional requests and batch what you get within that timeframe. You don't
> > really send out requests in realtime while parsing, do you? ;-)
> >
> 
> If you had read the previous email thread I linked to at the very
> beginning, you would realize that contrary to Willy's expectation, I
> demonstrated that we do indeed send out requests ASAP (putting aside some
> very low-latency batching). We disable Nagle in order to prevent kernel
> level delays in this manner, since we do indeed want to get requests out
> ASAP.

Since you told me that the way HTML is parsed nowadays makes it impossible
to guess URIs in advance, I realized that for the protocol to succeed, we
must make it easy to implement on all sides (browsers, intermediaries,
servers). We must not optimize the protocol for situations that do not
exist, nor force any party to do complex or undesired things (such as
waiting a few milliseconds). In any case, nobody should ever sleep for a
fixed time; everything must be event-driven, because there is no way to
recover lost time.

Having thought about this for a while, I think we must keep in mind that
the enemy is the RTT and that we want to avoid stacking round trips. A
reasonable solution would therefore be to set a limit on the number of
concurrent connections or streams to a given server, and to batch requests
only when too many of them are still awaiting a response.

This means that when DNS+RTT is shorter than the HTML parsing time, requests
leave one at a time. When DNS+RTT is longer than the parsing time, requests
leave in batches.
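This batching behaviour can be sketched with a toy model (a hypothetical
simulation, not taken from any real browser): URIs are discovered at a fixed
parsing interval, and a new batch may only leave once the previous round trip
has completed.

```python
def batch_sizes(parse_gap_ms, rtt_ms, n_requests):
    """Toy model: one URI is discovered every parse_gap_ms; a batch may
    only leave when the previous one's round trip (rtt_ms) is over.
    Returns the size of each batch that leaves."""
    batches = []
    wire_free_at = 0.0   # earliest time the next batch may leave
    pending = 0          # requests discovered but not yet sent
    for i in range(n_requests):
        t = i * parse_gap_ms          # discovery time of this URI
        pending += 1
        if t >= wire_free_at:         # previous round trip done: flush
            batches.append(pending)
            pending = 0
            wire_free_at = t + rtt_ms
    if pending:                       # whatever remains leaves at the end
        batches.append(pending)
    return batches

# RTT shorter than the parsing gap: requests leave one at a time
print(batch_sizes(parse_gap_ms=10, rtt_ms=5, n_requests=5))   # [1, 1, 1, 1, 1]
# RTT much longer than the parsing gap: requests leave in batches
print(batch_sizes(parse_gap_ms=1, rtt_ms=10, n_requests=5))   # [1, 4]
```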

Also, the behaviour you described means that requests should probably be
merged much like a pipeline and less like a pack of requests. By this, I
mean that we should keep the opportunity to add requests very late at low
cost (e.g. by appending something to the packet). For this reason, I don't
think the MGET method would be best suited for the task, because as you
said, you don't know the full set of URIs in advance. And if we're always
going to prepare MGET requests anyway, they will become a de-facto
replacement for GET even for single objects, which would mean we failed
somewhere. One solution could be to have structured requests laid out
approximately like this:

     1) length from 1 to 6
     2) Host
     3) METH
     4) URI for req 1
     5) Header fields
     6) Data
     7) *(length + URI)
     8) END
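A minimal sketch of such a frame, under assumptions the mail does not make
(2-byte big-endian length prefixes, NUL-separated fields, a literal END
marker; all names are illustrative):

```python
import struct

def encode_batch(host, method, uris, headers=b"", data=b""):
    """Build one frame following the layout above: a length-prefixed
    block carrying host, method, first URI, header fields and data,
    then one (length, URI) pair per extra request, then an END marker."""
    body = b"\x00".join([host, method, uris[0], headers, data])
    frame = struct.pack("!H", len(body)) + body     # 1-6) length + first request
    for uri in uris[1:]:                            # 7) *(length + URI)
        frame += struct.pack("!H", len(uri)) + uri
    return frame + b"END"                           # 8) END

frame = encode_batch(b"example.org", b"GET", [b"/", b"/style.css", b"/app.js"])
```

With such a layout, a late-discovered URI costs only one (length, URI) pair
appended before the END marker is emitted, which is the low-cost append
property discussed above.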

That way, requests may be aggregated until the send() is performed. We could
even go further with a better frame encoding that allows a request to reuse
the header fields of the previous one, because then, even after the send(),
segments may still be merged in kernel buffers. This is what pipelining does
in 1.1 (except that pipelining offers no provision for reusing elements from
the previous request).
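To illustrate the saving (a rough back-of-the-envelope sketch; the header
block and request-line formats below are just stand-ins), compare repeating
the full header block per pipelined request with sending it once and
appending only URIs afterwards:

```python
# Stand-in header block; any realistic set of shared fields works the same.
HEADERS = b"Host: example.org\r\nUser-Agent: demo\r\nAccept: */*\r\n"

def pipelined_bytes(uris):
    """1.1 pipelining: every request repeats the full header block."""
    return sum(len(b"GET " + u + b" HTTP/1.1\r\n") + len(HEADERS) + 2
               for u in uris)

def reuse_bytes(uris):
    """Sketched scheme: headers sent once, each later request appends
    only its URI plus an assumed 2-byte framing overhead."""
    first = len(b"GET " + uris[0] + b" HTTP/1.1\r\n") + len(HEADERS) + 2
    return first + sum(len(u) + 2 for u in uris[1:])

uris = [b"/", b"/a.css", b"/b.js", b"/c.png"]
print(pipelined_bytes(uris), reuse_bytes(uris))
```

For a single request the two cost the same; the reuse scheme only starts
winning from the second request of a batch onwards.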

And after all, that is also how objects are linked in an HTML document:
you discover requests in the middle of the stream until you reach the </HTML>
tag. Here it would be the same for the server: it would process all requests
with the same header fields until it sees the END tag.

For intermediaries and servers, this would be almost stateless. I say
"almost" because reusing data means some state is needed, but that state
only lasts until the end of the request batch, so in practice it is just
like today, where we have to keep some request information while processing
it (at least for logging).
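A sketch of that almost-stateless server side (the line-oriented record
format here is purely hypothetical, chosen for readability): the shared
host/method/header state exists only between the start of a batch and its
END record.

```python
def parse_batch(frame):
    """Hypothetical decoder for a batched request frame. The shared
    host, method and header fields are local state that is discarded
    as soon as the END record is reached."""
    host = method = None
    headers = []
    requests = []
    for line in frame.split(b"\n"):
        kind, _, value = line.partition(b" ")
        if kind == b"HOST":
            host = value
        elif kind == b"METH":
            method = value
        elif kind == b"HDR":
            headers.append(value)
        elif kind == b"URI":
            # each URI reuses the batch's current host/method/headers
            requests.append((method, host, value, tuple(headers)))
        elif kind == b"END":
            break          # batch state dies here; nothing persists
    return requests

reqs = parse_batch(b"HOST example.org\nMETH GET\nHDR accept: */*\n"
                   b"URI /a\nURI /b\nEND\n")
```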

Regards,
Willy
Received on Monday, 18 February 2013 07:07:19 GMT
