
Re: delta encoding and state management

From: Benjamin Carlyle <benjamincarlyle@soundadvice.id.au>
Date: Thu, 24 Jan 2013 14:46:00 +1000
Message-ID: <CAN2g+6buyg2moxvdpXcDam3Ay5MAuWooz52a+thgk65t=PwF3g@mail.gmail.com>
To: Poul-Henning Kamp <phk@phk.freebsd.dk>
Cc: Patrick McManus <mcmanus@ducksong.com>, Willy Tarreau <w@1wt.eu>, William Chan <willchan@chromium.org>, James M Snell <jasnell@gmail.com>, Nico Williams <nico@cryptonector.com>, Roberto Peon <grmocg@gmail.com>, "ietf-http-wg@w3.org" <ietf-http-wg@w3.org>

I prefer the idea of bulk messages over stateful compression, but I'm not
sure either is practical or necessary.

Stateful compression adds overhead for clients, servers, and intermediaries.
It also breaks my wireshark workflow: when I start listening midway through
a conversation between a client and server I cannot tell what messages are
actually being sent, because the redundant headers have all been stripped
out. That is unfortunate, and I think a practical problem. It makes the
protocol more difficult to work with, and as bandwidth and RAM increase,
the problem stateful compression is designed to solve goes away while the
difficulty of working with the protocol stays forever. I would rather have
a workable protocol than an efficient one, especially when the efficiency
improvement is an incremental improvement over a protocol that has worked
well enough for an Internet that was more bandwidth-constrained than
today's and will be more bandwidth-constrained than tomorrow's. If a
stateful protocol is to be used, it seems you quickly start to want
periodic keyframes (i-frames, in video terms) to make mid-stream debugging
possible again, and those will undo the performance improvements. I may be
in a minority here.
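
To make the mid-stream capture problem concrete, here is a toy sketch of
delta-encoded headers in Python. The encode/decode functions and the header
set are illustrative only, not the encoding from any actual HTTP/2 proposal;
the point is that a decoder joining late has no way to recover the stripped
redundant headers.

```python
def encode(headers, state):
    """Send only the headers that differ from the shared connection state."""
    delta = {k: v for k, v in headers.items() if state.get(k) != v}
    state.update(headers)
    return delta

def decode(delta, state):
    """Reconstruct full headers from the delta plus accumulated state."""
    state.update(delta)
    return dict(state)

sender_state = {}
frames = [
    encode({"host": "example.com", "accept": "text/html", "path": "/"}, sender_state),
    encode({"host": "example.com", "accept": "text/html", "path": "/a"}, sender_state),
    encode({"host": "example.com", "accept": "text/html", "path": "/b"}, sender_state),
]

# A decoder that saw the whole conversation recovers every request in full:
full_state = {}
full_view = [decode(f, full_state) for f in frames]

# A capture started after the first frame sees only the deltas; the
# stripped redundant headers (host, accept) are unrecoverable:
late_state = {}
late_view = [decode(f, late_state) for f in frames[1:]]
print(full_view[1])  # host, accept and path all present
print(late_view[0])  # only the path -- the context is gone
```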

Between the stateful compression and bulk request scenarios I would choose
the latter. Now you can argue straight up that stateful compression will
reduce bandwidth usage in more cases, because it doesn't require the client
to have more than one request in its queue at any given time: it will work
even with requests spaced seconds or more apart, and it will save bytes and
may also save some latency. Client-side compression into a bulk request
only works when there is more than one request in the queue. However, that,
I would argue, is exactly when you want the compression. If there is plenty
of bandwidth - enough that you only ever have one request in the queue at a
time - how valuable is the stateful compression? Instead, you could choose
to perform no compression until multiple requests are in the queue: a
backlog is the signal that you are bandwidth-constrained and would benefit
from compression. If a backed-up queue is a rare case in the developed
world, then I don't really see browser developers implementing bulk
requests, as they can avoid the extra effort and just send the requests in
sequence as they always do. Unless a real use case exists for it, it will
become a potentially buggy and only occasionally used feature. But even
then, whether or not bulk requests are implemented, a wireshark capture
started 60 seconds into the HTTP connection can still see what requests are
being made and what responses are being generated. At least with the bulk
request proposal the protocol does not become more difficult to work with
in a way that tooling cannot overcome.
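
The bandwidth argument is easy to check with a toy experiment: compressing
near-identical requests one at a time buys little, while batching them lets
a generic compressor fold the repeated headers together. This sketch uses
zlib and made-up request lines purely for illustration.

```python
import zlib

# Eight requests that share everything except the path -- the typical
# "redundant headers" case. The header values are invented for the demo.
requests = [
    f"GET /item/{i} HTTP/1.1\r\nHost: example.com\r\n"
    "Accept: text/html\r\nUser-Agent: demo/1.0\r\n\r\n"
    for i in range(8)
]

# One request per queue slot: each is compressed in isolation.
individual = sum(len(zlib.compress(r.encode())) for r in requests)

# A backed-up queue folded into one bulk request: shared headers
# compress once instead of eight times.
bulk = len(zlib.compress("".join(requests).encode()))

print(individual, bulk)  # bulk is substantially smaller
```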

What I want from HTTP is not to make better use of bandwidth, but to make
better use of a single TCP connection. I want to be able to send multiple
requests rapidly down the connection. I want to be able to receive
responses to those requests in whatever order they happen to be generated
by the server, and I want some degree of fairness if a big resource is
being downloaded while smaller responses are also ready to be sent. At
a minimum it should be possible to stream a whole bunch of requests across
while occasionally sending a heartbeat request message and getting a
heartbeat response back within some reasonable time. This would eliminate
most problems HTTP has in off-Web system integration scenarios and allow
alternative protocols to be immediately or eventually retired.
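
The fairness property I'm after can be sketched as a trivial frame
scheduler: chunk each response into fixed-size frames and interleave them
round-robin, so a one-frame heartbeat reply goes out almost immediately
even while a large download is in flight. Frame size and stream ids here
are arbitrary toy values, not a wire-format proposal.

```python
from collections import deque

FRAME = 4  # bytes per frame; tiny, for demonstration only

def interleave(responses):
    """responses: dict of stream-id -> payload bytes.
    Returns the (stream-id, frame) sequence as sent on the wire."""
    queues = {sid: deque(body[i:i + FRAME] for i in range(0, len(body), FRAME))
              for sid, body in responses.items()}
    wire = []
    while queues:
        for sid in list(queues):          # round-robin across live streams
            wire.append((sid, queues[sid].popleft()))
            if not queues[sid]:
                del queues[sid]
    return wire

# Stream 1 is a 40-byte download; stream 2 is a small heartbeat reply.
wire = interleave({1: b"X" * 40, 2: b"pong"})
print(wire[1])  # the heartbeat ships after just one frame of the download
```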

I know that's not the main target for this protocol and I appreciate the
issues around minimising latency while loading Web pages, but the less you
can break debuggability of the protocol while ensuring efficient use of a
TCP connection the better in my book.

On 23 January 2013 23:19, Poul-Henning Kamp <phk@phk.freebsd.dk> wrote:

> In message <CAOdDvNoOnscRCA54n07Suxe9UQieq32SkwvMxNnEdSnK94s_PA@mail.gmail.com>,
> Patrick McManus writes:
> >> As I said, I think that if the state itself is never larger than a
> request
> >> and substitutes for the request, it's not that big of a deal.
> >
> >honestly, the trend in ram prices [...]
> I think both of your perspectives are too near-sighted here.
> The protocol you should be working on should be the one which
> still works when most middle-class homes, not only in the western
> world, but also in India and China, have fibre to the home at
> speeds of 1Gbit/sec and above.
> In that world, a major piece of global news, be it a naked breast,
> a geophysical event or a shot politician, is going to make the
> traffic spikes we have seen until now look tame.
> HTTP is a very asymmetric usage protocol, and therefore any amount
> of state that the server _has to_ retain for a client must justify
> its existence, byte for byte, against the scenario where 10% of
> the world wants to access the same URL.
> HTTP/1 allows you to deliver content without committing any per-client
> state, beyond the TCP socket, and that is not a "degraded mode",
> that is the default mode.
> If your HTTP/2 proposal cannot do that, you're working on the wrong
> protocol.
> Poul-Henning
> --
> Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
> phk@FreeBSD.ORG         | TCP/IP since RFC 956
> FreeBSD committer       | BSD since 4.3-tahoe
> Never attribute to malice what can adequately be explained by incompetence.
Received on Thursday, 24 January 2013 04:46:30 UTC
