
Re: delta encoding and state management

From: Mark Nottingham <mnot@mnot.net>
Date: Sun, 20 Jan 2013 10:48:36 +1100
Cc: James M Snell <jasnell@gmail.com>, Nico Williams <nico@cryptonector.com>, Roberto Peon <grmocg@gmail.com>, "ietf-http-wg@w3.org" <ietf-http-wg@w3.org>
Message-Id: <A983C018-A313-4880-B9FB-4B8AE40FB2A6@mnot.net>
To: RUELLAN Herve <Herve.Ruellan@crf.canon.fr>
Indeed. You can see this in the results for the simple compressor, which just keeps the previous set of headers on the connection as state. 

It's not as efficient as delta or gzip, but the numbers aren't bad (actually, much better than I reported in my blog post, due to a bug in the test runner that is fixed in my refactor branch), and the amount of state (and complexity!) is bounded. 
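A minimal sketch of the "simple compressor" described above, in Python (illustrative only; the names are mine, not from any draft or test harness): each side keeps the previous message's header set as its only per-connection state, and each message is encoded as the headers that changed plus the names that disappeared.

```python
# Hypothetical sketch of a "previous headers as state" compressor.
# The only per-connection state is the last message's header set.

def encode(headers, prev):
    """Return (changed, removed) relative to the previous message."""
    changed = {k: v for k, v in headers.items() if prev.get(k) != v}
    removed = [k for k in prev if k not in headers]
    return changed, removed

def decode(changed, removed, prev):
    """Rebuild the full header set from the delta."""
    headers = dict(prev)
    for k in removed:
        headers.pop(k, None)
    headers.update(changed)
    return headers

first = {":path": "/", "accept": "text/html", "user-agent": "UA/1.0"}
second = {":path": "/style.css", "accept": "text/css", "user-agent": "UA/1.0"}

delta = encode(second, first)        # user-agent is not resent
assert decode(*delta, first) == second
```

State is bounded by the size of one header set, which matches the "bounded state and complexity" trade-off described above.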

On 19/01/2013, at 12:50 AM, RUELLAN Herve <Herve.Ruellan@crf.canon.fr> wrote:

> I agree that finding optimized binary encodings for headers will help us reduce the size of the transmitted data.
> At the same time, stateful information is also very useful when transmitting a series of successive messages: it allows a header to be encoded as a reference to a header present in a previous message.
> In my experiments, I tried to devise a binary encoding for the Accept header. However, I found that I was not able to reach the compression ratio obtained by using references to previous messages. Currently, in the set of requests needed to fetch a full web page, the Accept header takes only 4 or 5 different values, which makes stateful compression very efficient.
> The drawback of stateful compression is that this state must be stored. I understand that this can be a critical problem for intermediaries, so I think we should work on minimizing the amount of state an intermediary has to store for each connection. I was also wondering whether anyone had a rough figure for how much state would be acceptable to an intermediary.
> Hervé.
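The reference-based scheme Hervé describes can be sketched as follows (a hedged illustration, not any particular draft's wire format): both ends keep a small per-connection table of previously transmitted (name, value) pairs, and a repeated header, such as one of the handful of Accept values seen across a page load, is sent as a table index instead of its literal bytes.

```python
# Illustrative sketch of encoding a header as a reference to a
# previously transmitted value. The class name and encoding tuples
# are hypothetical, chosen only to show the idea.

class RefCoder:
    def __init__(self):
        self.table = []          # shared state: previously seen (name, value)

    def encode(self, name, value):
        entry = (name, value)
        if entry in self.table:
            return ("ref", self.table.index(entry))   # tiny: just an index
        self.table.append(entry)
        return ("literal", name, value)               # first occurrence

enc = RefCoder()
enc.encode("accept", "text/html")            # literal, enters the table
enc.encode("accept", "image/png")            # literal (new value)
ref = enc.encode("accept", "text/html")      # repeated value
assert ref == ("ref", 0)
```

The table is exactly the per-connection state whose acceptable size, for intermediaries, is the open question raised above.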
>> -----Original Message-----
>> From: James M Snell [mailto:jasnell@gmail.com]
>> Sent: vendredi 18 janvier 2013 00:32
>> To: Nico Williams
>> Cc: Roberto Peon; ietf-http-wg@w3.org
>> Subject: Re: delta encoding and state management
>> Agreed on all points. At this point I'm turning my attention towards
>> identifying all of the specific headers we can safely and successfully provide
>> optimized binary encodings for. The rest will be left as is. The existing bohe
>> draft defines an encoding structure for the list of headers themselves; I will
>> likely drop that and focus solely on the representation of the header values
>> for now. My goal is to have an updated draft done in time for the upcoming
>> interim meeting.
>> On Thu, Jan 17, 2013 at 2:16 PM, Nico Williams <nico@cryptonector.com> wrote:
>> 	On Thu, Jan 17, 2013 at 3:44 PM, James M Snell <jasnell@gmail.com> wrote:
>> 	> We certainly cannot come up with optimized binary encodings for
>> 	> everything, but we can get a good ways down the road optimizing the
>> 	> parts we do know about. We've already seen, for instance, that date
>> 	> headers can be optimized significantly; and the separation of
>> 	> individual cookie crumbs allows us to keep from having to resend the
>> 	> entire cookie whenever just one small part changes. I'm certain there
>> 	> are other optimizations we can make without feeling like we have to
>> 	> find encodings for everything.
>> 	The only way cookie compression can work is by having connection
>> 	state.  But often the whole point of cookies is to not store state on
>> 	the server but on the client.
>> 	The more state we associate with connections, the more pressure there
>> 	will be to close connections sooner, and then we'll have to establish
>> 	new connections, build new compression state, and then have it torn
>> 	down again.  Fast TCP can help w.r.t. reconnect overhead, but that's
>> 	about it.
>> 	We need to do more than measure compression ratios.  We need to
>> 	measure state size and performance impact on fully-loaded middleboxes.
>> 	We need to measure the full impact of compression on the user
>> 	experience.  A fabulous compression ratio might nonetheless spell doom
>> 	for the user experience and thence the protocol.  If we take the
>> 	wrong measures we risk failure for the new protocol, and we may not
>> 	try again for a long time.
>> 	Also, with respect to some of those items we cannot encode minimally
>> 	(cookies, URIs, ...): their size is really in the hands of the
>> 	entities that create them -- let *them* worry about compression.  That
>> 	might cause some pressure to create shorter, less-meaningful URIs,
>> 	but... we're already there anyways.
>> 	Nico
>> 	--
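The "cookie crumbs" idea from the quoted message can be sketched like this (a hedged illustration; function names are mine): split the Cookie header on "; " into individual name=value crumbs and delta-encode at crumb granularity, so changing one crumb does not force resending the whole header.

```python
# Hypothetical sketch of crumb-level cookie deltas.

def crumbs(cookie):
    """Parse a Cookie header value into its name=value crumbs."""
    return dict(part.split("=", 1) for part in cookie.split("; "))

def crumb_delta(new, old):
    """Crumbs whose value changed (or are new) since the last message."""
    a, b = crumbs(new), crumbs(old)
    return {k: v for k, v in a.items() if b.get(k) != v}

old = "sid=abc123; theme=dark; lang=en"
new = "sid=abc123; theme=light; lang=en"
assert crumb_delta(new, old) == {"theme": "light"}
```

As Nico notes, even this only works by keeping the previous crumbs as connection state, which is precisely the cost being debated in this thread.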

Mark Nottingham   http://www.mnot.net/
Received on Saturday, 19 January 2013 23:49:06 UTC
