W3C home > Mailing lists > Public > ietf-http-wg@w3.org > January to March 2013

Re: delta encoding and state management

From: William Chan (陈智昌) <willchan@chromium.org>
Date: Tue, 22 Jan 2013 12:33:37 -0800
Message-ID: <CAA4WUYhg2qt_z_TrOAH0ax6mUpYPNeG4x740CgQi5Voq=50K_Q@mail.gmail.com>
To: James M Snell <jasnell@gmail.com>
Cc: Nico Williams <nico@cryptonector.com>, Roberto Peon <grmocg@gmail.com>, "ietf-http-wg@w3.org" <ietf-http-wg@w3.org>
I think all this optimized encoding work is good for all the
aforementioned reasons. I also hear that people want to reduce
connection state. That makes sense. I know that Roberto also said that
he doesn't believe encoding is sufficient, and I agree. The question
is, at what point do the wins of stateful compression outweigh the
costs?

From the SPDY whitepaper
(http://www.chromium.org/spdy/spdy-whitepaper), we note that:
"Header compression resulted in an ~88% reduction in the size of
request headers and an ~85% reduction in the size of response headers.
On the lower-bandwidth DSL link, in which the upload link is only 375
Kbps, request header compression in particular led to significant
page load time improvements for certain sites (i.e. those that issued
a large number of resource requests). We found a reduction of 45 - 1142
ms in page load time simply due to header compression."

That result was using gzip compression, but I don't really think
there's a huge difference in PLT between stateful compression
algorithms. Using stateful compression at all is the biggest win,
since, as Mark already noted, big chunks of the headers are repeated
opaque blobs. And I think the wins will only be greater on
bandwidth-constrained devices like mobile. This brings us back to the
question: at what point do the wins of stateful compression outweigh
the costs? Are implementers satisfied with the rough order of costs of
stateful compression algorithms like the delta encoding or simple
compression?
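[To make the "stateful compression at all is the biggest win" point concrete, here is a small sketch (not SPDY's actual framing, and with made-up but representative header blocks) comparing per-message gzip against a single zlib stream shared across a connection. The second message compresses to almost nothing in the shared stream because it is encoded mostly as back-references to the first:]

```python
import zlib

# Two nearly identical request header blocks, as a browser would send
# for consecutive resource fetches from the same page (illustrative).
headers = (
    b"GET /style.css HTTP/1.1\r\nHost: example.com\r\n"
    b"User-Agent: Mozilla/5.0 (X11; Linux x86_64)\r\n"
    b"Accept: text/html,application/xhtml+xml\r\n"
    b"Cookie: session=abcdef0123456789; prefs=dark\r\n\r\n",
    b"GET /app.js HTTP/1.1\r\nHost: example.com\r\n"
    b"User-Agent: Mozilla/5.0 (X11; Linux x86_64)\r\n"
    b"Accept: text/html,application/xhtml+xml\r\n"
    b"Cookie: session=abcdef0123456789; prefs=dark\r\n\r\n",
)

# Stateless: each message compressed independently, no shared history.
stateless = sum(len(zlib.compress(h)) for h in headers)

# Stateful: one compressor shared for the connection's lifetime, so the
# second message is emitted almost entirely as back-references.
comp = zlib.compressobj()
stateful_sizes = [len(comp.compress(h) + comp.flush(zlib.Z_SYNC_FLUSH))
                  for h in headers]

print("stateless total:", stateless)
print("stateful per message:", stateful_sizes)
```

[The flip side, of course, is exactly the per-connection compressor state that Nico raises below.]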

On Thu, Jan 17, 2013 at 3:32 PM, James M Snell <jasnell@gmail.com> wrote:
> Agreed on all points. At this point I'm turning my attention towards
> identifying all of the specific headers we can safely and successfully
> provide optimized binary encodings for. The rest will be left as is. The
> existing bohe draft defines an encoding structure for the list of headers
> themselves, I will likely drop that and focus solely on the representation
> of the header values for now. My goal is to have an updated draft done in
> time for the upcoming interim meeting.
>
>
> On Thu, Jan 17, 2013 at 2:16 PM, Nico Williams <nico@cryptonector.com>
> wrote:
>>
>> On Thu, Jan 17, 2013 at 3:44 PM, James M Snell <jasnell@gmail.com> wrote:
>> > We certainly cannot come up with optimized binary encodings for
>> > everything
>> > but we can get a good ways down the road optimizing the parts we do know
>> > about. We've already seen, for instance, that date headers can be
>> > optimized
>> > significantly; and the separation of individual cookie crumbs allows us
>> > to
>> > keep from having to resend the entire cookie whenever just one small
>> > part
>> > changes. I'm certain there are other optimizations we can make without
>> > feeling like we have to find encodings for everything.
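[Editor's illustration of the date optimization mentioned above: an HTTP date header is 29 bytes of ASCII but carries only a second-resolution timestamp, so it packs into 4 bytes. This is a hypothetical sketch of such a binary encoding, not the bohe draft's actual wire format:]

```python
import email.utils
import struct
from datetime import datetime, timezone

http_date = "Tue, 22 Jan 2013 20:34:05 GMT"       # 29 bytes as ASCII text

# Encode: parse the textual date and pack it as a 4-byte big-endian
# Unix timestamp (covers dates through 2106 if treated as unsigned).
dt = email.utils.parsedate_to_datetime(http_date)
packed = struct.pack(">I", int(dt.timestamp()))    # 4 bytes on the wire

# Decode: unpack and re-render the standard HTTP date string.
(ts,) = struct.unpack(">I", packed)
restored = email.utils.format_datetime(
    datetime.fromtimestamp(ts, timezone.utc), usegmt=True)

assert restored == http_date
print(len(http_date), "->", len(packed), "bytes")
```

[A lossless ~7x reduction for this one header, with no connection state at all, which is what distinguishes these per-value encodings from the stateful compression discussed above.]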
>>
>> The only way cookie compression can work is by having connection
>> state.  But often the whole point of cookies is to not store state on
>> the server but on the client.
>>
>> The more state we associate with connections the more pressure there
>> will be to close connections sooner and then we'll have to establish
>> new connections, build new compression state, and then have it torn
>> down again.  TCP Fast Open can help w.r.t. reconnect overhead, but
>> that's about it.
>>
>> We need to do more than measure compression ratios.  We need to
>> measure state size and performance impact on fully-loaded middleboxes.
>> We need to measure the full impact of compression on the user
>> experience.  A fabulous compression ratio might nonetheless spell doom
>> for the user experience and thence the protocol.  If we take the wrong
>> measures we risk failure for the new protocol, and we may not try
>> again for a long time.
>>
>> Also, with respect to some of those items we cannot encode minimally
>> (cookies, URIs, ...): their size is really in the hands of the
>> entities that create them -- let *them* worry about compression.  That
>> might cause some pressure to create shorter, less-meaningful URIs,
>> but... we're already there anyways.

Unless I missed something, this is not new pressure, right? I don't
think it's worked well so far, as evidenced by the current situation
(the existing sizes of headers as noted by the SPDY whitepaper)...but
perhaps you have evidence to the contrary?

>>
>> Nico
>> --
>
>
Received on Tuesday, 22 January 2013 20:34:05 GMT
