
Re: Multi-GET, extreme compression?

From: Patrick McManus <pmcmanus@mozilla.com>
Date: Mon, 18 Feb 2013 09:18:05 -0500
Message-ID: <CAOdDvNrt3DFWjytNG+xumkNR7XjdKuvgnQ6J=c++jmFYyk7wbA@mail.gmail.com>
To: Helge Hess <helge.hess@opengroupware.org>
Cc: William Chan (陈智昌) <willchan@chromium.org>, James M Snell <jasnell@gmail.com>, Roberto Peon <grmocg@gmail.com>, HTTP Working Group <ietf-http-wg@w3.org>, Phillip Hallam-Baker <hallam@gmail.com>, Cyrus Daboo <cyrus@daboo.name>
in the spirit of producing an ID that answers the big questions - I'm
a bit confused about what problem a mget proposal is trying to solve
beyond what we've got in front of us now.. is it improved compression?

We already have (a few) schemes that do quite well on that count, so it
isn't an existential problem that needs solving at any cost - and this
has a high cost associated with it. Frankly, it's not clear to me that
the compression it gives would even be competitive with the delta
schemes - I'd like to see its proponents prove that out.

But beyond that it has costs derived from:
* added latency to determine resource sets. That's pretty much a
non-starter for me because it conflicts with my core goals for the new
protocol.
* reduced flexibility.. right now each resource has its own set of
headers which contain a lot of redundancy. The delta schemes preserve
that property while exploiting the redundancy (see the sketch after
this list). The mget scheme requires that all the resources in a set
have the same headers, but in truth small variations exist and are
quite useful. Accept headers vary by media type, Cookies vary, etc..
I'd hope an ID would look at page loads that use those patterns.
