Re: Multi-GET, extreme compression?

On Mon, Feb 18, 2013 at 1:21 PM, Phillip Hallam-Baker <hallam@gmail.com> wrote:
>
> On Mon, Feb 18, 2013 at 9:18 AM, Patrick McManus <pmcmanus@mozilla.com>
> wrote:
[...]
> I do not understand your argument here. Performance of MGET should be pretty
> much identical to delta encoding because it is essentially the same thing
> but without the complexity.

In some ways, MGET actually seems more complex.  Here's my rationale:

To avoid head-of-line blocking, a server or intermediary will need to
be able to return the requested resources out of order.  Consider the
simple case where the client MGETs five small, static resources.  The
request is processed by a CDN node, 10 msec away from the client, that
has all but the first of those resources in memory.  That cache has to
fetch the first resource from an origin server 100 msec away.  If MGET
uses in-order responses, the mean response time will be more than 3x
worse with MGET than it would have been with 5 separate GETs in
parallel.
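
To make that concrete, here's a rough back-of-the-envelope sketch
(Python, purely illustrative; it treats the 10 msec and 100 msec
figures as full round trips and ignores transfer time for the small
resources):

    # Illustrative numbers only: latencies treated as full round
    # trips, transfer time for the small resources ignored.
    cdn_rtt = 10      # msec, client <-> CDN node
    origin_rtt = 100  # msec, CDN node <-> origin server

    # Five parallel GETs: four are served from the CDN cache, one
    # also has to wait for the origin fetch.
    parallel = [cdn_rtt] * 4 + [cdn_rtt + origin_rtt]

    # In-order MGET: nothing is delivered until the cache-missed
    # first resource has come back from the origin.
    in_order = [cdn_rtt + origin_rtt] * 5

    print(sum(parallel) / len(parallel))  # ~30 msec mean
    print(sum(in_order) / len(in_order))  # ~110 msec mean

The exact ratio depends on how you count the round trips, but it
comes out somewhere around 3.5-4x either way.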

It's straightforward to do out-of-order responses if MGET works as
"MSYN_STREAM": treat the MGET as shorthand for creating the right
number of streams and applying the request headers to each of those
streams.  But that only solves the header compression problem for
requests, not for responses.  If you want to reduce response header
transfer size too, you end up having to implement both MGET and header
compression.  And MGET would create some new error modes, especially
in combination with the recently proposed control frame continuation
mechanism.
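
To illustrate the shorthand I have in mind, here's a rough sketch
(Python; the frame layout and field names are invented for the
example, with "SYN_STREAM" standing in for whatever the stream-open
frame ends up being):

    # Rough sketch only: expand one MGET into one stream-open frame
    # per requested path, reusing the shared request headers.
    def expand_mget(shared_headers, paths, next_stream_id):
        frames = []
        stream_id = next_stream_id
        for path in paths:
            headers = dict(shared_headers)  # common headers for every stream
            headers[":path"] = path         # only the path differs
            frames.append({"type": "SYN_STREAM",
                           "stream_id": stream_id,
                           "headers": headers})
            stream_id += 2                  # client-initiated streams stay odd
        return frames

    # e.g. one MGET for three resources becomes three streams:
    frames = expand_mget({":method": "GET", ":host": "example.com",
                          ":scheme": "https"},
                         ["/a.css", "/b.js", "/c.png"],
                         next_stream_id=3)

Each resulting stream can then be answered independently, and out of
order, exactly as if the client had sent separate GETs.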

Brian
