Re: Multi-GET, extreme compression?

On Mon, Feb 18, 2013 at 9:18 AM, Patrick McManus <pmcmanus@mozilla.com> wrote:

> in the spirit of producing an ID that answers the big questions - I'm
> a bit confused at what problem a mget proposal is trying to solve
> beyond what we've got in front of us now. Is it improved compression?
>
> We already have (a few) schemes that do quite well on that count, so it
> isn't an existential problem that needs solving at any cost - and this
> has a high cost associated with it. Frankly, it's not clear to me that
> the compression it gives would even be competitive with the delta
> schemes - I'd like to see its proponents prove that out.
>

I do not understand your argument here. The compression performance of
MGET should be pretty much identical to that of delta encoding, because it
is essentially the same thing without the complexity.


If MGET addresses 80% of the requirements that compression is meant to
satisfy, then my vote would be to nuke compression.

MGET is useful regardless of whether or not compression is implemented,
while header compression is only an optimization.

Further, the problem with compression is that it is the type of topic that
people can bikeshed endlessly. There will never be one true header
compression scheme. If the number of schemes allowed is more than zero,
then it will inevitably end up as much more than one.


To clarify my original proposal, there are two possible forms of multiple
GET:

Type 1: GET List<ContentID>

The first type of multiple GET is simply a list of content identifiers,
each of which at minimum consists of an ID but could also carry a cache
tag in situations where that would be useful.
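
Purely as an illustration, a Type 1 request might look something like the
following on the wire. The MGET method name, the Content-List header, and
the tag= parameter are all hypothetical; no concrete encoding has been
proposed:

    MGET /mail/ HTTP/2.0
    Host: example.com
    Content-List: msg-101; tag="v1",
                  msg-102,
                  msg-103; tag="v7"

Each entry is a content identifier plus an optional cache tag, so the
server (or an intermediary) can answer "unchanged" for entries the client
already holds and send full responses only for the rest.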


Type 2: GET ContentID

The second type of multiple GET is a single request that maps onto a set
of resources rather than just one resource: for example, accessing all the
messages in a user's email mailbox.
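
Again purely as a sketch, a Type 2 request could be a single line naming a
collection; the wildcard syntax here is hypothetical:

    GET /mail/inbox/* HTTP/2.0
    Host: example.com

The server would then stream back each message in the mailbox as a
separate response over the multiplexed connection.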

While this is one form of multiple GET, it is not the form I was
considering in the context of rendering compression moot. It is even more
efficient on the request side, but it does not allow a client-side proxy
to assist with the response, and it does not support the case where the
client already has some, but not all, of the indicated resources.

That might not matter, of course. Client proxies were useful in 1993, when
the whole of CERN was hanging off a T1 and 99% of content was static.


I agree that the second case raises some interesting design issues. But I
would suggest that those design considerations are intrinsic to
multiplexing/multi-streaming and have to be addressed in any case if that
feature is to be supported.

-- 
Website: http://hallambaker.com/
