Re: Negotiating compression

Richard,

Many of the printers I work with are still using *16-bit* microcontrollers, so asking them to do Huffman encoding/decoding on top of everything else is not an easy sell (they can barely manage as it is).  But regardless of the class of device, Huffman is going to be much slower than uncompressed, and there should be a way for HTTP/2 to adapt (even a little bit) to the capabilities of each device in the chain, particularly if network bandwidth is not a constraint (think local access to devices and the first hop to a proxy).
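
To make the cost concrete, here is a minimal sketch of the per-bit work a Huffman decoder has to do.  It assumes a simple bit-at-a-time tree walk (the huff_node type and huff_decode name are just illustrative, and the EOS/padding checks are omitted), rather than the table-driven decoders a desktop stack would use, but the shape of the work is the same:

    #include <stddef.h>

    /* Sketch only: "huff_node" is an illustrative type, not from any real stack. */
    typedef struct huff_node
    {
      const struct huff_node *child[2];     /* next node for bit 0 / bit 1 */
      int                    sym;           /* decoded octet, or -1 for interior nodes */
    } huff_node;

    size_t huff_decode(const huff_node *root, const unsigned char *in, size_t inlen,
                       unsigned char *out, size_t outsize)
    {
      const huff_node *node = root;
      size_t outlen = 0;

      for (size_t i = 0; i < inlen; i ++)
      {
        for (int bit = 7; bit >= 0; bit --)
        {
          /* One shift, one mask, one pointer chase, one branch per input bit... */
          node = node->child[(in[i] >> bit) & 1];

          if (node->sym >= 0)
          {
            if (outlen < outsize)
              out[outlen] = (unsigned char)node->sym;

            outlen ++;
            node = root;
          }
        }
      }

      return (outlen);                      /* vs. a single memcpy for a raw literal */
    }

HPACK's codes run 5 to 30 bits per octet, so that inner loop executes 5 to 30 times for every byte of header data, often on parts without a barrel shifter; a non-Huffman literal is just a bounds check and a copy.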


On May 27, 2014, at 7:34 PM, Richard Wheeldon (rwheeldo) <rwheeldo@cisco.com> wrote:

> Realistically, the smallest embedded devices are still orders of magnitude more powerful than the computers I was using at Uni or before then. If running compression was a good idea on a 286 or M68K, I’m struggling to see what classes of device you’d be concerned about where compression is a bad idea but where HTTP would still be a sensible choice?
>  
> I also don’t see a case in which enforcing support for compression of data (Content-Encoding: gzip et al.) is a good idea but compression of headers isn’t, unless you have some data or observation that suggests HPACK is significantly worse than GZip in terms of performance.
>  
> Richard
>  
> From: Michael Sweet [mailto:msweet@apple.com] 
> Sent: 27 May 2014 13:53
> To: Nicholas Hurley
> Cc: Martin Thomson; HTTP Working Group
> Subject: Re: Negotiating compression
>  
> -1
>  
> You might be able to fix your end, but how do you tell the other side to stop?
>  
> Right now you can set the header table size to 0 (good), but you can't disable Huffman (not so good). The fix for Huffman would just be a parameter in the initial SETTINGS frame.
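> 
> To make that concrete, a sketch of what I have in mind (SETTINGS_NO_HUFFMAN and its numeric ID are made up for illustration; the header table size setting is the one that exists today):
> 
>     /* Sketch only: SETTINGS_NO_HUFFMAN is a hypothetical parameter. */
>     #define SETTINGS_HEADER_TABLE_SIZE  0x1   /* exists: can already be set to 0      */
>     #define SETTINGS_NO_HUFFMAN         0xF   /* hypothetical: "don't Huffman-encode" */
> 
>     typedef struct { unsigned short id; unsigned long value; } setting_t;
> 
>     static const setting_t initial_settings[] =
>     {
>       { SETTINGS_HEADER_TABLE_SIZE, 0 },      /* possible now: no dynamic table       */
>       { SETTINGS_NO_HUFFMAN,        1 }       /* proposed: peer sends raw literals    */
>     };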
>  
> And the issue is not just complexity but overhead - Huffman coding alone requires relatively slow bit manipulations, the header tables add to the memory overhead of every connection, and proxies get to do header processing twice...  Not a big deal on a desktop machine with a dozen connections, but embedded devices and proxies have tighter constraints.
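> 
> For a sense of scale (rough, illustrative arithmetic only; the connection counts are made up):
> 
>     /* Worst-case HPACK table state a proxy must be prepared to hold, assuming
>      * the default 4096-octet table size and 10,000 connections on each side. */
>     static const unsigned long hpack_state =
>         (10000UL + 10000UL)                   /* client-side + origin-side connections  */
>       * 2UL                                   /* one table per direction per connection */
>       * 4096UL;                               /* default SETTINGS_HEADER_TABLE_SIZE     */
>                                               /* = 163,840,000 octets, roughly 160 MB   */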
>  
>  
> On May 27, 2014, at 2:31 PM, Nicholas Hurley <hurley@todesschaf.org> wrote:
> 
> 
> +1
> 
> HPACK is not so horrendously complex that it should be considered a barrier to entry (it's actually pretty simple, even including Huffman encoding). Plus, I can always start sending only literal encodings if the security situation suddenly becomes an issue.
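> 
> (For reference, a sketch of what "only literal encodings" looks like on the wire; the hpack_emit_literal name is illustrative, and it assumes name and value are each under 127 octets so the length prefixes fit in one octet.)
> 
>     #include <string.h>
> 
>     /* Sketch only: emit a "Literal Header Field without Indexing" (new name)
>      * with raw, non-Huffman name and value. */
>     size_t hpack_emit_literal(unsigned char *out, const char *name, const char *value)
>     {
>       size_t n = strlen(name), v = strlen(value), pos = 0;
> 
>       out[pos ++] = 0x00;                 /* 0000 0000: literal, not indexed, new name */
>       out[pos ++] = (unsigned char)n;     /* H bit clear: name is not Huffman-coded    */
>       memcpy(out + pos, name, n);  pos += n;
>       out[pos ++] = (unsigned char)v;     /* H bit clear: value is not Huffman-coded   */
>       memcpy(out + pos, value, v); pos += v;
> 
>       return (pos);
>     }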
> 
> Not to mention that the more things can be negotiated, the more possibility there is for state mismatches at each endpoint, and the more opportunities there are to break interop (even with ACKed SETTINGS).
> 
> --
> Peace,
>   -Nick
>  
> 
> On Tue, May 27, 2014 at 10:54 AM, Martin Thomson <martin.thomson@gmail.com> wrote:
> The long and rambling thread on the schedule has again started to discuss
> HPACK.  A point was made regarding negotiation for its use.
> 
> I don't think that negotiation is necessary.  The argument regarding
> the physics, which would dictate the use of an entire RTT for
> negotiation, is compelling, but I have others.  The only reason you
> want negotiation is if you want to be able to influence the behaviour
> of a counterparty.
> 
> A sizable advantage can be gained by modifying your own behaviour,
> which HPACK always permits.  Given that the data you care most about
> protecting is usually the stuff that you send, I'm willing to bet that
> this is good enough in the unlikely event that an attack is
> discovered.
> 
>  
>  
> _________________________________________________________
> Michael Sweet, Senior Printing System Engineer, PWG Chair
>  

_________________________________________________________
Michael Sweet, Senior Printing System Engineer, PWG Chair

Received on Wednesday, 28 May 2014 12:08:14 UTC