- From: Eliot Lear <lear@cisco.com>
- Date: Wed, 28 May 2014 14:38:08 +0200
- To: Michael Sweet <msweet@apple.com>, "Richard Wheeldon (rwheeldo)" <rwheeldo@cisco.com>
- CC: Nicholas Hurley <hurley@todesschaf.org>, Martin Thomson <martin.thomson@gmail.com>, HTTP Working Group <ietf-http-wg@w3.org>
- Message-ID: <5385D8B0.7080406@cisco.com>
Hi Michael,

Are the printers ones whose implementation you control? Also, do you have some reason to believe that they will ever go to HTTP/2? What I'm getting at is that if they're not going to do HTTP/2 anyway, then this isn't an issue. If they ARE going to HTTP/2 and they are not in your control, that is not your problem. If they ARE in your control, this amounts to a COGS argument. Does the COGS argument rise to an implementation barrier?

Speaking more broadly, and not really addressing Michael: as I've repeatedly said, I think this comes down to applicability of the protocol. Quite frankly, it was not designed to make printing faster or more efficient, and to the best of my knowledge nobody has tested for that case. This is not a fault of the protocol. Good engineers constrain their design space. HTTP/1.1 has suffered from wild success (RFC 5218) and is well beyond its design space. I see nothing wrong in constraining the design space, so long as what's left is useful to a significant population.

Eliot

On 5/28/14, 2:07 PM, Michael Sweet wrote:
> Richard,
>
> Many of the printers I work with are still using *16-bit*
> microcontrollers, so asking them to do Huffman encoding/decoding on
> top of everything else is not an easy sell (they can barely manage as
> it is). But regardless of the class of device, Huffman is going to be
> much slower than uncompressed, and there should be a way for HTTP/2 to
> adapt (even a little bit) to the capabilities of each device in the
> chain, particularly if network bandwidth is not a constraint (think
> local access to devices and the first hop to a proxy).
>
> On May 27, 2014, at 7:34 PM, Richard Wheeldon (rwheeldo)
> <rwheeldo@cisco.com> wrote:
>
>> Realistically, the smallest embedded devices are still orders of
>> magnitude more powerful than the computers I was using at Uni or
>> before then.
>> If running compression was a good idea on a 286 or M68K,
>> I'm struggling to see what classes of device you'd be concerned about
>> where compression is a bad idea but where HTTP would still be a
>> sensible choice?
>>
>> I also don't see a case in which enforcing support for compression of
>> data (C-E: GZip et al.) is a good idea but compression of headers
>> isn't, unless you have some data or observation which suggests that
>> HPACK is significantly worse than GZip in terms of performance?
>>
>> Richard
>>
>> *From:* Michael Sweet [mailto:msweet@apple.com]
>> *Sent:* 27 May 2014 13:53
>> *To:* Nicholas Hurley
>> *Cc:* Martin Thomson; HTTP Working Group
>> *Subject:* Re: Negotiating compression
>>
>> -1
>>
>> You might be able to fix your end, but how do you tell the other side
>> to stop?
>>
>> Right now you can set the header table size to 0 (good), but you
>> can't disable Huffman (not so good). The fix for Huffman would just
>> be a parameter in the initial SETTINGS frame.
>>
>> And the issue is not just complexity but overhead - Huffman coding
>> alone requires relatively slow bit manipulations, the header tables
>> add to the memory overhead of every connection, and proxies get to do
>> header processing twice... Not a big deal on a desktop machine with
>> a dozen connections, but embedded devices and proxies have tighter
>> constraints.
>>
>> On May 27, 2014, at 2:31 PM, Nicholas Hurley <hurley@todesschaf.org> wrote:
>>
>> +1
>>
>> HPACK is not so horrendously complex that it should be considered a
>> barrier to entry (it's actually pretty simple, even including Huffman
>> encoding). Plus, I can always start sending only literal encodings if
>> the security situation suddenly becomes an issue.
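The literal-only fallback Nick describes above is possible because each HPACK string literal carries its own Huffman (H) flag, so an encoder can unilaterally emit raw octets that any compliant decoder must accept. A minimal sketch of that encoding, following RFC 7541's prefix-integer and "Literal Header Field without Indexing - New Name" layouts (function names are illustrative, not from any particular library):

```python
def encode_int(value: int, prefix_bits: int, first_byte_flags: int = 0) -> bytes:
    """HPACK prefix-integer encoding (RFC 7541, section 5.1)."""
    limit = (1 << prefix_bits) - 1
    if value < limit:
        return bytes([first_byte_flags | value])
    out = bytearray([first_byte_flags | limit])
    value -= limit
    while value >= 128:
        out.append((value & 0x7F) | 0x80)  # 7 payload bits + continuation bit
        value >>= 7
    out.append(value)
    return bytes(out)

def literal_header(name: bytes, value: bytes) -> bytes:
    """Literal Header Field without Indexing, New Name (RFC 7541, section 6.2.2),
    with the H bit left clear on both strings: no Huffman, no dynamic table."""
    out = bytearray([0x00])           # 0000 0000: new name, not indexed
    out += encode_int(len(name), 7)   # H bit (0x80) clear => raw octets follow
    out += name
    out += encode_int(len(value), 7)
    out += value
    return bytes(out)
```

Combined with a header table size of 0, this gives an encoder the "opt out of compression" behaviour discussed in the thread without any negotiation, since the decoder's handling of these representations is mandatory.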
>> Not to mention the fact that, the more things that can be negotiated,
>> the more possibility there is for state mismatches at each endpoint,
>> and the more possibility there is to break interop (even with ACKed
>> SETTINGS).
>>
>> --
>> Peace,
>>
>> -Nick
>>
>> On Tue, May 27, 2014 at 10:54 AM, Martin Thomson
>> <martin.thomson@gmail.com> wrote:
>>
>> The long and rambling thread on schedule has again started to discuss
>> HPACK. A point was made regarding negotiation for its use.
>>
>> I don't think that negotiation is necessary. The argument regarding
>> the physics, which would dictate the use of an entire RTT for
>> negotiation, is compelling, but I have others. The only reason you
>> want negotiation is if you want to be able to influence the behaviour
>> of a counterparty.
>>
>> A sizable advantage can be gained by modifying your own behaviour,
>> which HPACK always permits. Given that the data you care most about
>> protecting is usually the stuff that you send, I'm willing to bet that
>> this is good enough in the unlikely event that an attack is
>> discovered.
>>
>> _________________________________________________________
>> Michael Sweet, Senior Printing System Engineer, PWG Chair
>
> _________________________________________________________
> Michael Sweet, Senior Printing System Engineer, PWG Chair
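For context on the two mechanisms debated above: setting the header table size to 0 is done today via SETTINGS_HEADER_TABLE_SIZE (0x1) in the SETTINGS frame, while disabling Huffman would require a new setting identifier that the thread only proposes. A sketch of the frame layout per RFC 7540 (the no-Huffman identifier below is purely hypothetical, not an assigned value):

```python
import struct

SETTINGS_HEADER_TABLE_SIZE = 0x1  # RFC 7540, section 6.5.2 (real)
SETTINGS_NO_HUFFMAN = 0xF000      # HYPOTHETICAL: the parameter Sweet proposes

def settings_frame(settings: dict[int, int]) -> bytes:
    """Build an HTTP/2 SETTINGS frame: each setting is a 16-bit identifier
    followed by a 32-bit value (RFC 7540, section 6.5.1)."""
    payload = b"".join(struct.pack(">HI", ident, value)
                       for ident, value in settings.items())
    # 9-octet frame header: 24-bit length, type 0x4 (SETTINGS),
    # flags 0x0, 31-bit stream identifier 0 (RFC 7540, sections 4.1 and 6.5)
    header = (struct.pack(">I", len(payload))[1:]
              + bytes([0x4, 0x0])
              + struct.pack(">I", 0))
    return header + payload

# What the protocol allows today: shrink the dynamic table to nothing.
frame = settings_frame({SETTINGS_HEADER_TABLE_SIZE: 0})
```

Note that even with the table size at 0, Huffman remains at the encoder's sole discretion per string literal, which is the asymmetry Sweet's proposed SETTINGS parameter was meant to close.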
Received on Wednesday, 28 May 2014 12:38:41 UTC