Re: Negotiating compression

The smallest embedded devices today are still 8-bit 8051 derivatives. The smallest I’ve seen with an Ethernet interface has an M68K (ColdFire) core, 128K of program space, and 32K of RAM. It comes with a free HTTP client; IIRC the included demo is a webserver, but I don’t know whether that fits into that particular MCU.

In the Internet of Things, HTTP is a sensible choice for just about anything, and you can be certain the device will have far more trouble with firewalls if it tries to use anything else.

Better applicability to the embedded domain is the most attractive thing about HTTP/2 to me. I haven’t yet implemented HPACK, but the Huffman code doesn’t look particularly scary; my main concern is that it isn’t very efficient. As for the rest, it goes away when you set the table size to zero, no?
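To make that concrete, here is a minimal sketch of what such an encoder could look like in C (function names and buffer handling are mine, purely illustrative; the bit patterns are the ones in the HPACK draft): emit a dynamic table size update of zero once, then encode every field as a literal without indexing against the static table, so no per-connection compression state survives between header blocks.

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    /* HPACK prefix integer: 'prefix' value bits in the first byte,
     * 'pattern' carries the fixed high-order bits. */
    static size_t hpack_int(uint8_t *out, uint8_t pattern, int prefix, size_t v)
    {
        size_t n = 0, max = (1u << prefix) - 1;
        if (v < max) {
            out[n++] = pattern | (uint8_t)v;
            return n;
        }
        out[n++] = pattern | (uint8_t)max;
        for (v -= max; v >= 128; v /= 128)
            out[n++] = (uint8_t)(v % 128 + 128);
        out[n++] = (uint8_t)v;
        return n;
    }

    /* Dynamic table size update to zero (pattern 001xxxxx); after
     * this the encoder keeps no dynamic-table state at all. */
    static size_t hpack_table_to_zero(uint8_t *out)
    {
        return hpack_int(out, 0x20, 5, 0);
    }

    /* Literal header field without indexing (pattern 0000xxxx),
     * naming the field by static-table index, value sent as a
     * plain (non-Huffman) octet string. */
    static size_t hpack_literal(uint8_t *out, unsigned static_idx,
                                const char *val)
    {
        size_t len = strlen(val);
        size_t n = hpack_int(out, 0x00, 4, static_idx);
        n += hpack_int(out + n, 0x00, 7, len);  /* H bit 0: no Huffman */
        memcpy(out + n, val, len);
        return n + len;
    }

That’s the whole encoder: a prefix-integer routine and two callers, which is about as small as a header-compression scheme can get.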

And if the total variety of headers the client will ever send fits into 4K, it can in theory verify that the server’s header table is at least that large, and then hard-code the transmitted headers against the presumed remote state rather than actually encoding them from scratch.
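Continuing the sketch above (all byte values and names invented for illustration): the first header block on a connection populates the peer’s dynamic table with incremental-indexing literals in a known order, and every block after that is a handful of canned one-byte index references kept in flash. The only runtime obligation is checking the peer’s advertised SETTINGS_HEADER_TABLE_SIZE once.

    /* Table size the canned bytes were precomputed against. */
    #define PRESUMED_TABLE_SIZE 4096u

    /* First header block: static-table references plus literals with
     * incremental indexing (pattern 01xxxxxx) that insert our custom
     * headers into the peer's dynamic table in a known order. */
    static const uint8_t first_block[] = {
        0x82,           /* indexed, static 2: :method: GET  */
        0x86,           /* indexed, static 6: :scheme: http */
        /* ... precomputed incremental-indexing literals ... */
    };

    /* Every later block: one-byte references to the dynamic-table
     * entries the first block is presumed to have created (index 62
     * is the most recently inserted entry). */
    static const uint8_t repeat_block[] = {
        0x82, 0x86,     /* same static fields as above    */
        0xbe, 0xbf,     /* indexed, dynamic 62 and 63     */
    };

    static int blocks_usable(uint32_t peer_table_size)
    {
        return peer_table_size >= PRESUMED_TABLE_SIZE;
    }

The trade-off is brittleness: the canned bytes are only valid as long as the presumed insertions actually happened in that order, so this only works when the device controls every header block it sends on the connection.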

I’ve not implemented HPACK, but a big server handling a million connections has every incentive to keep per-connection state tiny, so what works there should usually work for a tiny MCU too.


On 2014-05-28, at 7:34 AM, Richard Wheeldon (rwheeldo) <rwheeldo@cisco.com> wrote:

> Realistically, the smallest embedded devices are still orders of magnitude more powerful than the computers I was using at Uni or before then. If running compression was a good idea on a 286 or M68K, I’m struggling to see what classes of device you’d be concerned about where compression is a bad idea but where HTTP would still be a sensible choice?
>  
> I also don’t see a case in which enforcing support for compression of data (C-E: GZip et al.) is a good idea but compression of headers isn’t, unless you have some data or observation which suggests that HPACK is significantly worse than GZip in terms of performance?

Received on Wednesday, 28 May 2014 01:59:49 UTC