Re: Last Call: <draft-ietf-httpbis-http2-16.txt> (Hypertext Transfer Protocol version 2) to Proposed Standard

On 12/01/2015 12:14 p.m., Julian Reschke wrote:
> Martin Thomson wrote:
>> ... On 6 January 2015 at 05:13, Stefan Eissing wrote:
>>> ... 4. SETTINGS_MAX_HEADER_LIST_SIZE as advisory It seems
>>> undefined what a client (library) should do with it. Will this
>>> not give rise to interop problems if one client respects it
>>> and fails requests immediately while another does no checks and
>>> sends them anyway? MUST a peer that announces a limit always
>>> reply with 431 to requests that exceed it?
>> 
>> Yes, this is a little nebulous, but intentionally.  If you
>> consider an end-to-end protocol with multiple hops, the value
>> that is actually enforced is the lowest value from all of the
>> servers in the path of a request.  Since each request might
>> follow different paths, the best that *this* protocol can do is
>> provide information on the value enforced by the next hop (who
>> knows if the next hop is even HTTP/2).
>> 
>> The server is not required to send 431 if the size exceeds this:
>> maybe some resources can handle streamed header fields, maybe
>> some resources are forwarded to different origin servers.
>> 
>> If you can't think of a concrete action to take based on this
>> setting, I would ignore it.
> 
> Do we have any implementations that actually do something with
> this setting?

Squid does. The combination of CONTINUATION frames and dense HPACK
compression allows a sender to force the receiver to allocate a
decompressed header block well past the GB range *per message*.
Hardware that comes with terabytes of RAM is still pretty scarce.
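
To make the arithmetic concrete, here is a minimal sketch (in Python, with illustrative names; `header_list_size`, `check_fields`, and `HeaderListTooLarge` are not any real library's API) of enforcing a decompressed header-list cap as fields are decoded, using the size formula from RFC 7540 section 6.5.2 (name length + value length + 32 octets of overhead per field):

```python
# Hypothetical sketch: cap the decompressed header list as fields arrive,
# so a receiver never has to buffer an arbitrarily large block first.

MAX_HEADER_LIST_SIZE = 16 * 1024  # value advertised via SETTINGS_MAX_HEADER_LIST_SIZE


class HeaderListTooLarge(Exception):
    """Raised when the decompressed header list exceeds the advertised cap."""


def header_list_size(fields):
    # RFC 7540 s6.5.2: size = sum over fields of
    # len(name) + len(value) + 32 octets of per-field overhead.
    return sum(len(name) + len(value) + 32 for name, value in fields)


def check_fields(fields, limit=MAX_HEADER_LIST_SIZE):
    # Check incrementally so we can bail out before buffering everything.
    size = 0
    for name, value in fields:
        size += len(name) + len(value) + 32
        if size > limit:
            # Abort this one stream (e.g. answer 431 or RST_STREAM)
            # rather than tearing down the whole connection.
            raise HeaderListTooLarge(size)
    return size
```

The point is that the limit can be applied incrementally during HPACK decoding, so an oversized message is rejected before the receiver commits gigabytes of memory to it.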

Also, I believe any "Internet of Things" implementation that wants to
limit the memory it allocates for HPACK output to below 16MB, without
first being forced to spend memory and CPU cycles on HPACK
decompression, will be sending this setting.

431 is "just" a way for HTTP/2 implementations to respond to
overly-large headers (in either compressed or decompressed form) with
a stream error instead of a connection error affecting all other
streams. For an intermediary or server multiplexing hundreds or
thousands of clients' traffic onto one connection, keeping these as
stream errors is a rather big deal; for others it's a nice
optimization.
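
The stream-vs-connection distinction can be sketched like this (a toy
`ToyConn` object stands in for a real HTTP/2 stack; the method names are
illustrative, not any real API):

```python
# Illustrative sketch only: contrast a stream-level 431 rejection with a
# connection-level error that takes down every multiplexed stream.

class ToyConn:
    """Records outgoing frames; stands in for a real HTTP/2 connection."""
    def __init__(self):
        self.sent = []

    def send_headers(self, stream_id, headers, end_stream=False):
        self.sent.append(("HEADERS", stream_id, headers, end_stream))

    def send_goaway(self, error_code):
        self.sent.append(("GOAWAY", error_code))


def reject_oversized_request(conn, stream_id):
    # Stream error: only this request fails; every other stream
    # multiplexed on the connection carries on untouched.
    conn.send_headers(stream_id, [(":status", "431")], end_stream=True)


def reject_whole_connection(conn):
    # Connection error: every client multiplexed on this connection
    # loses its in-flight streams at once.
    conn.send_goaway(error_code="ENHANCE_YOUR_CALM")
```

For an intermediary carrying thousands of clients on one upstream connection, the first function is an inconvenience for one request; the second is an outage for all of them.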

Amos

Received on Monday, 12 January 2015 05:24:58 UTC