RE: HTTP/2 dynamic table update clarification

The encoder can always choose to use less memory, or no memory at all, so it's relatively safe.  The decoder has to blindly follow what the encoder says, so its memory allocation is the risk the protocol is protecting against.  The decoder expresses in SETTINGS the largest amount of memory it's willing to use.  The encoder declares in its header blocks the size it's actually using (which must be the same as or less than what the decoder allowed, or the decoder will kill the connection).  It can also change that whenever it wants, so long as it stays under the decoder's limit.
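
For illustration, a minimal encoder-side sketch of that bookkeeping (Python; the class and method names are mine, not from any real HPACK library):

    class EncoderState:
        """Hypothetical HPACK encoder-side state (illustrative only)."""

        def __init__(self) -> None:
            self.decoder_limit = 4096    # initial SETTINGS_HEADER_TABLE_SIZE
            self.table_size = 4096       # size the encoder is actually using
            self.pending_update = False  # must we signal a new size in-band?

        def on_settings_header_table_size(self, limit: int) -> None:
            # The peer's SETTINGS frame caps how big our table may be.
            self.decoder_limit = limit
            if self.table_size > limit:
                # Forced to shrink; the new size must be signalled in the
                # next header block we send.
                self.table_size = limit
                self.pending_update = True

        def set_table_size(self, desired: int) -> None:
            # The encoder may pick any size at or below the decoder's
            # limit, and may change it whenever it wants.
            self.table_size = min(desired, self.decoder_limit)
            self.pending_update = True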

So in your capture, the sender of the SETTINGS frame is expressing that it's willing to let the receiver's encoder go as high as 64KB.  You'd need to look at the HEADERS frames in the other direction to see what size is actually used.
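
What to look for there is the HPACK "dynamic table size update" instruction at the start of a header block: the '001' bit pattern followed by the size as an integer with a 5-bit prefix.  A minimal sketch of that encoding (Python; the function name is mine):

    def dynamic_table_size_update(new_size: int) -> bytes:
        # HPACK dynamic table size update: '001' pattern plus an integer
        # with a 5-bit prefix (RFC 7541, Sections 5.1 and 6.3).
        pattern, max_prefix = 0x20, 0x1F
        if new_size < max_prefix:
            return bytes([pattern | new_size])
        out = [pattern | max_prefix]
        rest = new_size - max_prefix
        while rest >= 128:
            out.append((rest % 128) | 0x80)
            rest //= 128
        out.append(rest)
        return bytes(out)

    print(dynamic_table_size_update(65536).hex())  # "3fe1ff03"

So an update all the way up to 64KB would show up as the bytes 3f e1 ff 03 at the front of a header block.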

From: Semyon Golubcov [mailto:semyon.golubcov@mail.ru]
Sent: Monday, July 10, 2017 4:43 PM
To: ietf-http-wg@w3.org
Subject: HTTP/2 dynamic table update clarification

Hello, I'm an amateur developer from Russia.  I'm trying to understand the HTTP/2 protocol by looking at a Wireshark dump of the Mozilla Firefox browser.
The RFC has the following statement for the dynamic table size:

SETTINGS_HEADER_TABLE_SIZE (0x1):  Allows the sender to inform the
      remote endpoint of the maximum size of the header compression
      table used to decode header blocks, in octets.  The encoder can
      select any size equal to or less than this value by using
      signaling specific to the header compression format inside a
      header block (see [COMPRESSION]).  The initial value is 4,096
      octets.
The initial size for both encoder and decoder is 4096 bytes, according to the RFC.
In the SETTINGS frame in Wireshark, I can see the new table size passed to the ENDPOINT (google.com in this case):

0000   00 00 12 04 00 00 00 00 00 00 01 00 01 00 00 00
0010   04 00 02 00 00 00 05 00 00 40 00

00 01 00 01 00 00 is the pattern for SETTINGS_HEADER_TABLE_SIZE = 65536 (identifier 0x0001, value 0x00010000).
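
For reference, a minimal sketch (Python, standard library only) that parses those 27 bytes into the three settings they contain:

    import struct

    raw = bytes.fromhex(
        "000012040000000000"  # header: length=0x12, type=4 (SETTINGS), flags=0, stream=0
        "000100010000"        # SETTINGS_HEADER_TABLE_SIZE (0x1) = 65536
        "000400020000"        # SETTINGS_INITIAL_WINDOW_SIZE (0x4) = 131072
        "000500004000"        # SETTINGS_MAX_FRAME_SIZE (0x5) = 16384
    )

    length = int.from_bytes(raw[0:3], "big")
    payload = raw[9:9 + length]
    for off in range(0, len(payload), 6):
        ident, value = struct.unpack_from(">HI", payload, off)
        print(f"setting 0x{ident:x} = {value}")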
What I can't understand is this: does it actually tell the ENDPOINT that the dynamic table used inside the browser to decode headers from this ENDPOINT is 65536 bytes long, or does it tell the ENDPOINT that the ENDPOINT's dynamic table size should be 65536?
And in reverse, I assume that the ENDPOINT must send SETTINGS_HEADER_TABLE_SIZE to tell the browser about its own dynamic table used for decoding headers, but I don't see that setting sent back by the ENDPOINT.  Can someone explain this?
Also, there is a signal for a dynamic table size update, mentioned in the RFC, which is sent inside the HEADERS frame:

 A dynamic table size update starts with the '001' 3-bit pattern,
   followed by the new maximum size, represented as an integer with a
   5-bit prefix (see Section 5.1).

   The new maximum size MUST be lower than or equal to the limit
   determined by the protocol using HPACK.  A value that exceeds this
   limit MUST be treated as a decoding error.  In HTTP/2, this limit is
   the last value of the SETTINGS_HEADER_TABLE_SIZE parameter (see
   Section 6.5.2 of [HTTP2]) received from the decoder and acknowledged
   by the encoder (see Section 6.5.3 of [HTTP2]).
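
On the decoder side, that limit check is straightforward; a sketch (Python; the function name is mine, and COMPRESSION_ERROR is HTTP/2's error code for HPACK failures):

    def on_dynamic_table_size_update(new_size: int, advertised_limit: int) -> int:
        # RFC 7541, Section 6.3: a size above the last acknowledged
        # SETTINGS_HEADER_TABLE_SIZE is a decoding error; HTTP/2 turns
        # that into a connection error of type COMPRESSION_ERROR.
        if new_size > advertised_limit:
            raise ValueError("COMPRESSION_ERROR: table size update exceeds limit")
        return new_size  # resize the decoder's table to this size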
In that quote there is the phrase "received from the decoder and acknowledged by the encoder", so is this signal sent to limit the encoder's dynamic table size?  I'm completely lost, and it is not obvious from the Wireshark captures how this is handled correctly.


--
Semyon Golubcov
