
HTTP/2.0 Max Frame Size

From: Osama Mazahir <OSAMAM@microsoft.com>
Date: Wed, 23 Jan 2013 07:54:46 +0000
To: "ietf-http-wg@w3.org" <ietf-http-wg@w3.org>
Message-ID: <B33F11E188FEAB49A7FAF38BAB08A2C001CC12B9@TK5EX14MBXW601.wingroup.windeploy.ntdev.microsoft.com>

The current draft (http2-01) says the following about control frame sizes:
      Note that full length control frames (16MB) can be large for
      implementations running on resource-limited hardware.  In such
      cases, implementations MAY limit the maximum length frame
      supported.  However, all implementations MUST be able to receive
      control frames of at least 8192 octets in length.

What are your thoughts on having small control frames (e.g. control frames cannot exceed NNN octets and all implementations must be able to handle frames of NNN octets)?

Max Frame Size Discovery
As currently written, there is no way for an endpoint to know upfront the maximum frame size the other side will accept.  So if an endpoint emits a SYN_STREAM, for example, and gets back a RST_STREAM(FRAME_TOO_LARGE) then it either gives up or retries with a smaller SYN_STREAM (e.g. by splitting the initial Name/Value Header Block into multiple HEADERS frames).  I assume implementations will not get that crazy and will instead just give up; not to mention that automatically retrying non-idempotent operations is badness.  Paranoid implementations may split their Name/Value Header Block, from the get-go, into multiple HEADERS frames, each less than 8192 octets.
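The "paranoid" splitting strategy is mechanically simple; a minimal sketch (in Python, purely illustrative, since the draft does not prescribe any splitting algorithm):

```python
# Hypothetical sketch: splitting an already-compressed Name/Value Header
# Block into chunks small enough that each HEADERS frame stays within the
# 8192-octet floor the draft guarantees all receivers can accept.
# The function and constant names are illustrative, not from the draft.

MAX_SAFE_FRAME = 8192  # minimum frame size all implementations MUST accept


def split_header_block(block: bytes, limit: int = MAX_SAFE_FRAME):
    """Yield payload chunks of the header block, each at most `limit` octets."""
    for offset in range(0, len(block), limit):
        yield block[offset:offset + limit]


# Usage: each chunk would be carried in its own HEADERS frame; only the
# final frame would set whatever "end of headers" flag the protocol defines.
chunks = list(split_header_block(b"x" * 20000))
```

The cost of this approach is extra frames on the wire for every request, paid up front just to avoid an error the sender cannot predict.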

I realize that current HTTP/1.1 implementations have limits on how big the headers can get, but I view that as a separate issue from max frame size.  That is, if an implementation rejects HTTP/1.1 requests that have more than 16KB of headers, then the equivalent in HTTP/2.0 would be a 16KB limit enforced after decompression of the headers.

Some frames (e.g. SETTINGS) do not have a reply to indicate that the frame was too large.  It is unclear what an implementation should do in that case, which can lead to a mess of divergent implementation behaviors.

One way to deal with this ambiguity would be to have each endpoint advertise its max frame size.  Or keep it simple by the spec mandating the max frame size (e.g. maximum control frame size is 8192 octets).
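To make the advertisement idea concrete, here is a sketch of encoding such a setting as an id/value pair, in the style of the SETTINGS frame.  The setting identifier is hypothetical; draft http2-01 defines no such setting, and this only illustrates the proposal:

```python
import struct

# Hypothetical setting ID for advertising the max frame size an endpoint
# will accept. Not defined by draft http2-01; shown for illustration only.
SETTINGS_MAX_FRAME_SIZE = 5


def encode_setting(setting_id: int, value: int) -> bytes:
    """Encode one id/value pair as two 32-bit fields in network byte order."""
    return struct.pack("!II", setting_id, value)


# An endpoint willing to accept frames up to 16384 octets would send:
payload = encode_setting(SETTINGS_MAX_FRAME_SIZE, 16384)
```

Either way, the sender would know before emitting a frame whether it will be accepted, instead of discovering the limit via RST_STREAM(FRAME_TOO_LARGE) or silence.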

Discrete Frame Processing
I suspect most implementations will receive and buffer, from the TCP socket, an entire frame before processing it.  This allows for clean separation between frame receiving and frame processing.  You usually want a full control frame before propagating the results up the stack.  This means an implementation would have to buffer a 10MB control frame, for example, before acting on it.  Or code up more complex processing as part of the receiving logic so that it detects the "too large" frame, emits an error frame (if a suitable type exists), and then puts the parser/receiver into drain mode to pull and drop, from the TCP socket, the remainder of the frame.
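The drain-mode receive loop described above might look something like this sketch.  The 8-octet header with a 24-bit length field follows the draft's framing; everything else (function names, the 8192-octet limit, the None return) is an assumption for illustration:

```python
MAX_FRAME = 8192  # assumed local limit; the minimum the draft requires


def read_exact(sock, n):
    """Read exactly n bytes from a socket-like object."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-frame")
        buf += chunk
    return buf


def receive_frame(sock):
    header = read_exact(sock, 8)
    length = int.from_bytes(header[5:8], "big")  # 24-bit length field
    if length <= MAX_FRAME:
        return header, read_exact(sock, length)
    # Drain mode: pull and drop the oversized payload so the byte stream
    # stays in sync, then let the caller emit an error frame if one exists.
    remaining = length
    while remaining:
        remaining -= len(read_exact(sock, min(4096, remaining)))
    return None
```

Note how the error path entangles socket-level reading with frame-level policy, which is exactly the complexity a mandated small max frame size would avoid.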
Received on Wednesday, 23 January 2013 07:55:45 UTC
