Re: h2 padding

On Wed, Sep 03, 2014 at 02:22:29PM -0500, Jason Greene wrote:
> On Sep 3, 2014, at 2:00 PM, Brian Smith <> wrote:
> > On Tue, Sep 2, 2014 at 11:34 PM, Poul-Henning Kamp <> wrote:
> >> Brian Smith writes:
> >>> Consider an implementation that sends every frame in its own TCP
> >>> packet, perhaps with a 1 minute delay between frames. [...]
> >> 
> >> If this was a joke, you forgot the smiley.
> >> 
> >> If it wasn't, please explain why we should even think about entertaining
> >> the convenience of such an implementation,
> > 
> > Pretty sure I am being trolled here, but in case I'm not: It is common
> > for "security people" to give an exaggerated example to make a
> > vulnerability obvious, in order to save time debating things like "is
> > a millisecond too small to matter?" You can replace "1 minute" with "1
> > second" or virtually any other non-zero period of time and you still
> > have the same problem. Similarly, the problem still holds even if
> > every frame isn't in its own TCP packet, as long as any frame gets
> > split according to some function of the length of the padding of a
> > frame.
> I guess I don't see how this makes a difference? If an implementation has the
> ability to fit a frame and its payload on one packet, doesn't it have the
> ability to fit two frames on the same packet? Further, there is really no
> guarantee that an H2 frame will not be split in a way that defeats padding in
> the first place.

There are many unoptimized implementations of many protocols which simply do:

    write(socket, frame, length)

with TCP_NODELAY set, resulting in exactly one packet + PUSH flag sent for
each frame.

You can even see this with some HTTP servers sending headers in multiple
packets. I think this is the case Brian cares about.
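As a concrete sketch of that pattern (Python here for brevity; the loopback
setup and the frame bytes are purely illustrative, not real h2 framing):

```python
import socket

def send_frames(sock, frames):
    # TCP_NODELAY disables Nagle's algorithm, so each write is flushed to
    # the wire immediately -- typically one segment (PSH set) per frame.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    for frame in frames:
        sock.sendall(frame)  # the write(socket, frame, length) pattern

def demo():
    # Loopback connection purely to exercise the sender.
    frames = [b"frame-1", b"frame-2-with-padding"]
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    cli = socket.socket()
    cli.connect(srv.getsockname())
    conn, _ = srv.accept()
    send_frames(cli, frames)
    cli.close()
    received = b""
    while chunk := conn.recv(4096):
        received += chunk
    conn.close()
    srv.close()
    return received

print(demo())
```

An on-path observer watching such a sender sees one segment per frame, so
segment lengths directly reveal per-frame sizes, padding included.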


Received on Thursday, 4 September 2014 06:26:22 UTC