I don’t see what an 8-byte minimum, or dedicated pad frames, would change at all. Such an implementation is already conforming, just not of utmost quality. Checking padding length against frame length is just one simple input validation step among many others.
The way to always allow at least one extra padding byte, without growing frames, is to require the content of any frame to leave at least 7 bytes of room for potential padding (six padding bytes plus the one-byte PAD_LOW field). Exactly 8 bytes can always be added by splitting the frame. The cost imposed on unpadded data transfer is 7/16383 ≈ 0.04%.
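For concreteness, here is a quick sketch of that arithmetic. The constant 16383 is the 2^14 − 1 maximum frame payload; the function names and the 7-byte breakdown (one PAD_LOW byte plus six padding bytes) are just my illustration, not anything from a draft.

```python
# Sketch of the slack arithmetic described above. Names are illustrative.
MAX_PAYLOAD = 16383  # 2**14 - 1, largest DATA frame payload
SLACK = 7            # 1-byte PAD_LOW field + up to 6 bytes of padding

def max_content(payload_limit: int = MAX_PAYLOAD, slack: int = SLACK) -> int:
    """Most content a sender may put in one frame while preserving slack."""
    return payload_limit - slack

def overhead(payload_limit: int = MAX_PAYLOAD, slack: int = SLACK) -> float:
    """Worst-case throughput cost of the reserved slack, as a fraction."""
    return slack / payload_limit

print(max_content())               # 16376 content bytes per full frame
print(round(100 * overhead(), 2))  # 0.04 (percent)
```

Pads of up to 6 bytes then always fit in place; anything larger can be absorbed by splitting the frame, since each half gets its own slack.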
On the other hand, if an origin or client isn’t using padding, there’s no reason for any such restrictions for the preservation of padding that doesn’t exist. Padding is strictly optional in the first place.
So, why not just say that 7 bytes of slack are required in every frame, *if* the sender wants to give intermediaries good flexibility in re-padding the stream? No matter what we do, some endpoints and intermediaries are going to ignore security, and getting a perfect end-to-end connection will take some alignment of the stars.
On 2014-04-22, at 10:06 AM, Roberto Peon <grmocg@gmail.com> wrote:
> I don't have a working attack in hand, but essentially it leaks info:
> With an 8-byte minimum, it becomes easier to defeat the purpose of padding: one constructs a test that looks for an 8-byte jump in output size to determine the actual output size.
> -=R
>
>
> On Mon, Apr 21, 2014 at 5:30 PM, Jeff Pinner <jpinner@twitter.com> wrote:
>
> I could only split frames if I always split frames, otherwise I would leak the original message size. If I always split frames then this is equivalent to having an 8-byte minimum on padding. In that case we should just use a specific padding frame instead of adding padding fields to every data frame based on flags. Especially since that removes the need to check whether the padding length exceeds the frame length.
>
> So let me ask that question then: Why is an 8-byte minimum on padding unacceptable?
>