I don't have a working attack in hand, but essentially it leaks info:
with an 8-byte minimum, it becomes easier to defeat the purpose of
padding, since one can construct a test that looks for an 8-byte jump in
output size to determine the actual output size.
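
As a rough sketch of the shape such a test might take (toy model only, not a
working attack): assume, purely hypothetically, that the padder rounds every
payload up to the next 8-byte boundary, and that an attacker can append a
filler of chosen length next to a secret of unknown length while watching the
padded size on the wire. The names padded_size, observe, and
infer_secret_length below are made up for the illustration.

    def padded_size(payload_len: int, bucket: int = 8) -> int:
        """Hypothetical padder: round the payload up to the next 8-byte bucket."""
        return ((payload_len + bucket - 1) // bucket) * bucket

    def infer_secret_length(observe, bucket: int = 8, max_probe: int = 64) -> int:
        """Grow attacker-controlled filler one byte at a time; the observed size
        jumps by `bucket` exactly when the filler pushes the total across a
        bucket boundary, which pins down the secret's true length."""
        baseline = observe(0)
        for filler in range(1, max_probe + 1):
            if observe(filler) != baseline:
                return baseline - filler + 1  # secret + filler just crossed a bucket edge
        raise RuntimeError("no jump observed within the probe budget")

    if __name__ == "__main__":
        SECRET_LEN = 29  # unknown to the attacker in this toy model

        def observe(filler_len: int) -> int:
            # What a passive observer would see on the wire under the toy padder.
            return padded_size(SECRET_LEN + filler_len)

        print(infer_secret_length(observe))  # prints 29

In practice the observe oracle would be the frame or record sizes seen on the
wire rather than a function call, but the step pattern is the same.
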
-=R
On Mon, Apr 21, 2014 at 5:30 PM, Jeff Pinner <jpinner@twitter.com> wrote:
>
>> Presumably you could take those 16K frames and split them into
>> (16K-9)-byte frames before adding padding. You could even ask the upstream servers
>> not to produce 16K frames. You could even ask the upstream servers to
>> pad properly.
>>
>
> I could only split frames if I always split frames; otherwise I would leak
> the original message size. If I always split frames, then this is equivalent
> to having an 8-byte minimum on padding. In this case we should just use a
> specific padding frame instead of adding padding fields to every data frame
> based on flags, especially since that removes the need to check whether the
> padding length exceeds the frame length.
>
> So let me ask that question then: Why is an 8-byte minimum on padding
> unacceptable?
>
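
(A sketch of the "always split" point quoted above, with made-up names
MAX_FRAME, RESERVED, and split_for_padding, just to show the cost equivalence
being described: splitting every frame to leave room for padding reserves the
same per-frame headroom that a mandatory minimum padding would.)

    MAX_FRAME = 16384   # 16K maximum frame payload
    RESERVED = 9        # the "16K-9" headroom from the quoted suggestion

    def split_for_padding(payload: bytes) -> list[bytes]:
        """Split unconditionally so every frame keeps RESERVED bytes free for padding."""
        chunk = MAX_FRAME - RESERVED
        return [payload[i:i + chunk] for i in range(0, len(payload), chunk)] or [b""]

    # Splitting only the frames that arrive at exactly 16K would itself signal
    # the original size, so the split has to happen for every frame; every frame
    # then carries RESERVED bytes of unusable capacity, the same cost profile as
    # a fixed minimum amount of padding per data frame.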