Re: Fragmentation for headers: why jumbo != continuation.

In the case of a server, it must interpret the headers. If it can do so in a
streaming fashion, it will; if not, it must buffer until it has the entire
header set. That is true of fragmented and non-fragmented headers alike.
In other words, the requirement to buffer (or not) is unchanged and is
implementation dependent.
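
To make that receiver-side point concrete, here is a minimal sketch in Go of
the two options a receiver has. The type and function names are made up for
the example (nothing here comes from the h2 spec or any real codebase); the
point is only that the choice between streaming and buffering belongs to the
receiver either way.

// receiver_sketch.go: hypothetical illustration of the receiver-side
// choice described above; none of these names come from the h2 spec.
package main

import "fmt"

// HeaderFragment is one piece of a (possibly fragmented) header block.
type HeaderFragment struct {
	Payload    []byte
	EndHeaders bool // set on the final fragment of the block
}

// streamingReceive hands each fragment to decode as it arrives, so the
// receiver's memory commitment is roughly one fragment at a time.
func streamingReceive(fragments <-chan HeaderFragment, decode func([]byte)) {
	for f := range fragments {
		decode(f.Payload)
		if f.EndHeaders {
			return
		}
	}
}

// bufferedReceive accumulates the whole header set before decoding, so
// the memory commitment is the full header block. Either way, the choice
// is the receiver's and is the same with or without fragmentation.
func bufferedReceive(fragments <-chan HeaderFragment, decode func([]byte)) {
	var block []byte
	for f := range fragments {
		block = append(block, f.Payload...)
		if f.EndHeaders {
			break
		}
	}
	decode(block)
}

func main() {
	mkFrags := func() <-chan HeaderFragment {
		ch := make(chan HeaderFragment, 2)
		ch <- HeaderFragment{Payload: []byte(":method: GET\n")}
		ch <- HeaderFragment{Payload: []byte(":path: /\n"), EndHeaders: true}
		close(ch)
		return ch
	}
	streamingReceive(mkFrags(), func(b []byte) { fmt.Print(string(b)) })
	bufferedReceive(mkFrags(), func(b []byte) { fmt.Print(string(b)) })
}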

A proxy is both a sender and a receiver.
Allowing fragmentation allows the sender half of the proxy to reduce its
memory commitment.
Again, nothing changes on the receiver half, so the proxy's overall memory
commitment is reduced.
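
As a rough sketch of what the sender half gains (again with made-up names,
not real h2 frame handling): the proxy can forward each header fragment as
soon as it arrives instead of re-buffering the whole block before sending it
on.

// proxy_sketch.go: hypothetical illustration of the proxy point above;
// the frame/type names are invented for the example.
package main

import "fmt"

// Frame stands in for a HEADERS/CONTINUATION-style header fragment.
type Frame struct {
	Payload    []byte
	EndHeaders bool
}

// relayHeaders forwards each fragment as soon as it arrives, so the
// sending half of the proxy never holds more than one fragment of the
// header block for this stream.
func relayHeaders(in <-chan Frame, send func(Frame)) {
	for f := range in {
		send(f) // forward immediately; no accumulation on the send side
		if f.EndHeaders {
			return
		}
	}
}

func main() {
	in := make(chan Frame, 2)
	in <- Frame{Payload: []byte("x-a: 1\n")}
	in <- Frame{Payload: []byte("x-b: 2\n"), EndHeaders: true}
	close(in)
	relayHeaders(in, func(f Frame) { fmt.Print(string(f.Payload)) })
}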

In other words, fragmented headers lead to equivalence at servers, and a
possible improvement at proxies.
-=R


On Fri, Jul 11, 2014 at 9:55 PM, Amos Jeffries <squid3@treenet.co.nz> wrote:

> On 12/07/2014 2:43 p.m., Greg Wilkins wrote:
> > On 12 July 2014 12:11, Roberto Peon wrote:
> >
> >> I don't like interleaving-- it multiplicatively increases the DoS
> surface
> >> (and makes it significantly worse than it was with HTTP/1)
> >
> >
> > Ah interesting.     I don't think I had understood this objection before
> -
> > as I thought that the desire to fragment was driven by the desire to
> > interleave for QoS. Hence the push to drop the reference set.    But in
> > this case you want to fragment just to avoid buffering in the sender.
>
> I think a critical detail that needs to be acknowledged by everyone
> right now is that the DoS risk is from the attacker in role of sender.
>
> Decisions which reduce the costs for senders by shifting them to
> recipients actually increase the DoS vulnerability of the whole system.
>
> Roberto, that tradeoff is a key detail within the Greg et al. proposal.
> Hence by requiring low sender/high recipient costs you are actually
> arguing for increasing the DoS vulnerability in h2.
>
>
> Amos
>
>
>