Re: PoW (Re: Attack research on HTTP/2 implementations)

Thanks Willy, well stated.

The way out of the medieval internet is not for everyone to move behind the largest city walls. ;-)

- Stefan

> On 04.09.2021 at 07:44, Willy Tarreau <w@1wt.eu> wrote:
> 
> This whole thread is particularly bizarre, but let me share my experience
> here.
> 
>> On Fri, Sep 03, 2021 at 11:53:10AM -0400, Erik Aronesty wrote:
>> a flexible, intelligent protocol could make it infeasible for an
>> attacker to bring down a server, while allowing regular traffic to
>> proceed virtually unhindered
> 
> This is not true in practice. What matters is not the client-to-server
> ratio of work, but the computation delay inflicted on clients so that
> the server still has some resources left to work. With something like
> PoW, you do not prevent clients from attacking too fast, you only shift
> the offset at which the server will have to do lots of work. PoW is only
> a tool for delaying clients, not a solution. It is useful for protecting
> against small attacks but not against large ones.
> 
> Let's say your server can handle 10k connections per second. This means
> that regardless of the number of clients, you must make sure the server
> does not deal with more than 10k valid connections per second. When you
> have 10k clients in front of you, that means inflicting an average of
> one second of work on each of them to keep the incoming rate low. This
> can be tolerable to a valid client during an attack. I've personally
> gone as high as 0.5s, which really feels slow, but compared to a dead
> server it's not that bad.
> 
> But once the number of attackers increases, the ratios are no longer
> acceptable. Say you have 100k clients: you need to inflict 10 seconds
> of slowdown on them. Suddenly you slow everyone down to a level that
> valid clients are no longer willing to accept. And guess what? The
> attackers will not mind waiting through 10 seconds of computation to
> finally access your site, because if their goal is to take your site
> off-line, what matters to them is not that the servers are down but
> that clients cannot access them. By generating the extra load that
> forces you to inflict very long PoW on clients, they simply succeed.
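> 
> A rough sketch of that arithmetic, assuming clients arrive at a uniform
> rate (the figures are the ones above; the helper name is purely
> illustrative):
> 
>     # Back-of-the-envelope: average PoW delay needed per client to cap
>     # the valid connection rate at the server's capacity.
>     def required_pow_delay(clients: int, capacity_per_s: int) -> float:
>         return clients / capacity_per_s
> 
>     print(required_pow_delay(10_000, 10_000))   # 1.0 s  -> tolerable
>     print(required_pow_delay(100_000, 10_000))  # 10.0 s -> users give up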
> 
> This is why I'm saying that it only shifts the offset and does not
> solve everything. In other PoW models, you don't care much about the
> time it takes to solve the challenge. On the web there is an upper
> limit, which is the users' acceptance, and it is extremely low (on the
> order of one second). And from an attacker's perspective, finding
> enough clients to push past this limit is trivial.
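> 
> Put differently, the acceptance ceiling bounds the attack size PoW can
> absorb at all. A minimal sketch with the same assumed figures:
> 
>     # Beyond capacity_per_s * max_delay_s simultaneous clients, the
>     # required PoW delay exceeds what valid users will tolerate.
>     def max_absorbable_clients(capacity_per_s: int, max_delay_s: float) -> int:
>         return int(capacity_per_s * max_delay_s)
> 
>     print(max_absorbable_clients(10_000, 1.0))  # 10000 -- a modest botnet already exceeds this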
> 
> In addition, the simple act of delivering the challenge to the client
> requires some work, and this work is a non-negligible fraction of the
> TLS computation (typically 1/10 to 1/100), so it remains very easy
> to take down a server by forcing it to just deliver many challenges.
> In that case the client-to-server work ratio comes back to
> approximately 1:1. This is precisely why attacks such as SYN floods,
> ICMP floods and UDP floods remain so popular these days.
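> 
> A sketch of that degraded ratio (the 1/10 to 1/100 range is the one
> above; the per-request attacker cost is an assumption picked for
> illustration):
> 
>     # The attacker requests challenges but never solves them, so each
>     # side pays roughly one cheap operation per exchange and the
>     # asymmetry PoW was supposed to create disappears.
>     TLS_HANDSHAKE_COST = 1.0                            # baseline unit
>     CHALLENGE_DELIVERY_COST = TLS_HANDSHAKE_COST / 50   # within 1/10..1/100
>     BARE_REQUEST_COST = TLS_HANDSHAKE_COST / 50         # assumed attacker cost
> 
>     print(BARE_REQUEST_COST / CHALLENGE_DELIVERY_COST)  # 1.0 -> back to ~1:1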
> 
> When you only want to deal with low-capacity attacks (those that PoW
> can help against), there are other, much simpler approaches that
> already shift the client-to-server work balance. Just use ECDSA
> certificates. From what I've seen in the field, they require about 5
> times less work on the server, while the client does roughly 15 times
> more work than the server. This alone significantly raises the
> server's limits and brings them closer to those of the other
> subsystems involved (e.g. the TCP stack and/or stateful firewalls).
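> 
> The effect of that switch in numbers, as a minimal sketch (the 5x and
> 15x ratios are the field observations above; the baseline capacity is
> the 10k/s figure from earlier):
> 
>     # ECDSA vs RSA, using the ratios quoted above (illustrative units).
>     RSA_SERVER_COST = 1.0
>     ECDSA_SERVER_COST = RSA_SERVER_COST / 5     # ~5x less server work
>     ECDSA_CLIENT_COST = ECDSA_SERVER_COST * 15  # client does ~15x the server's work
> 
>     # Handshake capacity scales inversely with per-handshake cost:
>     print(10_000 * RSA_SERVER_COST / ECDSA_SERVER_COST)  # 50000.0 handshakes/s
>     print(ECDSA_CLIENT_COST / ECDSA_SERVER_COST)         # 15.0 client:server ratio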
> 
>> but i'm arguing with people who have conflicts of interest, so i'm done.
> 
> This assertion is particularly strange considering that all of us here
> share an interest in keeping our respective components interoperable
> around the same protocol definition, in order to minimize our bug
> reports and keep users happy.
> 
> However, keep in mind that your PoW approach would only be effective
> for very large operators, those that can deploy 100s to 1000s of
> servers to absorb moderately large botnets, and would not help
> single-server sites, which remain trivial to knock down. I personally
> don't think we should encourage all internet hosting to be concentrated
> in ultra-large companies that can afford to protect it, and that is
> what your approach would inevitably result in, because that is the
> only place it can be demonstrably effective. That is not the internet
> I'm dreaming of, personally.
> 
> Regards,
> Willy
> 
