Re: HTTP HashCash

On 5/22/25 13:37, Watson Ladd wrote:
> On Thu, May 22, 2025 at 1:15 PM Demi Marie Obenour
> <demiobenour@gmail.com> wrote:
>>
>> On 5/22/25 08:59, Ben Schwartz wrote:
>>> In general, the IETF has been skeptical of "proof of work" designs that deliberately waste CPU time.  As an alternative, you may want to review Privacy Pass (RFC 9576-9578), which allows an HTTP Origin to require clients to expend a different kind of resource ("tokens") that may be limited, without learning the clients' identities.
>>
>> Does that just move the problem to the token issuer?
> 
> And from the shameless plug department, that is why privacypass exists!
> 
> Token issuers can have much better ways to issue limited use tokens:
> they may be aware of hardware support on the client to limit identify
> proliferation, or existing relationships that make bypassing
> expensive. These capabilities cannot usually be expressed over the
> Internet without significant privacy impacts (but read
> https://www.usenix.org/conference/soups2022/presentation/whalen for an
> alternative, and the accompanying SAC 21 paper to see how the crypto
> is done (in a way that's rapidly deployable: production at Internet
> scale with browser support would make different tradeoffs)).

My concern is that these methods are going to be used to deny service to
those using non-attestable open systems such as those running desktop Linux,
or to systems running alternate operating systems such as GrapheneOS.  For
users to be denied access to a website because of this, or to be forced to
upload a government-issued ID (which they might not have), would be very,
*very* bad.  Proof of work is indeed incredibly inefficient, but it does
not have these risks, as it can be passed by *any* device with enough
time or processing power.
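To illustrate that point (this sketch is mine, not from the thread): a
hashcash-style challenge only requires a client to find a nonce whose
hash meets a difficulty target, and verification costs the server a
single hash. The challenge string and difficulty below are made-up
example values.

```python
import hashlib
import itertools

def solve_pow(challenge: bytes, difficulty_bits: int) -> int:
    """Find a nonce such that SHA-256(challenge || nonce) has at least
    `difficulty_bits` leading zero bits. Any device can do this given
    enough time; expected work is about 2**difficulty_bits hashes."""
    target = 1 << (256 - difficulty_bits)  # digests below this value qualify
    for nonce in itertools.count():
        digest = hashlib.sha256(challenge + str(nonce).encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify_pow(challenge: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Verification is one hash, regardless of how hard solving was."""
    digest = hashlib.sha256(challenge + str(nonce).encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

# Example: a 16-bit difficulty takes roughly 65,000 hashes to solve.
nonce = solve_pow(b"example-challenge", 16)
assert verify_pow(b"example-challenge", nonce, 16)
```

Unlike attestation or ID checks, nothing here depends on what hardware
or operating system the client runs; only on its willingness to spend
cycles.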

Any serious proposal to deploy Privacy Pass for protection from scrapers
needs to specify how it is going to ensure that users can access sites
without having to use server-approved hardware, pass a CAPTCHA, or upload
a government-issued ID.  Otherwise, it will deny legitimate users access
to websites that they should have access to.  A visually disabled
user running Linux on a PinePhone who doesn't have access to government
documents still needs to be granted access.

That said, I do see a significant problem with building proof of work
into HTTP: while it forces clients to consume additional CPU time, it
does *not* force them to run a full-fledged browser.
Scrapers strongly prefer to *not* run full browsers, as running a full
browser significantly increases memory requirements.
See https://old.reddit.com/r/selfhosted/comments/1jy6mug/fail2ban_400_sendmail_blocks_in_12_hours/mmxxd1v/
for where I got this information.
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)

Received on Friday, 23 May 2025 01:25:34 UTC