Threat models, data portability, and ActivityPub

I recently finished a series of blog posts cataloguing threat models for
data portability:

<https://dtinit.org/blog/2024/01/16/threat-model-pt-one>
<https://dtinit.org/blog/2024/03/12/access-content-spoofing-threats>

I expect it's a little bit of heavy reading, so maybe don't read it!  It's
over-general!  But I hoped it would be useful as a list and explainer of
things to consider.

So here's the list of general threats, applied specifically to the use case
of moving a user's account and activity from one ActivityPub server to
another via server-to-server pull.

*Disclosure and Indirect disclosure* -  ActivityPub doesn’t seem to require
HTTPS/TLS.  Is it assumed? Should it be required for data portability use
cases?

*Tampering* - Again, should this require TLS?
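
For both TLS questions: whatever the spec ends up saying, a destination
server can enforce this unilaterally today by refusing to pull from anything
but an https: URL and leaving certificate verification on, which also covers
tampering in transit.  A minimal sketch in Python, assuming the requests
library (the function name and header choices are mine, not from any spec):

    import urllib.parse
    import requests  # assumed; verifies TLS certificates by default

    def fetch_collection_page(url: str) -> dict:
        """Fetch one ActivityStreams collection page, refusing plain HTTP."""
        if urllib.parse.urlsplit(url).scheme != "https":
            raise ValueError(f"refusing non-HTTPS source: {url}")
        resp = requests.get(
            url,
            headers={"Accept": "application/activity+json"},
            timeout=30,             # don't hang on a slow or hostile source
            allow_redirects=False,  # don't follow a redirect down to http://
        )
        resp.raise_for_status()
        return resp.json()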

*Non-repudiation* - To the extent that the destination server trusts the
source server, repudiation shouldn’t be a big problem.  Should source
server certificates be required?

*Denial of Service*
* It must be possible for the data transfer activity itself to be halted or
throttled on either side.  One proposal: the destination initiates the
transfer and requests content items, so it can stop any time it reaches
quotas or other limits.  Meanwhile, the source server can throttle request
handling.  This is often done at lower layers than the transfer protocol, so
it may not need any solving at the ActivityPub layer.  (See the first sketch
after this list.)

* Could bulk activity transfer be used to aid DoS attacks elsewhere?  E.g.
when bulk-transferring activities, the destination service should NOT
process each new item as a fresh outbox post, targeting and delivering it to
the actors in its to, bto, cc, bcc or audience fields.  (See the second
sketch after this list.)
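
To make the first point concrete, here is a rough sketch of a
destination-initiated pull loop that stops at its own quotas; the source
side can simply rate-limit or reject individual requests (HTTP 429,
connection limits, etc.) without any new ActivityPub machinery.  The quota
numbers and function names below are illustrative only:

    MAX_ITEMS = 10_000           # illustrative destination-side quotas
    MAX_BYTES = 500_000_000

    def pull_archive(first_page_url: str, fetch_page, store_item) -> None:
        """Walk a paged ActivityStreams collection, stopping at local quotas."""
        item_count, byte_count = 0, 0
        url = first_page_url
        while url:
            page = fetch_page(url)   # e.g. fetch_collection_page() above
            for item in page.get("orderedItems", []):
                item_count += 1
                byte_count += len(str(item))
                if item_count > MAX_ITEMS or byte_count > MAX_BYTES:
                    return           # the destination stops whenever it wants
                store_item(item)
            url = page.get("next")   # follow pagination until exhausted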
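
And for the second point, the import path should be a bare store operation,
never the normal outbox-POST side-effect path.  The contrast I have in mind,
sketched with made-up function names (deliver() stands in for whatever a
server normally does with the addressing fields):

    ADDRESSING_FIELDS = ("to", "bto", "cc", "bcc", "audience")

    def as_list(value):
        """ActivityStreams properties may hold a single value or a list."""
        if value is None:
            return []
        return value if isinstance(value, list) else [value]

    def import_activity(activity: dict, store) -> None:
        """Persist a transferred activity WITHOUT re-delivering it.

        Fanning a bulk import out to every actor in the addressing fields
        would let years of old activity be replayed as fresh deliveries
        against third-party inboxes.
        """
        store(activity)  # keep the addressing fields for visibility checks,
                         # but do not enqueue any deliveries based on them

    def handle_outbox_post(activity: dict, store, deliver) -> None:
        """The ordinary path, for contrast: store AND deliver."""
        store(activity)
        for field in ADDRESSING_FIELDS:
            for recipient in as_list(activity.get(field)):
                deliver(activity, recipient)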

*Elevation of privilege* - Servers accepting new data in a bulk transfer
must do the same checks for buffer overflows or similar attacks that they
would do for any new data.  This doesn’t need protocol work, just a note.
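
Roughly the sort of check I mean, with arbitrary limits (nothing here is a
protocol requirement, just ordinary input hygiene applied to the bulk path):

    import json

    MAX_DOCUMENT_BYTES = 1_000_000   # arbitrary illustrative limits
    MAX_STRING_FIELD = 100_000

    def basic_sanity_check(raw: bytes) -> dict:
        """Apply the same size/shape checks used for any inbound activity."""
        if len(raw) > MAX_DOCUMENT_BYTES:
            raise ValueError("document too large")
        activity = json.loads(raw)
        if not isinstance(activity, dict):
            raise ValueError("expected a JSON object")
        for key, value in activity.items():
            if isinstance(value, str) and len(value) > MAX_STRING_FIELD:
                raise ValueError(f"field {key!r} is suspiciously large")
        return activity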

*Non-compliance* - Servers accepting new data must review that data for
their own compliance requirements, such as meeting CSAM regulations.

*Harmful content* - Servers accepting new data must review that data for
their own harmful content policies.

*Spoofing* - This needs careful protocol design for the S2S part: defining
how an ActivityPub server makes sure that another ActivityPub server is who
it says it is when it asks for user-approved access to non-public data.  I’m
assuming *user* anti-spoofing is handled through existing authentication
mechanisms.
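
For the S2S part, the mechanism most existing fediverse servers already use
for authenticated fetches is HTTP Signatures (the cavage draft): the
requesting server signs its request with a key whose public half is
published on an actor document it controls, and the source verifies the
signature before returning non-public data.  ActivityPub itself doesn't
mandate this, so treat the following as one possible shape rather than the
answer; the helper names are mine, it assumes rsa-sha256 keys and
lower-cased header names, and Date/freshness checking is omitted:

    import base64
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    def parse_signature_header(header: str) -> dict:
        """Turn keyId="...",headers="...",signature="..." into a dict."""
        parts = {}
        for field in header.split(","):
            name, _, value = field.strip().partition("=")
            parts[name] = value.strip('"')
        return parts

    def build_signing_string(sig: dict, method: str, path: str,
                             headers: dict) -> str:
        """Reconstruct the string the remote server claims to have signed."""
        lines = []
        for name in sig["headers"].split():
            if name == "(request-target)":
                lines.append(f"(request-target): {method.lower()} {path}")
            else:
                lines.append(f"{name}: {headers[name]}")
        return "\n".join(lines)

    def verify_request(method: str, path: str, headers: dict, fetch_actor) -> str:
        """Return the verified keyId, i.e. which remote server this really is."""
        sig = parse_signature_header(headers["signature"])
        actor = fetch_actor(sig["keyId"])  # fetch the remote actor doc, over HTTPS
        public_key = serialization.load_pem_public_key(
            actor["publicKey"]["publicKeyPem"].encode()
        )
        public_key.verify(                 # raises InvalidSignature on failure
            base64.b64decode(sig["signature"]),
            build_signing_string(sig, method, path, headers).encode(),
            padding.PKCS1v15(),
            hashes.SHA256(),
        )
        return sig["keyId"]

The open protocol question is then policy rather than mechanism: which keys
a source server will honour for a given user's approved transfer.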

*Permissions and access controls* - Part of this is unilateral on each
side.  On the source side, the source server should ask the user to confirm
the grant of access to private information when a data portability request
is made.  On the destination side, the destination server should confirm
whether the imported data is going to be public or not.  However, we do
need to make sure that silent and private activities are transferred as
such.
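
Concretely, the check I have in mind on the destination side is to classify
each imported object by its original addressing and carry that visibility
over, rather than letting the import default to public.  A sketch using the
as:Public convention (the visibility labels and function names are mine):

    PUBLIC = "https://www.w3.org/ns/activitystreams#Public"

    def as_list(value):
        """ActivityStreams properties may hold a single value or a list."""
        if value is None:
            return []
        return value if isinstance(value, list) else [value]

    def visibility_of(activity: dict, followers_uri: str) -> str:
        """Classify an imported activity so it keeps its original audience."""
        addressed = set(as_list(activity.get("to")) + as_list(activity.get("cc")))
        if PUBLIC in addressed:
            return "public"
        if followers_uri in addressed:
            return "followers-only"  # unlisted/'silent' handling hooks in here too
        return "private"             # direct messages, mentions-only, etc.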

What have I overlooked in this analysis?  What bad assumptions have I made?

Lisa

Received on Tuesday, 12 March 2024 23:03:51 UTC