Re: [docs-and-reports] Private Computation Abstraction (MPC and TEEs) (#14)

From the introduction:

> In the presence of this adversary, APIs should aim to achieve the following goals:
> - Privacy: Clients (and, more specifically, the vendors who distribute the clients) trust that (within the threat models) the API is purpose constrained. That is, all parties learn nothing beyond the intended result (e.g., a differentially private aggregation function computed over the client inputs).
> - Correctness: Parties receiving the intended result trust that the protocol is executed correctly. Moreover, the amount that a result can be skewed by malicious input is bounded and known.

I suggest that instead of Privacy, we make the first bullet:

Purpose Limitation: User-agents are reasonably assured that the API is purpose constrained such that no party can acquire data outputs other than what is intended and expected by the user-agent, given the inputs it provides.

Add bullets for verifiable input and auditability:

Verifiable Input: Parties using the API are reasonably assured that data provided by user-agents is accurate, reliable and honest.

Auditability: Parties providing data to, or receiving data from, the API can receive reports identifying: when, how, by whom and to whom data was communicated; and when, how and by whom data was processed.

Correctness: Parties receiving the intended result can verify that the protocol was executed correctly and that the amount by which a result can be skewed, whether intentionally by adding noise or by malicious input, is bounded, known and reported.


-- 
GitHub Notification of comment by bmayd
Please view or discuss this issue at https://github.com/patcg/docs-and-reports/pull/14#issuecomment-1284385587 using your GitHub account


-- 
Sent via github-notify-ml as configured in https://github.com/w3c/github-notify-ml-config

Received on Wednesday, 19 October 2022 18:03:38 UTC