Re: [docs-and-reports] Privacy and Purpose Constraints (#15)

> ...I don't think we can drop Privacy entirely though.

@eriktaubeneck I find it difficult to make meaningful or verifiable assertions about Privacy, so I was trying to find an alternative that could be addressed technically. APIs can make assertions about, and report on, data inputs, processing, and outputs, but regarding:

> ...all parties learn nothing beyond the intended result...

I think suggesting what might or might not be learned is problematic: an API can only know what data it reports; it cannot know how that data is applied, and in most cases the intent is presumably to use the outputs to inform an understanding of a larger context (for example, an API can report an aggregate result, but it cannot know whether a caller later combines that result with other data). The question then becomes: are there cases in which an increased understanding of the larger context constitutes a violation of privacy? I don't think that question can be answered within an API or a standard; I see it as a policy matter.

> I'm less sure about adding "Verifiable Input" and "Auditability". "Verifiable Input" seems like a tactic for achieving correctness, and "Auditability" seems like a tactic for trusting that a system is purpose limited. Curious what others think here.

In the context of the initial statement:

> In the presence of this adversary, APIs should aim to achieve the following goals:

I included them because I believe they are goals that must be achieved for the model to be considered reliable and trustworthy in the face of adversaries seeking to corrupt it or to apply it in violation of its stated terms.

-- 
GitHub Notification of comment by bmayd
Please view or discuss this issue at https://github.com/patcg/docs-and-reports/issues/15#issuecomment-1286130812 using your GitHub account

