Re: [docs-and-reports] Add private single events to areas of agreement (#43)

> That is, my contribution is, to some extent, hidden by the noise of others' contributions. We don't have a strong formalism for that, but that's a useful intuition that we might be able to rely on. My guess is that this is why aggregated values perform worse: because the noise that the training system experiences is concretely higher when aggregated, even though the formal protections remain the same

This is true for DP-SGD-style learning because the noise is applied to the entire gradient in order to keep both the features and the labels private, and the gradient can be huge. There could very well be aggregate training techniques in the "label DP" setting that outperform single-event queries; we just don't know of such an algorithm yet. Generally speaking, aggregation performs _better_ than applying noise to every input because you _don't_ have any noise in others' contributions: you apply a single O(1) share of noise to the entire aggregate.
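To make that last point concrete, here is a minimal sketch (not from the thread, names and parameters are illustrative) comparing the error of noising every input versus adding one noise draw to the aggregate, at the same per-user epsilon with the Laplace mechanism:

```python
# Sketch only: assumes each user contributes a value in [0, 1] and epsilon = 1.0.
import numpy as np

rng = np.random.default_rng(0)
n_users = 10_000
epsilon = 1.0
sensitivity = 1.0  # each user contributes at most 1 to the sum

values = rng.uniform(0.0, 1.0, size=n_users)
true_sum = values.sum()

# (a) Noise every input: n independent Laplace draws end up in the sum,
# so the error grows with the number of users (std dev ~ sqrt(n)).
per_input_noise = rng.laplace(0.0, sensitivity / epsilon, size=n_users)
noisy_sum_per_input = (values + per_input_noise).sum()

# (b) Noise the aggregate once: a single O(1) Laplace draw,
# independent of how many users contributed.
noisy_sum_aggregate = true_sum + rng.laplace(0.0, sensitivity / epsilon)

print(f"true sum:              {true_sum:.1f}")
print(f"per-input noise error: {abs(noisy_sum_per_input - true_sum):.1f}")
print(f"aggregate noise error: {abs(noisy_sum_aggregate - true_sum):.1f}")
```

Running something like this shows the per-input error growing with the population size while the aggregate error stays roughly constant, which is the intuition behind the O(1) noise share.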

-- 
GitHub Notification of comment by csharrison
Please view or discuss this issue at https://github.com/patcg/docs-and-reports/pull/43#issuecomment-1535607132 using your GitHub account


-- 
Sent via github-notify-ml as configured in https://github.com/w3c/github-notify-ml-config

Received on Friday, 5 May 2023 02:03:46 UTC