- From: Martin Thomson <martin.thomson@gmail.com>
- Date: Thu, 20 Oct 2016 13:14:19 +1100
- To: Julian Reschke <julian.reschke@gmx.de>
- Cc: Poul-Henning Kamp <phk@phk.freebsd.dk>, Mark Nottingham <mnot@mnot.net>, HTTP working group mailing list <ietf-http-wg@w3.org>, Patrick McManus <pmcmanus@mozilla.com>
On 20 October 2016 at 01:46, Julian Reschke <julian.reschke@gmx.de> wrote:
> But how would you handle the case described above -- where the metadata
> (content type, encryption material) is served from a server different from
> the one having the (encrypted) payload?

You know, I had the same thought as PHK after making that statement, and I can't think of a reason he is wrong. What was relevant is that you need to *know* more to get the crypto right, but that extra knowledge is just the key; you don't need to parameterize the content coding otherwise.

If rs, salt and keyid were in the payload, I can't see how that would be a real problem. They are public information, and carrying them inline avoids a whole mess of issues. Every secondary server would be serving the values, but that's not fatal. You wouldn't be able to "compress" them by including one value across all potential secondaries (again, not a real problem, and potentially a feature if you wanted different encryptions across secondaries to avoid correlation). A rough sketch of what an inline layout might look like is at the end of this message.

The best objection I could come up with is that random access always requires the first few octets. But that's weak: those octets are either metadata or some of the payload, and we already take on that burden in other places; it's a common restriction on resources that are acquired piecemeal. Some media container formats require the end of the file to make sense of the middle, which is much more tiresome. Zip archives are also like that.

Now, I don't know what to do about webpush, but if we are going to make breaking changes, I can maybe get some efficiency gains from them, which should help there. A new name should help avoid the worst parts of the churn.
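To make the inline option concrete, here is a minimal sketch of one possible layout: a fixed-length salt, a four-octet record size, and a length-prefixed keyid at the front of the payload, ahead of the ciphertext. The field sizes and names here are illustrative assumptions, not a concrete proposal:

    import struct

    # Hypothetical inline header: 16-octet salt, uint32 record size (rs),
    # one-octet keyid length, then keyid, all prepended to the ciphertext.
    def write_header(salt: bytes, rs: int, keyid: bytes) -> bytes:
        assert len(salt) == 16 and len(keyid) < 256
        return salt + struct.pack("!IB", rs, len(keyid)) + keyid

    def read_header(payload: bytes):
        salt = payload[:16]
        rs, idlen = struct.unpack("!IB", payload[16:21])
        keyid = payload[21:21 + idlen]
        return salt, rs, keyid, 21 + idlen  # params plus ciphertext offset

    def record_range(n: int, header_len: int, rs: int):
        # Byte range of the nth record, assuming (for simplicity) that
        # records of exactly rs octets follow the header back to back.
        start = header_len + n * rs
        return start, start + rs - 1

The random-access cost falls out of record_range: a client that wants the middle of a resource needs one extra small range request for the header octets, and nothing else changes.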
Received on Thursday, 20 October 2016 02:14:49 UTC