- From: Martin Thomson <notifications@github.com>
- Date: Sun, 18 May 2025 17:38:22 -0700
- To: w3ctag/design-reviews <design-reviews@noreply.github.com>
- Cc: Subscribed <subscribed@noreply.github.com>
- Message-ID: <w3ctag/design-reviews/pull/1094/review/2849143986@github.com>
@martinthomson commented on this pull request.

> +
> +In the proposed design, the browser is given three things during enrollment:
> +
> +* a “session” resource that it can use for protocol interactions,
> +* which resources use cookies that are provided using the protocol, plus
> +* the set of cookies that can be produced.
> +
> +The site is given the public key from the site-specific key pair that the browser holds.
> +
> +In the proposed design, the browser understands that when it makes a request to one of the resources that participates in the protocol, it is expected to hold refreshed versions of the identified cookies.
> +
> +These cookies are expected to have very short validity periods. The browser is able to refresh those cookies automatically by interacting with the session resource. The main part of the protocol is the interactions between the browser and that session resource.
> +
> +Interactions with the session resource is a two-step process. The first is a simple request that requests a fresh challenge, the second posts a signature from the secret key over that challenge, thereby proving to the server that the browser still has access to the key pair. This response also refreshes any of the affected cookies.
> +
> +This adds two round trips of latency every time that a cookie refresh is needed. While some amount of delay is likely unavoidable, having two additional requests is fairly heavyweight.

I don't think that it is unnecessary, though it might be, depending on the strength of assurance that a site is seeking. If you consider access to the key (or TPM) to be in the threat model, then interactivity is a useful property. The problem there is that the TPM is unaware of the time, so you can't bind the signatures it produces to the clock. Access to the TPM therefore gives an attacker the option to generate arbitrary "future" signatures, allowing it to time travel as necessary. The way that is addressed in the proposed design is with a liveness check.
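For illustration, the two-step interaction described above might look something like this. (This is a sketch only: the `/session/refresh` path and the `Challenge`/`Challenge-Response` header names are placeholders of mine, not taken from the proposal.)

```http
GET /session/refresh
Cookie: auth=expired

401 Challenge Me
Challenge: :bm9uY2UtZnJvbS1zZXJ2ZXI=:

POST /session/refresh
Challenge-Response: :c2lnbmF0dXJlLW92ZXItY2hhbGxlbmdl:

200 OK
Set-Cookie: auth=fresh; Max-Age=600; Secure; HttpOnly
```

The first request/response pair fetches fresh server entropy; the second pair carries the signature over that challenge and delivers the refreshed cookie, which is where the two extra round trips come from.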
I'll amend the below and add some detail.

> +
> +* a “session” resource that it can use for protocol interactions,
> +* which resources use cookies that are provided using the protocol, plus
> +* the set of cookies that can be produced.
> +
> +The site is given the public key from the site-specific key pair that the browser holds.
> +
> +In the proposed design, the browser understands that when it makes a request to one of the resources that participates in the protocol, it is expected to hold refreshed versions of the identified cookies.
> +
> +These cookies are expected to have very short validity periods. The browser is able to refresh those cookies automatically by interacting with the session resource. The main part of the protocol is the interactions between the browser and that session resource.
> +
> +Interactions with the session resource is a two-step process. The first is a simple request that requests a fresh challenge, the second posts a signature from the secret key over that challenge, thereby proving to the server that the browser still has access to the key pair. This response also refreshes any of the affected cookies.
> +
> +This adds two round trips of latency every time that a cookie refresh is needed. While some amount of delay is likely unavoidable, having two additional requests is fairly heavyweight.
> +
> +We have an alternative below that doesn't require an interactive exchange (though it could be made interactive). It also includes a redundant new field in requests that lists a session identifier. That new field could be replaced either with a per-account resource URL parameter or a non-DBSC cookie.

```suggestion
We have an alternative below that doesn't require an interactive exchange. However, given that TPMs generally don't have a clock, you can't use the clock to ensure freshness. A non-interactive exchange might have been pre-generated by an attacker who temporarily had access to the TPM, unless it contains fresh entropy from the server.

That's something we address in more detail in the alternative design below, noting that the alternative offers servers more options to combine requests to reduce latency, where the proposal cannot. The proposal includes a redundant new session identifier field in requests. That new field could be replaced either with a per-account resource URL parameter or a non-DBSC cookie.
```

> +* which resources use cookies that are provided using the protocol, plus
> +* the set of cookies that can be produced.
> +
> +The site is given the public key from the site-specific key pair that the browser holds.
> +
> +In the proposed design, the browser understands that when it makes a request to one of the resources that participates in the protocol, it is expected to hold refreshed versions of the identified cookies.
> +
> +These cookies are expected to have very short validity periods. The browser is able to refresh those cookies automatically by interacting with the session resource. The main part of the protocol is the interactions between the browser and that session resource.
> +
> +Interactions with the session resource is a two-step process. The first is a simple request that requests a fresh challenge, the second posts a signature from the secret key over that challenge, thereby proving to the server that the browser still has access to the key pair. This response also refreshes any of the affected cookies.
> +
> +This adds two round trips of latency every time that a cookie refresh is needed. While some amount of delay is likely unavoidable, having two additional requests is fairly heavyweight.
> +
> +We have an alternative below that doesn't require an interactive exchange (though it could be made interactive). It also includes a redundant new field in requests that lists a session identifier. That new field could be replaced either with a per-account resource URL parameter or a non-DBSC cookie.
> +
> +This process addresses an important concern about the frequency with which the browser needs to access the secret key. Sites are able to control how often something is signed by setting the expiration date of the cookies they produce. The browser only needs to use the secret key when cookies expire. Expirations are naturally limited because servers are unable to set extremely short expiration times without risking the cookies being completely useless to clients with bad clock skew; the granularity of expiration dates for cookies is also extremely limited in expressiveness.

Sorry about the wrapping thing. It's what gdocs does by default. Probably due to GFM and its soft-breaking thing.

> +Cookie: login=expired; signed=ok
> +Signature-Input: (...)
> +Signature: :...:
> +```
> +
> +That resource then can validate the signature and produce an updated cookie. And likely redirect back to the original resource.
> +
> +```http
> +303 Finally
> +Location: /some/resource
> +Set-Cookie: login=refreshed; Secure; HttpOnly; etc=etc
> +```
> +
> +Note that the server does not need to refresh the signed cookie. That cookie could be a stub that only exists to elicit a signature, so it could have a very long lifetime.
> +
> +These multi-step arrangements would result in similar amounts of delay as the process in the proposal. This approach is still better, because it follows fairly ordinary cookie handling for the most part. Any additional steps would be discretionary on the part of servers, which could sometimes choose to accept either the extra requests with signatures[^2] or the heightened risk of TPM compromise. Alternatively, servers could streamline the overall process by combining steps, at the cost of additional coordination between the different resources. In comparison, the proposed design makes an extra step unavoidable, so making this discretionary is strictly better.

Maybe the focus on delays is wrong, I'll look at it closer.
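One way a server might streamline things by combining steps is to fold fresh server randomness into the redirect target, so that a single signed request both proves liveness and collects the refreshed cookie. A sketch only (the `/refresh/` path with embedded randomness is my illustration, reusing the cookie and header names from the quoted example):

```http
303 Valid For One Use Only
Location: /refresh/3c9f2b8a

GET /refresh/3c9f2b8a
Cookie: login=expired; signed=ok
Signature-Input: (...)
Signature: :...:

303 Finally
Location: /some/resource
Set-Cookie: login=refreshed; Secure; HttpOnly
```

Because the randomness is in the URL that gets signed, a pre-generated signature from a compromised TPM would not cover it, at the cost of coordination between the redirecting resource and the refresh endpoint.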
> +A potential challenge then is coordinating those requests so that different origins within the site, which might be only loosely coordinated through a central authentication/authorization system, don’t generate more requests too often. That can be managed by directing refresh requests to a resource on that system that does not have `Signed` cookies associated with it. That resource can coordinate any cookie refreshes, forwarding requests to the affected path as necessary.
> +
> +Another challenge is in demonstrating liveness for the signature. TPMs don’t generally have clocks, so if a device is compromised so that an attacker gains access to the TPM, the attacker could generate an arbitrary number of signatures for future use. However, this requires that the attacker predict the times and URLs where those signatures would be needed. This suggests a similar pattern to solve that potential problem also: the server redirects to a new endpoint with fresh randomness in the URL for signing.
> +
> +For example, both requirements could be addressed as shown below. This example is expanded to include the maximum number of exchanges possible to fully illustrate all of the capabilities.
> +
> +```http
> +GET /some/resource
> +Cookie: login=expired
> +```
> +
> +Which results in a redirection to a login endpoint, as would be part of a normal centralized login flow (i.e., this would be a perfectly normal part of refreshing cookies):
> +
> +```http
> +303 Over Yonder
> +Location: /login

The server knows that. It's just that servers often ask clients to hold stuff while they are busy. I thought it would be distracting, but now I see that the opposite is true. I'll fix it up.

> +303 Finally
> +Location: /some/resource
> +Set-Cookie: login=refreshed; Secure; HttpOnly; etc=etc
> +```
> +
> +Note that the server does not need to refresh the signed cookie. That cookie could be a stub that only exists to elicit a signature, so it could have a very long lifetime.
> +
> +These multi-step arrangements would result in similar amounts of delay as the process in the proposal. This approach is still better, because it follows fairly ordinary cookie handling for the most part. Any additional steps would be discretionary on the part of servers, which could sometimes choose to accept either the extra requests with signatures[^2] or the heightened risk of TPM compromise. Alternatively, servers could streamline the overall process by combining steps, at the cost of additional coordination between the different resources. In comparison, the proposed design makes an extra step unavoidable, so making this discretionary is strictly better.
> +
> +## Communicating Keys
> +
> +Enrollment can almost be a side effect of creating and first use of a `Signed` cookie. The only requirement here is that the browser learns what types of keys are acceptable to the server and that the server learns the public key that the client uses.
> +
> +Any `Set-Cookie` header that establishes a `Signed` cookie could list the key types in the `Signed` attribute, but the `Accept-Signature` field exists for negotiating the use of signature keys. The server should therefore use `Accept-Signature`.
> +
> +The `Cookie` header that the browser subsequently sends will be signed. That same message can include the public key from the key pair. That’s usually not something that can be included in the signature as defined in the current RFC. For that, we might define a new `Signature-Public-Key` field to carry the necessary information. We could define a new `Signature` field parameter, but that could be confused with `keyid`.

That's not at all what this is saying. It's saying that there is a need to define a way to communicate a public key, and that is something the message signature RFC doesn't solve.

-- 
Reply to this email directly or view it on GitHub:
https://github.com/w3ctag/design-reviews/pull/1094#discussion_r2094673374

You are receiving this because you are subscribed to this thread.
Message ID: <w3ctag/design-reviews/pull/1094/review/2849143986@github.com>
Received on Monday, 19 May 2025 00:38:26 UTC