Re: New Draft: draft-ohanlon-transport-info-header



> On 25 Nov 2019, at 17:51, Patrick McManus <mcmanus@ducksong.com> wrote:
> 
> On Mon, Nov 25, 2019 at 11:50 AM Piers O'Hanlon <piers.ohanlon@bbc.co.uk> wrote:
> 
> The only transactions that traverse that connection are between the client and the “origin”.
> 
> no - connections are not limited to a single origin as far back as the h1 proxy use case and its definitely not true in h2/h3 at all. And even if it were, that's not enough protection. HTTP exchanges are explicitly stateless - mechanisms that ignore that to create some kind of ambient state always cause trouble (e.g. ntlm).

Thanks for the clarifications. I did mention in the draft that there may be issues with the use of client proxies, but as you and Lucas have pointed out, connection reuse in H2/H3 makes this a more general issue.

This kind of attack would be limited to domains where such connection coalescing can occur, though I agree that may not be insignificant. However, the H2 RFC [rfc7540#section-9.1.1] does warn that with connection reuse "it is possible for clients to send confidential information to servers that might not be the intended target for the request, even though the server is otherwise authoritative", so there are already cross-origin protection issues in this scenario. One could therefore argue that fewer confidential transactions are likely to share such a connection, so the threat might be considered lower.

Another approach would be for an origin utilising the header to disallow cross-origin connection reuse, returning a 421 (Misdirected Request) response to thwart abuse.
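
As a rough sketch of what I mean (the authority set, routing function, and Transport-Info field values here are my own illustrative assumptions, not something from the draft), such a server might do:

```python
# Illustrative sketch: a server that emits Transport-Info refuses coalesced
# requests for other origins with a 421 (Misdirected Request), forcing the
# client onto a fresh connection with its own transport state.

ALLOWED_AUTHORITIES = {"media.example.com"}  # origins this endpoint serves

def route(authority, headers):
    """Return (status, response_headers) for an incoming request."""
    if authority not in ALLOWED_AUTHORITIES:
        # Per RFC 7540 s9.1.2, 421 tells the client this connection is not
        # authoritative for the request's origin; it should retry on a new
        # connection rather than reuse this one.
        return 421, {}
    # Otherwise serve normally, attaching the (hypothetical) header.
    return 200, {"Transport-Info": "cwnd=40; dstport=443"}
```

This keeps the header scoped to connections the origin actually intends to serve, at the cost of losing coalescing for those requests.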

> 
> I want to put up a flag here and say that even same-origin exchanges need to be isolated from each other. Two different sets of login cookies on the same origin don't share the same security context.
>  
This is a good point - I guess the browser needs to handle this correctly when sharing a connection between, for example, two tabs.

> The transactions from the same origin to other clients would be over a different transport connection so the cwnd would be different.  Transactions that don’t share the same origin would be a different connection and contain yet another cwnd. 
> 
> 
> no.
>  
> *when I say “origin” I mean the last hop edge node of whatever CDN serves the origin.
> 
> 
> origin is the combination of scheme host and port. A CDN edge serves many origins.
> 
Sure - I guess I didn’t put it very well - I was trying to say that the Transport-Info header would only be transferred between that CDN edge and the client - it’s a hop-by-hop header for the last hop. I guess the CDN edge could employ connection reuse where appropriate.


> Apologies if I’m missing your point, but from what you have described the side channel isn’t clear to me?
> 
> 
> obviously the HTTP protocol is aware of this sharing - indeed it initiates it! but the ramifications of it are generally opaque to the semantics of HTTP exchanges (roughly expressed by headers and messages). So perhaps what you want is better carried in the protocol (e.g. as a frame).. if you are proposing exposing it to content it needs to be scrutinized for the same reason that it is useful. This can be subtle - see CRIME and BREACH for example.
> 
Yes, I had considered that a frame-based transport might be better, but one would also need to extract the transport/flow information for that stream - and if that were possible then it could potentially be acceptable to carry it in a header, since the underlying issue is the access to shared state.

It could be argued that one could achieve a similar kind of fingerprinting by having the client measure the response times of its requests. This would provide similar information, since the cwnd isn’t static and so won’t be exactly the same between responses. It is a little different, though, in that Transport-Info provides the throughput for the whole connection.
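
For comparison, the client-side timing approach is straightforward - a minimal sketch (the fetch callable and URL are placeholders):

```python
# Minimal sketch of per-response timing: the client can only derive a
# per-request goodput estimate this way, not the connection-wide state
# that a (hypothetical) Transport-Info header would report.
import time

def timed_rate(fetch, url):
    """Estimate goodput in bytes/sec for one response by timing it."""
    start = time.monotonic()
    body = fetch(url)  # any callable returning the response body as bytes
    elapsed = time.monotonic() - start
    return len(body) / elapsed if elapsed > 0 else 0.0
```

The point being that this per-request estimate is already available to any client, so the marginal information leaked by the header is the connection-wide aggregate.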

Also, for H2/3, since this is an issue concerning the transport rate, the use of padding on each connection means there will always be some uncertainty in such an attack. An attack where one flow tries to learn about another would need to calculate its own portion of the traffic so that it could subtract it from the total rate provided by the header, but it is going to be limited by not knowing about the padding.

Another potential mitigation technique would be to add some noise to the measurements, which would make it harder for coordinated, colluding attacks to succeed. The level of noise could be a function of the number of flows the server knows are sharing the connection, or perhaps just some randomness so that two simultaneous requests don’t obtain the same result. In practice they’re not going to obtain the same result anyway, since the connection parameters are constantly changing, but such random noise could also mitigate attempts to compare trends.
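
A rough sketch of what I have in mind here (the field, the scaling rule, and the default sigma are all illustrative assumptions):

```python
# Illustrative noise mitigation: perturb the reported cwnd before it is
# written into the (hypothetical) Transport-Info header, scaling the noise
# with the number of flows known to be sharing the connection.
import random

def noisy_cwnd(cwnd, n_flows, base_sigma=0.02):
    """Return cwnd with multiplicative Gaussian noise; more flows => wider noise."""
    sigma = base_sigma * max(1, n_flows)
    jittered = cwnd * (1.0 + random.gauss(0.0, sigma))
    return max(1, round(jittered))  # keep the reported value a positive integer
```

Multiplicative rather than additive noise keeps the relative error roughly constant across small and large windows, which seems preferable if the aim is to blur comparisons rather than destroy the measurement.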

Piers

> hth
> 
>  
> Piers
> 
> > On Mon, Nov 25, 2019 at 7:19 AM Piers O'Hanlon <p.ohanlon@gmail.com> wrote:
> > Hi Patrick,
> > 
> > Thanks for your feedback.
> > 
> > I'm assuming most cross-origin issues should be dealt with by CORS but
> > were you concerned with that suggestion that the header could be
> > contained in an OPTIONS response? I can see that since such a response
> > is not subject to CORS - such as in a CORS pre-flight request - then
> > we could drop that as an approach and just keep to using HEAD requests
> > as a mechanism to obtain the Transport-Info.
> > 
> > Let me know what if that doesn't address your issue or if you have any
> > other concerns.
> > 
> > Piers
> > 
> > 
> > On Fri, 22 Nov 2019 at 21:41, Patrick McManus <mcmanus@ducksong.com> wrote:
> > >
> > > To the extent that this leaks information across origins to js its probably a problem too.
> > >
> > > On Fri, Nov 22, 2019 at 8:58 PM Piers O'Hanlon <p.ohanlon@gmail.com> wrote:
> > >>
> > >> Hi Lucas,
> > >>
> > >> Thanks for bring that up - as you say Nginx's default for
> > >> http2_max_requests is 1000, although it can be changed (and appears to
> > >> be done so for RPC applications). I'm not sure how many other server
> > >> implementations do this?
> > >>
> > >> Firstly, we can detect this through of the use of the dstport
> > >> parameter - as a new TCP connection would use a different port.
> > >> Although it could potentially lead to temporary loss of information
> > >> for a time as discussed below.
> > >>
> > >> Secondly, the affect of this would be only apparent each time the
> > >> request count is exceeded - so say every 1000 requests - when a switch
> > >> over would occur. When a switch over does occur then it depends on the
> > >> comparative duration of the data responses of interest, versus how
> > >> often one wants to perform parallel HEAD/OPTION requests for
> > >> Transport-Info. So for the case where where the frequency of parallel
> > >> requests is about the same then I think it shouldn't matter much as
> > >> with most server systems these days the congestion control parameters
> > >> are cached in the kernel so a subsequent connection to the same
> > >> destination would be preloaded with the cached metrics so the
> > >> Transport-Info header would contain these. In the case where there's a
> > >> series of long running responses then it might be an issue as after
> > >> switch over point there would also be two parallel TCP connections to
> > >> the same point but they would exist separately for a longer period so
> > >> potentially the metrics obtained via subsequent HEAD/OPTIONS could be
> > >> different as the cwnd can be reduced for low volume flows, though
> > >> these would generally not be used since the dstport would not match so
> > >> there could be a loss of information in this case.
> > >>
> > >> Cheers
> > >>
> > >> Piers
> > >>
> > >> On Fri, 22 Nov 2019 at 10:51, Lucas Pardue <lucaspardue.24.7@gmail.com> wrote:
> > >> >
> > >> > Hi Piers,
> > >> >
> > >> >
> > >> >
> > >> >
> > >> > On Fri, 22 Nov 2019, 17:58 Piers O'Hanlon, <p.ohanlon@gmail.com> wrote:
> > >> >>
> > >> >> Hi all,
> > >> >>
> > >> >> - “It only provides information per response which isn’t very often”
> > >> >> We mention in the draft that with H2+ one can send and arbitrary number of requests (using OPTIONS/HEAD) to obtain more measurements responses per unit time.
> > >> >
> > >> >
> > >> > I have an observation but not sure it belongs in a document. Implementations such as nginx have a soft max number of HTTP/2 requests before closing the connection (default 1000 last I checked). If you're trying to sample the transport info to frequently you may end ip blowing it up. This seems unfortunate because the client use case is designed around making smarter transport related decisions.
> > >> >
> > >> > Cheers
> > >> > Lucas
> > >> >
> > >> >>
> > >> >> We would welcome any more feedback by email and/or Github issues
> > >> >>
> > >> >> Thanks,
> > >> >>
> > >> >> Piers O'Hanlon
> > >>
> > 

Received on Tuesday, 26 November 2019 17:12:27 UTC