- From: Roy T. Fielding <fielding@gbiv.com>
- Date: Mon, 26 Aug 2013 11:47:11 -0700
- To: Roberto Peon <grmocg@gmail.com>
- Cc: Salvatore Loreto <salvatore.loreto@ericsson.com>, HTTP Working Group <ietf-http-wg@w3.org>
- Message-Id: <3E88C2FC-06B7-48AF-B5F1-81A34325320D@gbiv.com>
There seems to be a disconnect here. The authority for https has no actual connection to the authority for http. Allowing one port to answer for another is a security hole. TLS does not make it more secure. Neither does DNS, nor any other means of hinting ports. If you trust a port other than the one defined by the URI, then you allow a site's trivial poisoning protections to be bypassed.

....Roy

On Aug 25, 2013, at 11:25 PM, Roberto Peon <grmocg@gmail.com> wrote:

> I'm all for people working out how to do explicit proxy configuration.
> I am pro-proxy, so long as the customers want proxies and are able to exercise choice about it.
> I do believe, however, that we could do substantially better w.r.t. caching than we do today-- I just don't have the bandwidth to deal with that and this effort simultaneously. :)
>
> In any case, I suspect that the entirety of the complexity here comes down to a few MAYs and MUSTs, for instance:
>
> An HTTP/2 client MAY send requests for resources with an HTTP scheme down an encrypted connection at any time.
> An HTTP/2 server MAY choose not to process such a request; however, if it chooses to refuse such a request, it MUST respond with (new) error code XXX - scheme unsupported on connection, which indicates that the request was not processed and is safe to retry on an unencrypted connection.
>
> -=R
>
> On Sun, Aug 25, 2013 at 10:59 PM, Salvatore Loreto <salvatore.loreto@ericsson.com> wrote:
>> On 8/25/13 10:43 PM, Roberto Peon wrote:
>>> On Sun, Aug 25, 2013 at 1:27 PM, Salvatore Loreto <salvatore.loreto@ericsson.com> wrote:
>>>> On 8/25/13 10:18 PM, Roberto Peon wrote:
>>>>> We've seen that the network delays bytes on some ports because (we assume) of inspecting proxies, even when the data is incomprehensible to the proxy.
>>>>
>>>> maybe I am naive, but the delay can be just because the proxies (let's just talk about HTTP here) expect HTTP traffic and they get confused when they see something else
>>>
>>> Oh, it was worse than that :) Put correct HTTP over port 80 and it often won't end up at the other side because you used a portion of the spec that is not often used.
>>
>> correct, there are stacks around that are incomplete or not very well tested, and it is a problem, I concur
>> but that is for historical reasons, and most likely the main reason for this situation is that some portions of the spec have not been used very often (if at all) until now!
>> if they start to be used, things will change, even if more slowly compared to browser updates
>>
>>>>> If the bytes are merely signed then the bytes are visible and modification is still performed-- the modification becomes time-domain, e.g. dropping/delaying packets, etc.
>>>>
>>>> If we always make it possible for the client to discover the presence of a proxy... and the client realizes that the dropping/delaying of packets is much higher
>>>> when that proxy is in between... it is just a matter of time and the market will decide
>>>
>>> The market is efficient only when there are numerous choices and the barrier to entry is low. That isn't true for things like this. The endpoints currently have no choice about what hardware/software some portion of the network deploys, and often there is only one choice of vendor. The only real choice clients/servers have is what the bytes look like when they're sent.
>> there are different markets involved here:
>> from the client perspective you can choose different browsers or a different ISP;
>> the ISP also has the possibility to change the vendor that provides the hardware/software (i.e. there are several proxy implementations around)
>>
>>>>> In any case, if you're doing the work of signing, why not just encrypt?
>>>>
>>>> because you can still use all the positive aspects of the proxy/cache
>>>
>>> You wouldn't be able to do that with a signed stream either without allowing for proxies to do arbitrary transformations, at which point we're back to where we are today in terms of reliability. These things are 100% competitive with each other-- you can't both require signed data and require unsigned data!
>>
>> I am not saying it is easy, but it is something we can explore as an alternative
>>
>> I do think we should preserve and bring into 2.0 all the positive aspects of the proxy/cache
>>
>>> I'd rather see explicitly configured proxies for this kind of thing-- then the consumers are making the choice and can decide not to use it if it doesn't provide a benefit (if the providers block encrypted traffic then they only offer insecure traffic, which would not be tolerated in most non-completely-backwards jurisdictions and entities).
>>
>> I also think we should start to work on how to explicitly configure a proxy, a trusted proxy, and how to make it possible to discover a proxy, etc.
>>
>> /Salvatore
>>>
>>> -=R
>>>
>>>> /Salvatore
>>>>
>>>>> -=R
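As an illustration of Roy's point at the top of the thread (this sketch is not from the thread; the `effective_origin` helper and the example URIs are made up for illustration), a URI's authority is the (scheme, host, port) tuple, so an http URI and an https URI for the same host name different authorities, and a response obtained from one port is never authoritative for the other:

```python
from urllib.parse import urlsplit

def effective_origin(uri: str) -> tuple[str, str, int]:
    """Return the (scheme, host, port) tuple that identifies the URI's authority."""
    parts = urlsplit(uri)
    default_ports = {"http": 80, "https": 443}
    port = parts.port if parts.port is not None else default_ports[parts.scheme]
    return (parts.scheme, parts.hostname, port)

# Same host, but two different authorities: trusting one port to answer for the
# other would let a response for one origin poison the cache of the other.
assert effective_origin("http://example.com/index.html") == ("http", "example.com", 80)
assert effective_origin("https://example.com/index.html") == ("https", "example.com", 443)
assert effective_origin("http://example.com/") != effective_origin("https://example.com/")
```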
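Roberto's proposed MAY/MUST requirements amount to very little behaviour on each side. The sketch below is only an illustration of that proposal, not anything from a spec: the constant `SCHEME_UNSUPPORTED_ON_CONNECTION` stands in for the unassigned error code "XXX", and the function names and return shapes are hypothetical.

```python
# Hypothetical stand-in for the unassigned "XXX" error code in the proposal.
SCHEME_UNSUPPORTED_ON_CONNECTION = "scheme-unsupported-on-connection"


def server_handle(request_scheme: str, connection_is_encrypted: bool,
                  serve_http_over_tls: bool = False) -> dict:
    """Server side: the server MAY refuse an http-scheme request that arrives on an
    encrypted connection, but if it refuses it MUST answer with the dedicated error
    code so the client knows the request was not processed."""
    if request_scheme == "http" and connection_is_encrypted and not serve_http_over_tls:
        return {"refused": True, "error": SCHEME_UNSUPPORTED_ON_CONNECTION}
    return {"refused": False, "body": f"...response for {request_scheme}-scheme request..."}


def client_fetch(request_scheme: str) -> dict:
    """Client side: the client MAY try the encrypted connection first; the dedicated
    error code means the request was not processed, so it is safe to retry the same
    request over an unencrypted connection."""
    reply = server_handle(request_scheme, connection_is_encrypted=True)
    if reply.get("error") == SCHEME_UNSUPPORTED_ON_CONNECTION:
        reply = server_handle(request_scheme, connection_is_encrypted=False)
    return reply


assert client_fetch("https")["refused"] is False  # https requests served as usual
assert client_fetch("http")["refused"] is False   # http requests fall back to cleartext
```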
Received on Monday, 26 August 2013 18:47:42 UTC