Re: Trusting proxies (was Re: I revised the pro/contra document)

On 26/11/2013 1:55 p.m., Roberto Peon wrote:
> Here is the GOALS section from:
> 6.2 <>.
> I do think breaking down the conversation in this way is interesting.
>  Goals
>    These are the goals of a solution aimed at making proxying explicit
>    in HTTP.
>    o  In the presence of a proxy, users' communications SHOULD at least
>       use a channel that is point-to-point encrypted.
>    o  Users MUST be able to opt-out of communicating sensitive
>       information over a channel which is not end-to-end private.

I think this is partially wrong.

It would be far better to give the client some guarantee of end-to-end
confidentiality and/or non-transformation before it opts in to sending
private details.
Signing or encrypting the particular details, using a shared secret
arranged via mandatory out-of-band means with the origin server, would
be one way to provide that guarantee.
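For illustration, signing a sensitive field value with such an
out-of-band shared secret could look like the following Python sketch.
The key, field value, and function names are all hypothetical; this is
one possible mechanism, not proposal text:

```python
import base64
import hashlib
import hmac

def sign_field(shared_secret: bytes, field_value: bytes) -> str:
    """Compute an HMAC-SHA256 tag over a header field value.

    The tag travels alongside the field; a transforming proxy can still
    read or rewrite the value, but the origin (or client) detects any
    modification when the tag no longer verifies.
    """
    tag = hmac.new(shared_secret, field_value, hashlib.sha256).digest()
    return base64.b64encode(tag).decode("ascii")

def verify_field(shared_secret: bytes, field_value: bytes, tag_b64: str) -> bool:
    """Recompute the tag and compare in constant time."""
    expected = sign_field(shared_secret, field_value)
    return hmac.compare_digest(expected, tag_b64)

# Example: client signs a sensitive cookie before sending it via a proxy.
secret = b"secret-arranged-out-of-band"   # hypothetical pre-shared key
value = b"session=abc123"
tag = sign_field(secret, value)
assert verify_field(secret, value, tag)
assert not verify_field(secret, b"session=tampered", tag)
```

Encrypting (rather than merely signing) the value would additionally
hide it from the proxy, at the cost of breaking any legitimate
transformation of that field.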

>    o  Content-providers MAY serve certain content only in an end-to-end
>       confidential fashion.

This seems to be a waste of words in the spec. Content providers will
make their own decisions about this. No need to use MAY, or even to
state that, IMHO.
 Better to focus on the mechanisms this spec provides for them to make
that decision with.

>    o  Interception proxies MUST be precluded from intercepting secure
>       communications between the user and the content-provider.

This is a straw-man clause. Interception proxies will do what they want.
The only way to avoid that is to provide better features through use of
HTTP without interception.

>    o  User-agents and servers MUST know when a transforming proxy is
>       interposed in the communications channel.

The Via header is already mandatory. This has not gone down too well so
far. Any similar replacement must clear the same hurdles which killed
Via as a useful extension point for this ability.

It reveals details of the path and protocol compatibility in both
directions.

The alternative is the Forwarded-For header, which is sadly specified
as a one-way request header with a MUST NOT clause on use in responses,
on the stated grounds that it reveals details of the provider's network
which must be kept secret.
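To make the one-way disclosure concrete, a Forwarded-style field of the
kind specified there can be built and parsed along these lines. This is
a minimal Python sketch; the node values and helper names are mine, and
the syntax shown (token=value pairs joined by ';', hops joined by ',')
is only the general shape of that header:

```python
def build_forwarded(for_node: str, by_node: str, proto: str) -> str:
    """Build one hop's element of a Forwarded-style request header.

    Each downstream proxy appends its own element, so by the time the
    request reaches the origin the whole client-side chain is visible.
    """
    return f"for={for_node};by={by_node};proto={proto}"

def parse_forwarded(value: str) -> list[dict]:
    """Parse a Forwarded field value into one dict per hop."""
    hops = []
    for element in value.split(","):
        params = {}
        for pair in element.strip().split(";"):
            name, _, val = pair.partition("=")
            params[name.strip().lower()] = val.strip()
        hops.append(params)
    return hops

# Two proxies in the chain: the origin sees both, the client sees nothing.
hdr = build_forwarded("192.0.2.60", "203.0.113.43", "http")
hdr += ", " + build_forwarded("198.51.100.17", "203.0.113.60", "https")
hops = parse_forwarded(hdr)
assert hops[0]["for"] == "192.0.2.60"
assert hops[1]["proto"] == "https"
```

Note how all of the information flows toward the origin; nothing
comparable is permitted to flow back toward the client.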

** Completely ignoring the privileged details about the client's ISP
being published to the origin. **

Eliminating the client or their ISP from access to path/chain details
necessary to form trust decisions about the origin is a terribly bad way
to go about engendering a trust relationship.

>    o  User-agents MUST be able to detect when content requested with an
>       https scheme has been modified by any intermediate entity.

I don't see any reason to single out https:// here. For even a modest
amount of security the requirement also applies to http:// and other
schemes.

>    o  Entities other than the server or user-agent SHOULD still be able
>       to provide caching services.
>    o  User agents MUST be able to verify that the content is in fact
>       served by the content provider.

This is a vague clause which is either a straw-man or invalid depending
on how you look at it.

Firstly, does HTTP actually make that guarantee anywhere today? I don't
think even HTTPS does.

Secondly, what use is knowing the content is served by the content
provider when it is not necessarily the content requested?
 - the purpose of transforming proxies is to *generate* new or updated
content based on the content provider's content. Think of the ESI
protocol or FTP->HTTP gateways.

Thirdly, it directly violates the SHOULD on the caching goal above. Content
served out of a cache is by definition *not* served by the content provider.

>    o  A set of signaling semantics should exist which allows the
>       content-provider and the user to have the appropriate level of
>       security or privacy signaled per resource.

See above comments about Via and Forwarded-For.

>    o  Ideally, HTTP URI semantics SHOULD NOT change, but if it does, it
>       must remain backwards-compatible.
>    o  Configuration and deployment of proxies should be as easy as
>       currently used solutions.
>    o  Introduction of explicit proxying MUST NOT require a flag day
>       upgrade - in other words, it should work with existing client and
>       content provider implementations during the transition.

> On Mon, Nov 25, 2013 at 4:43 PM, James M Snell <> wrote:
>> On Mon, Nov 25, 2013 at 4:10 PM, Martin Thomson
>> <> wrote:
>>> On 25 November 2013 15:53, David Morris <> wrote:
>>>> Powers need to be negotiated and not an absolute feature of the
>> protocol.
>>> That's a nice blanket statement.  Let's assume that this is true for
>>> all combinations of powers (a point that seems suspect); who are the
>>> parties at the negotiation table?
>> Great question that does not have a great answer. Part of the problem
>> with this conversation is that we don't really have a great vocabulary
>> developed yet to really discuss it.. we just keep saying "trusted
>> proxy" and "untrusted proxy" without really breaking down what those
>> really are. We need to if we're going to make any progress in this
>> discussion. Also, without any clear shared notion about what kind of
>> good behaviors a "trusted" intermediary ought to implement, it's going
>> to be very difficult to really nail this down.
>> So let's take a first stab at this:
>> 1. A Trusted Intermediary exists in the path for the benefit of either
>> the requesting agent, responding origin, or both.
>> 2. A Trusted Intermediary ALWAYS makes its presence on the path known
>> to both the requesting agent and the origin.
>> 3. A Trusted Intermediary ALWAYS ensures that any modification it
>> makes to either the request or response are detectable by the
>> receiving peer.

How would you suggest signalling things like updating the
Cache-Control: max-age=N value? Adding only-if-cached etc.?
 Or appending to the If-Match/If-None-Match headers?
 Or stripping Proxy-Connection and similar garbage headers?

These are all things Squid legitimately does to optimize the traffic.
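As one concrete example of the first rewrite listed above, here is a
minimal sketch of a cache reducing max-age by the time a response has
already spent stored. The function name and regex approach are mine,
not Squid's actual implementation:

```python
import re

def adjust_max_age(cache_control: str, current_age: int) -> str:
    """Rewrite max-age to account for time already spent in a cache.

    A response cached 30 s ago with max-age=60 has only 30 s of
    freshness left from the next hop's point of view.
    """
    def repl(m: re.Match) -> str:
        remaining = max(0, int(m.group(1)) - current_age)
        return f"max-age={remaining}"
    return re.sub(r"max-age=(\d+)", repl, cache_control)

assert adjust_max_age("public, max-age=60", 30) == "public, max-age=30"
assert adjust_max_age("public, max-age=60", 90) == "public, max-age=0"
assert adjust_max_age("no-store", 30) == "no-store"  # nothing to rewrite
```

An end-to-end integrity check over the raw header bytes would flag this
as tampering, even though the semantics of the message are preserved.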

... not to mention the problem of people stripping any header used to
signal these details simply because it is not known to them or appears
risky in some small way.

Micro-managing these details gets very difficult and can be verbose in
the traffic. Abstract too far and you end up with an evil-bit.

>> 4. A Trusted Intermediary NEVER utilizes request or response data in a
>> manner not authorized by the requesting agent or responding origin.

"Says who"?

Seriously. This is not something that can be specified. Instead, focus
on the mechanisms which make abuse impossible, or impracticable enough
not to be worth the bother.

>> 5. A Trusted Intermediary that exists for the benefit of the
>> requesting agent ALWAYS provides proof to the responding origin that
>> it has been authorized and trusted by the requesting agent.
>> 6. A Trusted Intermediary that exists for the benefit of the
>> responding agent ALWAYS provides proof to the requesting agent that it
>> has been authorized and trusted by the responding origin.
>> 7. A Trusted Intermediary NEVER attempts to subvert or compromise the
>> integrity communication between the requesting agent and responding
>> origin.
>> 8. A Trusted Intermediary ALWAYS limits its actions to those
>> explicitly granted to it by the requesting agent or responding origin
>> or both.
>> 9. A Trusted Intermediary ALWAYS asks for permission before it
>> performs any action (see #2)
>> I'm sure these could use some massaging and refinement, but what this
>> basically describes in a delegation model: A trusted intermediary is
>> one that has been delegated some form of verifiable permission to act
>> by either the origin or the agent. The key questions, then, become how
>> exactly do we reliably enable this kind of delegated authorization
>> model.
>> Is breaking the conversation down this way helpful?
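For what it's worth, the delegation proof in points 5 and 6 could be
sketched with nothing more than a MAC over a grant naming the proxy and
its permitted actions, assuming (hypothetically) a secret already
shared between the requesting agent and the origin. All names, keys,
and the grant format below are illustrative only:

```python
import hashlib
import hmac

def mint_grant(agent_secret: bytes, proxy_id: str, actions: str, expires: int) -> str:
    """Agent mints a grant naming the proxy and its permitted actions."""
    claims = f"{proxy_id}|{actions}|{expires}"
    tag = hmac.new(agent_secret, claims.encode(), hashlib.sha256).hexdigest()
    return f"{claims}|{tag}"

def verify_grant(agent_secret: bytes, grant: str, now: int) -> bool:
    """Origin checks the tag and expiry before honouring the intermediary."""
    claims, _, tag = grant.rpartition("|")
    expected = hmac.new(agent_secret, claims.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, tag):
        return False
    expires = int(claims.rsplit("|", 1)[1])
    return now < expires

secret = b"agent-origin-session-secret"   # hypothetical shared key
grant = mint_grant(secret, "proxy.example.net", "cache,transform",
                   expires=1700000000)
assert verify_grant(secret, grant, now=1600000000)
assert not verify_grant(secret, grant, now=1800000000)  # expired
tampered = grant.replace("cache,transform", "cache,transform,rewrite")
assert not verify_grant(secret, tampered, now=1600000000)
```

Of course, this just moves the problem to how the agent and origin
arrange that secret in the first place, which is exactly the hard part.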


Received on Tuesday, 26 November 2013 06:08:14 UTC