W3C home > Mailing lists > Public > public-bpwg@w3.org > January 2009

RE: ACTION-893: Start putting together a set of guidelines that could help address the security issues triggered by links rewriting.

From: Rotan Hanrahan <rotan.hanrahan@mobileaware.com>
Date: Sat, 17 Jan 2009 14:46:39 +0000
To: Tom Hume <Tom.Hume@futureplatforms.com>, Mobile Web Best Practices Working Group WG <public-bpwg@w3.org>
Message-ID: <90AC7BC6-C4CB-4535-AA4B-977508451CAF@mimectl>
Tom said:

> I'd also venture that if this discussion is around transcoding of 
> existing web services, and particularly long-tail web services, then 
> any solution which implies providers of web services have to do some 
> work (e.g. by detecting Via headers and responding with 406 codes, 
> which I think has been suggested previously) isn't appropriate IMHO.

I think perhaps we might need to consider the Web in two forms, and suggest different strategies appropriately: those sites that are legacy (or wish to appear as legacy) that can (or will) make *no* attempt to give guidance to intermediaries, and those sites that are willing and able to provide such guidance.

In particular, the absence of specific headers (e.g. the "no-transform" of which we have heard much) cannot be assumed to mean that the site is proactively giving permission for transformation. It may simply be a legacy site that never anticipated the need to give guidance, and therefore isn't doing so.

I wonder if there might be a mechanism by which a CT intermediary would be able to distinguish between a legacy site (either inherently old, or merely using outdated technology) and a more advanced site. If a CT were to observe the absence of no-transform on a more advanced site, it would be reasonable to conclude that the site is giving permission. Conversely, the absence of no-transform on an assumed legacy site could suggest to the CT that it should apply heuristics appropriate to legacy sites.
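To make the distinction concrete, here is a minimal sketch (my own illustration, not an agreed mechanism) of the decision rule such a CT intermediary might apply. The "Cache-Control: no-transform" directive is real HTTP (RFC 2616, section 14.9.5); the "x-mobile-aware" marker by which an advanced site declares itself is an invented placeholder.

```python
def may_transform(headers):
    """Decide whether a CT proxy may transform a response.

    headers: dict mapping lower-cased response header names to values.
    """
    cache_control = headers.get("cache-control", "")
    if "no-transform" in cache_control:
        return False  # explicit refusal always wins (RFC 2616 s14.9.5)

    # Hypothetical marker by which an "advanced", mobile-aware site
    # declares itself; on such a site the *absence* of no-transform
    # can reasonably be read as permission.
    if "x-mobile-aware" in headers:
        return True

    # Otherwise assume a legacy site: apply conservative heuristics
    # rather than treating silence as consent.
    return apply_legacy_heuristics(headers)


def apply_legacy_heuristics(headers):
    # Placeholder for legacy-site heuristics; conservative default.
    return False
```

The point of the sketch is only the asymmetry: silence from a declared-advanced site means one thing, silence from a legacy site another.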

In effect, there could be an unavoidable two-tier Web: pre-CT and post-CT.

So, as Tom says, requiring providers to "do some work" would not be appropriate for legacy sites, but you could give such guidance if you had already found a way to identify more advanced sites, such as those that are mobile aware (*) and thus in a better position to either do adaptation themselves or offer guidance to intermediaries. In effect, the best practice could be stated as: "if your site is already tailored for mobile, or intends to encourage mobile access via intermediate CT, then take proactive steps to make this known using 'some-magic'."

If, for example, I knew that CT proxies had universally adopted a specific technique (e.g. a custom header in every request) by which a site could declare "I am an adaptive site and will include additional headers as needed to give guidance to intermediaries", then you can be certain that the next update of our products would have that behaviour. It would also make life a lot easier if the User-Agent header were not being faked; though I admit that if the CTs were to agree a universal means of conveying the original UA/Accept data in their requests, then adaptive sites could at least work in partnership with the CTs rather than being at odds with them.
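As a rough sketch of that partnership from the adaptive site's side: the Via header is standard HTTP, but "X-Device-User-Agent" (for the original device UA) and "X-Adaptive-Site" (the declaration back to the intermediary) are illustrative names, not agreed conventions.

```python
def handle_request(request_headers):
    """An adaptive origin server cooperating with a CT proxy.

    request_headers: dict of lower-cased request header names to values.
    Returns (user_agent_to_adapt_for, extra_response_headers).
    """
    via = request_headers.get("via", "")
    # Hypothetical convention: the proxy forwards the device's real UA.
    original_ua = request_headers.get("x-device-user-agent")

    response_headers = {}
    if via and original_ua:
        # Adapt for the real device ourselves, and tell the
        # intermediary to leave the response alone.
        response_headers["cache-control"] = "no-transform"
        response_headers["x-adaptive-site"] = "1"  # invented marker
        ua = original_ua
    else:
        ua = request_headers.get("user-agent", "")
    return ua, response_headers
```

With something like this agreed, the site adapts and the CT passes the result through, instead of each second-guessing the other.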

It's just a thought.

---Rotan.

(*) You can see how my company got its name...




From: Tom Hume
Sent: Sat 17/01/2009 11:19
To: Mobile Web Best Practices Working Group WG
Subject: Re: ACTION-893: Start putting together a set of guidelines that could help address the security issues triggered by links rewriting.


Rob

Your points re operational procedures are true... but don't they  
nevertheless introduce risk to content providers wishing to ensure  
security? If I set up a secure service that isn't transcoded, I know  
that communications between client and server are secure. If it's  
transcoded (which it may be by a large number of operators worldwide)  
I'm suddenly dependent on the operational procedures and integrity of  
all these operators. I'd be surprised if I, as a content provider, was  
able to evaluate these.

A similar point exists re software audit; are we expecting or  
mandating transcoder deployments to have gone through such an audit,  
and publish its details publicly? Unless such an audit is visible it's  
probably of little comfort.

(5) seems to be getting into the business of specifying the internal  
operations of transcoders, something we've so far shied away from  
doing (though I can see that this issue may be serious enough to start  
justifying this).

(6) seems similar, and (rightly) introduces a load of new  
responsibilities for transcoder deployments. If we're to use this as  
the basis for meaningful, testable guidelines (as the rest of the CT  
doc) then we'll need to get very specific on the details of what  
managing browser sessions entails - particularly given the fact that  
some sites deliberately share cookies, whilst others mustn't. For  
instance, as fred.futureplatforms.com I might set a cookie  
for .futureplatforms.com and expect ginger.futureplatforms.com to use  
it. So it's not enough to present only cookies originally set by the  
origin server - there's some more logic needed in the proxy for this.  
Equally cookies presented between client and proxy shouldn't go any  
further, I suspect...
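To spell out the extra logic: a proxy managing cookies on the device's behalf would need RFC 2109-style domain matching, so that a cookie set by fred.futureplatforms.com for .futureplatforms.com is presented to ginger.futureplatforms.com but never leaks elsewhere. A simplified sketch (hostnames from my example; the function is illustrative, not a complete RFC 2109 implementation):

```python
def domain_matches(host, cookie_domain):
    """Simplified RFC 2109-style cookie domain matching.

    A cookie set with Domain=.futureplatforms.com should be sent to
    ginger.futureplatforms.com, but not to an unrelated host.
    """
    host = host.lower()
    cookie_domain = cookie_domain.lower()
    if cookie_domain.startswith("."):
        # Host must end with the domain (any subdomain), or be the
        # bare domain itself.
        return host.endswith(cookie_domain) or host == cookie_domain[1:]
    # An exact-host cookie matches only that host.
    return host == cookie_domain
```

This is exactly the browser-equivalent security the proxy would have to reproduce internally, which is why "manage browser sessions" needs to be pinned down before it can be a testable guideline.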

Are there other assumptions that web applications typically make  
around the name of the origin server, I wonder?

There are other issues here (which I think have been raised before by  
Luca) around non-repudiation - that CPs may have sound reasons to rely  
on users not being able to claim "I didn't do it" later on (on, say,  
auction sites) and any rewriting of links introduces doubt here.

I'd also venture that if this discussion is around transcoding of  
existing web services, and particularly long-tail web services, then  
any solution which implies providers of web services have to do some  
work (e.g. by detecting Via headers and responding with 406 codes,  
which I think has been suggested previously) isn't appropriate IMHO.
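For concreteness, the sort of origin-side work being ruled out here would look roughly like the following (a sketch only; the proxy token matched in the Via header is invented, and real transcoders identify themselves in various ways):

```python
def respond(request_headers):
    """A content provider refusing transcoded access: the kind of
    origin-side work long-tail providers cannot be expected to do.

    request_headers: dict of lower-cased request header names to values.
    Returns (status_code, body).
    """
    via = request_headers.get("via", "")
    if "ct-proxy" in via.lower():  # hypothetical transcoder token in Via
        # 406 Not Acceptable: decline to serve through this intermediary.
        return 406, "Content transformation by intermediaries is not acceptable."
    return 200, "<html>...</html>"
```

Even this trivial check presumes the provider monitors Via values and keeps the logic current, which is precisely the burden at issue.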

Tom

On 16 Jan 2009, at 20:20, Robert Finean wrote:

>
> http://www.w3.org/2005/MWI/BPWG/Group/track/actions/893
>
> This is a first draft, all comments welcome:
>
> ----
>
> When a CT-Proxy is a "man-in-the-middle" a high level of trust needs  
> to
> be established with the mobile network operator and end-user before a
> user chooses to allow transformation of their private data. Their
> concerns are:
>
> 1. A 3rd-party could see their secure details, even by accident.
> 2. Malicious software could snoop secure details or copy them.
> 3. Secure details could be recovered from a discarded faulty hard-disk.
> 4. A system administrator could see secure details by logging into the
> CT-Proxy server, even by accident.
> 5. Secure details could be logged by the CT-Proxy's operator for
> business analysis.
> 6. Their secure details may in fact be going to a fraudulent website,
> not the website they expected (a phishing scam).
> 7. Their logged-in session with a website could be hijacked by someone
> spoofing their identity.
>
> 1 is addressed using encryption on all connections to/from the CT-proxy
> and by ensuring that any caching at the CT-proxy complies with RFC2616
> and RFC2109 with respect to public/private caching rules.
>
> 2 is addressed through software audit.
>
> 3 & 4 are addressed by operations procedures and by encrypting all  
> user
> data on disks.
>
> 5 is addressed by never logging anything more than origin domain name
> for HTTPS transactions (ie only log what HTTP CONNECT would reveal).
>
> 6 is complicated by the fact that often a CT-Proxy has to operate as a
> gateway, when it ceases to be a "proxy" and becomes an "end point".
>
> For instance:
> * When a long web-page gets fragmented, links to subsequent fragments
> must target the CT-proxy as the origin server.
> * JavaScript events triggered by links in the device's static XHTML/MP
> markup must target the CT-proxy as the script execution environment.
> * HTTPS links must be rewritten to transcode an HTTPS web site.
> * To minimize the size of the page returned to the end user, long URIs
> may be replaced by short "tokens" that only the issuing CT-proxy can
> redeem.
>
> At the URI level, this means that the URI moves from:
>  http://[original-URI]
> ... to something like:
>  http://ct-proxy.example.com/[original-URI]
>
> This change of origin hostname is important because of the security
> implications it has on the browser for cookies (which belong to
> hostnames) and for script Document Object Model security (if the  
> device
> has any script capabilities then cross-site scripting attacks become
> possible). From the device browser's perspective the CT-proxy makes  
> the
> Web look as if it is all from one origin.
>
> The solution to this is for the CT-proxy to manage all cookies and all
> script execution on behalf of the device whenever the CT-Proxy is the
> URI end-point. The CT-proxy should not pass origin-server scripts
> through to the device for execution or pass origin-server cookies to  
> the
> device. The CT-proxy must manage its script execution security and
> cookie/hostname security in the same way as a web browser to prevent
> malicious cross-site scripting exploits. [RFC2109] [reference on DOM
> security?]
>
> The CT-proxy must manage the browsing session (including the change of
> referer, the use of client certificates, etc) on behalf of the end-user.
>
>
> 7 is only a threat when the CT-proxy is managing the browsing  
> session on
> behalf of the user's device browser. In this case the CT-proxy needs  
> to
> uniquely identify requests from each user, with either out-of-band
> authentication using the radio network's SIM identity or by using
> cookies between the user's browser and the CT-proxy. [Reference on
> secure session management using cookies?]
>
>
>

--
Future Platforms Ltd
e: Tom.Hume@futureplatforms.com
t: +44 (0) 1273 819038
m: +44 (0) 7971 781422
company: www.futureplatforms.com
personal: tomhume.org
Received on Saturday, 17 January 2009 14:48:16 UTC
