
Re: Updated: Biweekly MashSSL call

From: Ravi Ganesan <ravi@findravi.com>
Date: Thu, 14 Jan 2010 09:27:41 -0800
Message-ID: <3561bdcc1001140927r2223a1faj44800b105a8feebb@mail.gmail.com>
To: "Bajaj, Siddharth" <SBajaj@verisign.com>
Cc: "McClure, Allan H." <amcclure@mitre.org>, Thomas Hardjono <hardjono@mit.edu>, ben@digicert.com, "Shan, Jeff" <JShan@etrade.com>, gpercivall@opengeospatial.org, rsingh@opengeospatial.org, public-xg-mashssl@w3.org
Hi All, Per Siddharth's request on our last call, for IP hygiene I am
posting the use cases I mentioned onto this list (see below) so they
are bound by W3C patent rules. I cut and pasted it from Section 2.0
of https://www.safemashups.com/downloads/MashSSL_Towards_Multi_Party_Trust_in_the_Internet.pdf.



2.0 Motivating the Problem

In this Section we first consider examples of seemingly unrelated
scenarios in cross domain authorization, enterprise mashups, delegated
authorization (OAuth) and identity federation, and show how they all
face a common problem. We spend a considerable amount of time in this
section because we believe it is critical to understanding the
implications of our work in MashSSL.

2.1 Example 1: Alice wants to see a classic movie clip

Consider the following simple problem: Classic_Movie_Clips.com
operates a service by which it provides streaming video of rare
movies. Its business model is not consumer direct. Rather, it forms
alliances with popular movie sites such as See_Movies_Here.com. When a
user, Alice, logs onto See_Movies_Here.com and clicks on a link that
is actually served by Classic_Movie_Clips.com, her browser makes a
request to the latter site. How does Classic_Movie_Clips.com decide
whether to grant Alice access? Here are some options.
Classic_Movie_Clips.com:
- Could maintain an Access Control List (ACL) of all partner sites,
check the ORIGIN header in the request from Alice's browser, and, if
the ORIGIN header's site is present in the ACL, return that value to
the browser as the allowed origin. The browser will then check whether
that value matches the ORIGIN and allow access if it does.
- Could be forced into the business of maintaining user identity and
authenticating the user directly or through some federation protocol.
- Could implement some proprietary cryptographic handshake protocol
with its partners, which ensures Alice arrives at the site requesting
a service with some “ticket”.
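The first option can be sketched as a minimal Origin-header check. This is a hypothetical illustration, not code from the paper or the W3C specification; names such as `PARTNER_ACL` and `allow_origin` are made up:

```python
# Hypothetical sketch of option 1: check the browser-supplied ORIGIN
# header against an access control list of partner sites, and echo it
# back as the allowed origin for the browser to enforce.

PARTNER_ACL = {
    "https://see_movies_here.com",
    "https://another_partner.example",
}

def allow_origin(origin_header):
    """Return the value to echo back to the browser as the allowed
    origin, or None to deny the request outright."""
    if origin_header in PARTNER_ACL:
        return origin_header  # browser checks this against its ORIGIN
    return None
```

Note that this check trusts the client to report its ORIGIN honestly, which is exactly the weakness discussed next: it protects honest users from malicious servers, not servers from malicious clients.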

These are all suboptimal options. The first, which is roughly speaking
the W3C specification for cross domain authorization, can be defeated
by Connie the Content Thief, who simply spoofs the ORIGIN header to one
she thinks will work. (Obviously the specification assumes some other
mechanism will protect the web application from a malicious user; the
mechanism is mainly there to protect legitimate users from malicious
web servers.) The second option is even worse, in that it forces a
business that does not want to maintain user information to collect it
from a user who may not want to reveal it to yet another web service.
More subtly, even if there were a reason why Classic_Movie_Clips.com
needed user identity, it is simply bad practice to use user
authentication for business-to-business authentication. We will return
to this point later. The third option technically works, but it relies
on a protocol that may not have been standardized and has not stood
the test of time, and it requires yet another set of credentials to be
managed.
In this example the site being visited needs to somehow “look behind”
the user’s browser and assure itself with certainty of the origin of
the request. This problem has received a lot of attention lately[2],
and we invite the interested reader to peruse the W3C proposal for
cross domain access, the Microsoft XDR work, etc. As a quick
disclaimer, the intent of this work is not to propose a different
approach to this problem, but rather to demonstrate a way in which the
W3C proposal can be implemented efficiently and securely.

2.2 Example 2: Mashups cause Joe the CSO to lose sleep

Mashups are an emerging class of web applications that are expected to
gain widespread popularity. Loosely speaking, a mashup is an
application that aggregates data and logic from different
applications, often hosted by different service providers.
In the “enterprise context” (as opposed to the “consumer context”) the
fundamental allure of mashups is the promise of much faster deployment
of business functionality by the IT Department. Or, better still (from
the business point of view), the ability to deploy new functionality
without any involvement of the IT Department.

[2] The Same Origin Policy implemented in current browsers would
prevent such cross domain requests. The latest versions of most
browsers now allow such functionality using the method described as
the first option above.

MashSSL: Towards Multi-Party Trust in the Internet
© Ravi Ganesan 2008-2009
So why is Joe the CSO troubled? While the business unit sees rapid
deployment of new functionality, and the IT Department sees cool new
technology, Joe the CSO sees the mashup on the enterprise desktop as:
- An unanticipated entry point for external sources into internal
applications from a desktop inside the corporate perimeter.
- An unanticipated exit point for sensitive corporate information.
- A new way to compromise the desktop.

There have been several suggestions for reducing these risks. Almost
all of them fall into the category of “let us sandbox data/code from
the external world and/or limit the harm it can do”. For instance, the
OpenAJAX hub specification was recently augmented by a technology from
IBM called SMash which polices inter-widget communication. The
problems and the proposed resolutions have much in common with trying
to keep the desktop secure from Trojans, viruses and other malware,
with the browser playing the role of the Operating System. In fact a
research project from Microsoft, MashupOS, explicitly takes this
stance.

There are two limitations to this general approach:
1. There is an obvious trade-off between functionality and security.
Limit what the widget can do too much and you lose functionality; give
an external widget too many privileges and there is little security.
This is a classic security problem. Ideally the enterprise would like
to thread the needle by getting fine-grained control of policy
enforcement in a dynamic and context-sensitive fashion. Observe that
in the OpenAJAX architecture the enterprise never gets to “see”[3] the
widget the hub has downloaded, and Joe the CSO need not be a control
freak to feel some anxiety.
2. A second, broader limitation is that it is very hard to ‘keep up’
with what malware can do. After decades of fighting the good fight
against malware, it is generally agreed that the anti-malware software
on your desktop has severe limitations. This paper from Google
Research, for example, makes rather dark reading. It has reached the
point where, in this instructive article, Mark Bregman, CTO of
Symantec, argues that “whitelists” may become essential and that
perhaps we will move to ‘reputation based systems’. Observe that in
the context of widgets running within a browser mashup, the code is
likely to be dynamic JavaScript and it may be difficult to collect
‘signatures’ of all allowed widgets. In either event, we believe that
being able to strongly authenticate the source from which the widget
is being downloaded allows the enterprise to build more trust.
The intent of our work is not to argue against solutions like SMash or
all the other approaches to constraining widgets. Rather, we will show
that all these solutions would benefit by being augmented with an
approach where the enterprise plays an active, real-time role in
assuring itself of where a widget is being downloaded from. To be
specific:
1. The enterprise should have at least some control of interactions
between an enterprise desktop and an external source. We believe that
completely ceding enforcement of a policy to the browser is tantamount
to losing control. The enterprise needs to be able to police what gets
executed where.
2. The enterprise should be able to “look behind the browser” to see
where the browser is downloading widgets from and only allow such an
action to proceed from a trusted source. This list of trusted sources
will take the place of whitelists. While ‘signatures’ of every widget
out there may be hard to obtain, we can at least ascertain the
antecedents of the widget. Joe the CSO may have more faith in widgets
coming from trusted business partners than from other sites his
enterprise user wants to mash with.
3. And optionally, while a widget might ‘run’ at the user’s desktop to
take advantage of local computing power, perhaps Joe the CSO can sleep
better if he has the option to inspect the widget before it gets
executed.
Observe that in our example of Alice and her movies, the problem was
that the web application Alice was “going to” needed to look behind
her browser to see where she was “coming from”. In this example the
enterprise Alice is “coming from” needs to know where she is “going
to”.
[3] An alternate mashup strategy is to do all the mashing on a server
and use the user’s browser for display only. This certainly provides
more opportunity for control, but is bucking the general trend towards
using the idle power on the desktop to good effect. We believe that
there will always be room for both models and that with high
likelihood, sophisticated mashups will use both strategies
opportunistically. A security architecture for mashups cannot assume
one model or the other, and must support both. In our next example we
will consider server side mashups.

2.3 Example 3: Alice wants to go on vacation
Alice wants to book a budget vacation, and rather than make her own
reservations at her preferred travel providers FlyAir, StayInn and
DriveCar, she relies on BudgetTravel to watch over prices twenty-four
hours a day and to act when prices are low. Now since she cannot
be on-line waiting all the time, Alice needs to allow BudgetTravel to
act on her behalf, but she does not want to give up her credentials to
FlyAir, StayInn and DriveCar.
Luckily for her, all parties have recently adopted the OAuth delegated
authorization protocol, which was designed to solve just this problem.
When she sets up the delegation, BudgetTravel redirects
her browser to FlyAir where she is asked to authenticate and give
permission to FlyAir to divulge her data to BudgetTravel. She repeats
this process with StayInn and DriveCar. Once Alice has finished the
delegation, BudgetTravel can access each of the providers directly. We
make the following observations on the OAuth protocol:
- As part of this process BudgetTravel needs to have obtained OAuth
credentials from FlyAir, StayInn, DriveCar, and the hundreds of other
such travel providers. And each travel provider has to provide an
OAuth credential to each of the travel agencies it deals with.
- The above credentialing headache is particularly ironic since OAuth
recommends that the parties use SSL for transport level security,
which implies that each of these parties already has a digital
credential (the SSL certificate) that the other party explicitly or
implicitly trusts.
- Let us assume that BudgetTravel and FlyAir are both very popular. It
is likely that they are performing a similar ‘mashup’ for many users.
As currently specified, BudgetTravel would have to re-authenticate
itself to FlyAir each time, and FlyAir would have to verify the
credentials for each instance of the mashup. If the public-key version
of OAuth authentication is used, the performance overhead could be
significant.

Let us return to our theme of “looking behind the browser”. OAuth
recommends that Alice’s session with BudgetTravel be protected with
SSL at the transport level, so Alice’s browser has authenticated
BudgetTravel. Yet, this assertion cannot be communicated by Alice to
FlyAir (remember that Alice cannot be trusted), because FlyAir cannot
“look behind her browser”.
The identical problem is true in reverse; even though Alice’s session
with FlyAir is protected by transport level SSL, and her browser has
authenticated FlyAir, this assertion cannot be securely passed to
BudgetTravel. The intent of our work is not to argue that OAuth be
replaced. Rather, we simply show how it can be augmented to remedy the
above deficiencies. We will do this with no change to the protocol
itself, and will simply either tunnel OAuth through MashSSL, or use
the “additional parameters” already specified in most OAuth messages.
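The credentialing observation above can be made concrete with a little arithmetic. The numbers below are hypothetical and the function names are ours, but they show why pairwise OAuth credentials scale much worse than reusing the SSL certificates each party already holds:

```python
# Hypothetical illustration of the credentialing headache: with A travel
# agencies and P travel providers, pairwise OAuth credentialing needs one
# shared credential per (agency, provider) pair, while reusing existing
# SSL certificates needs only one credential per party.

def pairwise_oauth_credentials(agencies: int, providers: int) -> int:
    # one shared secret per agency/provider pair
    return agencies * providers

def reused_ssl_certificates(agencies: int, providers: int) -> int:
    # one certificate per party, reused for every relationship
    return agencies + providers

# e.g. 100 agencies and 500 providers:
# pairwise_oauth_credentials(100, 500) == 50000
# reused_ssl_certificates(100, 500) == 600
```

The gap widens multiplicatively as either side of the market grows, which is the irony the second bullet points at.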
Let us turn to one final example of “browser opacity” and the problems
it causes.
2.4 Example 4: Alice wants to use her OpenID
Almost all identity federation protocols (OpenID, SAML, 3-D Secure,
etc.) have at their heart the exchange shown in this Figure.
The protocols typically at a minimum define the second and fourth
steps; the request for authentication and the response.
All these protocols are vulnerable to various types of phishing
attacks precipitated by an active or passive man-in-the-middle (MITM)
attack. Four scenarios for such attacks are shown in the next figure:
- In the first (clockwise from left top) the attacker simply tricks
the user (e.g. by faking both the RP and IP sites) into giving up
their credentials. A key observation is that federation protocols like
OpenID (similar comments apply to OAuth) tend to make it more likely
that users will become accustomed to being asked for their IP
credentials often, making this attack more likely.
- The second attack is quite similar to the first, except that it
plays out in real time.
- In the third attack, the MITM is between the user and the IP.
- And, in the last attack, the MITM somehow gets between the user and the RP.
The first two attacks can and should be prevented by making it very
difficult for a user to give away their credential. This could take
the form of a credential that cannot be given away, or of gradually
retraining everyone to type their passwords only into a protected
area in the browser. The last two MITM scenarios are directly caused
by the “cannot look behind the browser” problem. A legitimate user has
SSL protected communications with the RP and with the IP, and so the
browser “knows” who is at each end. Yet, the IP and RP are not able to
“look behind the browser” in a secure fashion (remembering again that
the user cannot be trusted) and verify the identity of the server at
the other end. Each of these protocols has to go through some
contortions to ensure that such a MITM cannot sneak in.
We reiterate that we are not going to propose changes to OpenID or
SAML. Rather, we will simply show that if they had a secure way to
look through the browser, their efforts could be simplified and made
more secure.

2.5 The General Recurrent Problem and Solution Requirements
We hope we have been successful in showing that the current Internet
security fabric has a fundamental problem. When Alice is at her
browser communicating with two web applications, it is very often
important for Web App 1 to be sure that it is Web App 2 at the other
end, and vice versa.
The examples we showed are only a few of the many situations in which
this problem manifests itself.
Let us now enumerate our perspective on the requirements any solution
to this problem must meet. We believe that the following requirements
are critical to the solution:
1. Solve it once! We believe that we should solve this problem once at
the fabric level, and not in a piecemeal fashion for the various
situations in which the absence of a fundamental solution manifests as
a problem. This is not a cross-domain XHR, OpenAJAX, OAuth, OpenID,
SAML, etc. problem, it is a fundamental Internet security component
that is missing in action!
2. Solve it at the application level. It will be tempting to find a
way to stitch together the two SSL pipes shown in the above figure at
the socket level, and in fact the solution we have devised can work at
the socket level. However, as applications migrate into clouds, where
their ‘server address’ will become increasingly irrelevant, we feel
the problem should be solved at the application level.
3. Do not trust the browser or the user, or require changes to the
browser. It might seem fairly obvious that Web App 2 cannot take the
word of the browser on which application directed it to Web App 2; we
hope we illustrated this in Example 1. We do not claim trusting the
user is always avoidable, but minimizing that trust is essential.
Further, we believe solutions that require changes to the browser
should be avoided, as they cannot be used in the near term. The
solution has to support existing browsers.
4. Don’t use user authentication as a proxy for server authentication.
Let us say a man walks into a business, claims he is a lawyer, and
shows the business a power of attorney purportedly signed by you. The
business will verify your signature of course, but even if it is
satisfied you signed it, it is almost inconceivable that the business
will allow the man to transact business on your behalf without
authenticating him. Take another example: you order your bank to move
a certain amount of money to your brokerage. Sure the bank will
authenticate you, but when the money actually moves, the bank and the
brokerage will perform B2B authentication (or use a trusted 3rd
Party). They will certainly not take your word for who is at the other
end of the connection. In the brick and mortar world, almost every
business-to-business transaction implicitly or explicitly requires
mutual authentication of the businesses, not just the client on behalf
of whom the transaction happens. There is little reason to believe
that if the transactions are conducted electronically, that end-user
authentication (using an easily compromised user ID and password!)
will substitute for businesses verifying the identity of the
counterparty business. Early mashups tend to fall into this trap,
often substituting user-to-business authentication for
business-to-business authentication. We believe they do so because of
the absence of this “missing security function”. Rather than “make
do” in this fashion, we believe we need to fix the underlying problem.
5. No new crypto protocols! We believe that the only good crypto
protocol is an old crypto protocol that has stood the test of time.
While it is easy to invent a new protocol to solve this problem, it is
far better to solve it with a protocol that everyone knows and trusts.
For instance, one can imagine starting with XML Digital Signature and
XML Encryption and building a protocol that solves our problem[4], but
it will take years before we can build confidence in such a protocol.
This is simply a function of how cryptography works; there were
significant issues addressed in SSL several years after it was in
widespread use.
6. No new trust infrastructures! Perhaps the hardest part of any
solution would be to distribute trustworthy credentials to
applications. This is a problem of business processes, trust,
insurance, indemnity, etc., and it is infinitely preferable to use an
existing infrastructure.
7. Plan for scale. Finally, we believe that the solution should
anticipate that in many practical situations the same two web apps
will need to repeatedly authenticate each other, but through different
users, and have its performance optimized for that situation.
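Requirement 7 can be sketched in the spirit of SSL session resumption: cache the result of one full mutual authentication between two web apps and reuse it across users until it expires. The class, its names, and the handshake callback are hypothetical illustrations, not part of any specified protocol:

```python
# Hypothetical sketch of requirement 7: the same two web apps
# authenticate each other once with a full (expensive, public-key)
# handshake, then reuse the resulting session for subsequent users
# until it expires, much like SSL session resumption.
import time

class PeerSessionCache:
    def __init__(self, ttl_seconds: float = 300.0):
        self.ttl = ttl_seconds
        self._sessions = {}  # peer id -> (session key, expiry time)

    def get_or_handshake(self, peer, expensive_handshake):
        """Return a cached session key for peer, or run the full
        mutual-authentication handshake and cache its result."""
        now = time.monotonic()
        entry = self._sessions.get(peer)
        if entry is not None and entry[1] > now:
            return entry[0]              # reuse: no public-key work
        key = expensive_handshake(peer)  # full mutual authentication
        self._sessions[peer] = (key, now + self.ttl)
        return key
```

With a design like this, if BudgetTravel mashes with FlyAir for thousands of users, only the first request pays the handshake cost within each TTL window.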
Before turning to our solution, we first take a closer look at the SSL protocol.
Received on Thursday, 14 January 2010 17:28:18 UTC
