Re: Firewall sample application and requirements

On Tue, May 28, 2002 at 12:38:48PM -0700, David Orchard wrote:
> Why do firewalls break with SOAP being used as an application protocol?  My
> guess is that you will say because the method name is inside the message.

Assuming the method is in the SOAP envelope, yes.  But it doesn't
have to be, when used with an application protocol which already has
its own methods.

> And that parsing the message to get the method name does not scale.

I'm not concerned about that.

> Sample Application: A StockQuote service has 2 different methods,
> getStockQuote for retrieving a quote and a setStockQuote for setting the
> price.  There are different security ACLs for each of the methods.
> 
> Requirements: An intermediary, typically a security intermediary such as a
> firewall, should be able to determine whether to allow or deny access
> based upon the method in the message in a timely manner.  Quote updates must have
> higher priority than gets, as the information must be as current as possible
> according to SLAs.

Sounds good.

> Design:
> 
> 1. Using a SOAP HTTP POST binding to a single URI for both of these, a
> security intermediary must scan the SOAP message to find the first child of
> the body in order to determine which ACL to apply.  The time to scan thus
> varies linearly with the number of headers.  In typical applications,
> however, there will be few headers.  The admin of the security intermediary
> must know which SOAP method to configure the ACLs for.

> 2. Using HTTP GET and HTTP PUT for each of these, a security intermediary
> can use the HTTP method to determine which ACL is applicable.  The admin
> must know which HTTP method to configure the ACLs for.
>
> 3. Two different URIs are used with SOAP HTTP POST.  The security
> intermediary applies the different ACLs to the different URIs, rather than
> URI/method tuples.  The admin applies the ACL to any and all operations at
> the URI.  We would probably partition the application into different clusters
> for the prioritization.  Further, we'd probably do some kind of session
> affinity for the "writes", as there is probably only one "writer" though
> many readers.
> 
> In all cases, the intermediary scans the message for the security
> credentials.
> 
> Benefits of each design:
> 
> 1. The service could be deployed to additional protocols and the admin does
> not have to know any additional method information, i.e. use the same
> method/ACL binding for SMTP.

I consider that a bug, not a feature, for the reasons given below about
firewalls.  Please explain to me why this is desirable, rather than a
security nightmare:

http://www.xmlhack.com/read.php?item=1541

> Easier deployment - there's only 1 resource
> that is deployed.

General problems with the design of #1:

- firewalls don't know what SOAP is
- firewall administrators don't want to have to understand a limitless
number of methods
- unable to reuse HTTP caching on a GET-like operation because it's
tunneled over POST
- assuming the stock quote isn't also available over GET, impossible
to integrate with much of the Web
- a client has to have special knowledge about how to deal with this
resource: that some "magic information" on a POST returns what would
normally be returned over GET.  The normal HTTP application layer
contract is not in effect.
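
To make that cost concrete, here is a rough sketch (Python, purely
illustrative; the element names and ACL labels are invented for the
example) of what an intermediary has to do under #1 versus #2:

  # Illustrative sketch only -- element names and the ACL mapping below
  # are invented for this example.
  from xml.dom.minidom import parseString

  SOAP_ENV = "http://schemas.xmlsoap.org/soap/envelope/"
  ACL_BY_SOAP_METHOD = {"getStockQuote": "readers-acl",
                        "setStockQuote": "writers-acl"}

  def acl_for_design_1(http_method, request_body):
      # Design #1: the method is buried in the SOAP envelope, so the
      # intermediary must parse the XML and walk to the first element
      # child of soap:Body before it can pick an ACL.
      doc = parseString(request_body)
      body = doc.getElementsByTagNameNS(SOAP_ENV, "Body")[0]
      first = [n for n in body.childNodes if n.nodeType == n.ELEMENT_NODE][0]
      return ACL_BY_SOAP_METHOD[first.localName]

  def acl_for_design_2(http_method, request_body):
      # Design #2: the intent is visible in the HTTP request line itself;
      # the entity body never needs to be parsed.
      return "readers-acl" if http_method == "GET" else "writers-acl"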

> 2. No scanning time to find the method name.  Easier deployment - there's only 1
> resource that is deployed.
> 
> 3. Handles node scalability better as it's easier to cluster services by URI
> than by method on a URI.

URIs have nothing to do with scalability.  Some believe they do, because
they believe that URIs are locators rather than identifiers.  That is a
common fallacy, as systems like Akamai demonstrate by serving the same
URI from many different machines.

>  Also simpler for the admin to do security.  This
> is also standard web practice today - higher security messages are typically
> done using SSL on a completely different URI space and set of servers.
> Further, because the method does not have to be determined, this is the
> highest-performing security model.

I'm not sure what you're saying here.  SSL, depending on how you use it,
merely authenticates the server and encrypts the traffic.  It doesn't in
any way change the need for method and/or URI level ACLs.
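
To be clear about what I mean by method and/or URI level ACLs: something
like the sketch below (the URI and role names are made up) is what the
intermediary enforces, with or without SSL underneath.

  # Sketch of a (URI, method) ACL table; the entries are invented.
  ACL = {
      ("/stockquote", "GET"): {"everyone"},
      ("/stockquote", "PUT"): {"quote-publishers"},
  }

  def permitted(uri, method, roles):
      # SSL may authenticate the peer and encrypt the connection, but
      # the allow/deny decision still keys off the URI and the method.
      return bool(ACL.get((uri, method), set()) & roles)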

>  Finally, this model is more consistent
> with current security firewalls which are typically at the URI level.  Most
> web firewalls have a default mode that lumps GET and POST together into the
> same ACL.  Firewalls do typically offer HTTP method distinction, but it's
> not used as often as URI/any method style.

Ok.

Finally for #3, the same problems exist as for #1.

> The choice of application design is dependent upon the trade-offs between
> Service RAS (what #3 optimizes for), run-time performance (#2) and ease of
> deployment across multiple protocols (#1).

There are all sorts of things you can try to optimize for if you break
the fundamental architectural principles of the system you're using.
I've tried to highlight some of the costs of doing this.  In the context
of the Web, the biggest (which are *huge*) are: you're fooling
firewalls, you're making it impossible to communicate a priori (while
ignoring the existing contract already in place over the wire), and you
don't integrate well with the rest of the Web.

> Summary:
> This sample application, requirements and designs provide no compelling
> evidence that SOAP as an application protocol is broken.

SOAP is not an application protocol.  But it is also not broken.  What's
broken is how it is being used.

>  While I'm
> reluctant to bring Roy's thesis into the discussion, his thesis does
> specifically talk about which functions REST is optimized for.  And that's
> the level of what our discussions should be at.  Maybe web services
> applications require different optimizations than browser applications.

Or maybe not.  Are you saying they do?

As Web services are currently implemented, I agree with you, they do,
because interactions are more fine-grained than with hypertext (method
invocations rather than data transfer).

More on this point below.

> Further, this example shows the notion and design of resources and
> representations of resources are application specific.

I don't know what this means.

>  The web is certainly
> optimized for certain types of resources (HTML pages, images) and
> applications (browsers).  The optimizations may not be appropriate for
> application to application communication such as stock quotes.

Web infrastructure isn't optimized for HTML pages and images.  It's
optimized for GET.  GET can return all kinds of stuff that existing
intermediaries can cache, even if they've never seen those types before.
Even XML.
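
A rough sketch of why (illustrative only; a real cache also honours
Cache-Control, Expires, Vary and so on): a generic cache keys on GET
plus the URI, not on the media type, so an XML stock quote is just as
cacheable as a GIF.

  # Toy shared cache: the cache key is the URI of a GET; the media type
  # of the representation is opaque to the cache.
  cache = {}

  def fetch(method, uri, origin_fetch):
      if method != "GET":
          return origin_fetch(method, uri)        # writes always go to the origin
      if uri not in cache:
          cache[uri] = origin_fetch(method, uri)  # HTML, an image, or XML
      return cache[uri]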

The optimizations are appropriate when you choose to use hypertext.
If you choose not to use hypertext, it should be no surprise that the
optimizations for it don't look very appealing.

The question is, why do you believe that hypertext is inappropriate for
Web services?  You've repeatedly suggested (as above) that hypertext is
somehow only good for browsers.  This is not true.

Your scenario #2 uses hypertext.  It doesn't fool firewalls.  It
integrates well with the Web.  Any benefit you might be able to squeeze
out from a non-hypertext scenario pales in comparison to the value of
being part of the Web (especially when this work is happening in the
*World Wide Web* Consortium!!!).
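
For concreteness, here's roughly what scenario #2 looks like on the
wire (a sketch only; the host, path and payload are invented):

  # Sketch of scenario #2 as plain HTTP; host, path and payload are
  # invented for the example.
  import http.client

  conn = http.client.HTTPConnection("example.org")

  # Read a quote: any firewall or cache can see this is a safe retrieval.
  conn.request("GET", "/stockquote/XYZ")
  print(conn.getresponse().read())

  # Update a quote: the write is visible in the request line, so the
  # stricter ACL applies without parsing the body.
  conn.request("PUT", "/stockquote/XYZ",
               body='<quote symbol="XYZ" price="10.25"/>',
               headers={"Content-Type": "application/xml"})
  print(conn.getresponse().status)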

MB
-- 
Mark Baker, CTO, Idokorro Mobile (formerly Planetfred)
Ottawa, Ontario, CANADA.               distobj@acm.org
http://www.markbaker.ca        http://www.idokorro.com

Received on Tuesday, 28 May 2002 22:48:17 UTC