Re: [AC] Helping server admins not making mistakes

Thomas Roessler wrote:
> On 2008-06-10 16:51:29 -0700, Jonas Sicking wrote:
> 
>>>  - There needs to be a security consideration about Range, and not
>>>    getting fooled by it -- since, frankly, the implementation you
>>>    describe above is flawed.
> 
>> Which implementation is flawed?
> 
> An implementation of access-control that lets itself be fooled by
> Range headers.

It's entirely possible that this won't be a problem in most
implementations, though I think it would have been in the one that I
have so far.

However, my point was that it is not far-fetched that there are headers
that affect the response in a way that undermines the security of an
Access-Control request.
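To make the Range concern concrete, here is a purely hypothetical sketch (the resource contents, policy syntax, and flawed check below are illustrative, not the implementation under discussion): if a client derives its access-control decision from the response body, a byte-range slice of the same resource can produce a body that checks differently from the full document.

```python
# Hypothetical sketch of the Range pitfall: a flawed client that scans the
# bytes before the root element for an access-control processing
# instruction, trusting whatever portion of the resource it received.
# Resource contents and policy syntax are made up for illustration.

FULL = (
    b'<?xml version="1.0"?>\n'
    b'<doc><note>posted by a user: <?access-control allow="*"?></note>'
    b'<secret>s3cr3t</secret></doc>\n'
)

def naive_allows(body: bytes) -> bool:
    # Flawed: only looks at the "prolog" before the first "<doc" it can
    # find, but runs on whatever bytes the server happened to return.
    end = body.find(b'<doc')
    prolog = body if end == -1 else body[:end]
    return b'<?access-control allow="*"?>' in prolog

# Over the full document the attacker-supplied text sits inside an
# element, so the check correctly refuses it ...
assert not naive_allows(FULL)

# ... but a "Range: bytes=<offset>-" request can slice the response so the
# same text lands where a prolog PI would be, and the check now passes.
partial = FULL[FULL.index(b'<?access-control'):]
assert naive_allows(partial)
```

The point is only that a policy derived from the response body inherits the response's sensitivity to request headers like Range.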

>>>  - Can we get rid of the processing instruction already?
> 
>> I would actually be ok with that. But I know others have
>> expressed a strong opinion to keep it.
> 
>>>  - Wouldn't be a problem if we went for a model in which the amount
>>>    of flexibility in the client is minimal, effectively forcing
>>>    developers to put the enforcement into the server.
> 
>> How is that?
> 
> Because (a) there wouldn't be a processing instruction, and (b)
> enforcement in that model would seem to be less susceptible to weird
> HTTP behavior.

That assumes that servers are going to be better at enforcing the
security model than the browsers. That is not something I'm entirely
convinced of, especially given that the number of server configurations
out there is much greater than the number of client configurations.

> In the current model, you need to make sure that server-side
> implementations don't mess with headers in an unpredictable way; the
> current approach is to attempt profiling the *requests* down to
> something predictable.
> 
> There's also protecting the content of XML resources from being
> tampered with by way of weird request construction.
> 
> Contrast that with a model in which you transmit relatively minimal
> information in the HTTP request ("this is a cross-site request from
> X", or "can you deal with cross-site requests" for the pre-flight),
> and similarly minimal information in the response ("ok to show to
> the origin that you told me about").  In this model, the security
> critical processing is limited to evaluating one request header;

It makes it simpler for the parties to decide on "is it safe to make
this request and/or show this data", yes. It doesn't make it simpler
for the server to deal with the actual request that comes in after the
preflight, though.
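The minimal model being described, where the only security-critical step is evaluating one request header, might look like the following server-side sketch. The header names follow the drafts of the time and the allowlist is an assumption for illustration:

```python
# Sketch of the minimal server-side check: evaluate the origin of the
# incoming request and nothing else.  Header names and the allowlist are
# assumptions for illustration, not quotes from the spec.

ALLOWED_ORIGINS = {"https://example.org"}   # hypothetical allowlist

def decide(request_headers: dict) -> dict:
    """Return the response headers to attach, or {} to deny."""
    origin = request_headers.get("Origin")
    if origin in ALLOWED_ORIGINS:
        # "ok to show to the origin that you told me about"
        return {"Access-Control-Allow-Origin": origin}
    return {}   # fail closed: no header, no cross-site access

# decide({"Origin": "https://example.org"}) grants access;
# decide({"Origin": "https://evil.example"}) and decide({}) do not.
```

The appeal of this model is that a missing or malformed header simply produces an empty decision, i.e. a denial.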

I think the model we have for GET requests and for the pre-flight
request is really simple and (modulo cookies) safe. I'm not really
worried that server operators are going to get those things wrong. What
I am concerned about is server operators dealing with the non-safe
requests after the pre-flight, so those are the ones I want to shore up.

> the
> failure modes for strange HTTP responses are limited to data not
> being accessed -- i.e., the system fails safely.

How is that? If you fail due to someone sending you headers or methods
you hadn't expected, couldn't the behavior go either way?

>>>> However we could obviously not apply the same fix if other
>>>> custom headers have the same problem.
> 
>>> Indeed.  But maybe the fix here is really to drop the processing
>>> instruction and rely on headers only.
> 
>> Right, but that would still only solve some of the dangerous
>> server features I described. It wouldn't solve a header that
>> stitches two resources together, or that allowed insertion of a
>> custom header in the reply.
> 
> See above about the custom header; that kind of problem is inherent
> to the current design.

It's not inherent; my proposal greatly reduces the risk of unexpected
headers, though I agree it increases the complexity for the client.

> In any event, there is no reason to believe that these effects are
> limited to headers, and can't be generated by using exotic HTTP
> methods, or using exotic query parameters that some whacky
> server-side framework evaluates in an interesting way.  So it
> strikes me that the vulnerabilities that you are defending against
> mostly relate to server software that exhibits exotic behavior in
> constructing HTTP responses.

My proposal addresses both headers and methods, no? See the original
mail in this thread.

It doesn't address query parameters, I agree, but it seems much less
likely that the server will react to a query parameter directly rather
than pass it on to the CGI script. I've never heard of such a thing.

But like I said in a recent mail, this is all about reducing the risk of
bad configurations leading to exploits. There is no way we can fully
prevent mistakes on the server side; all we can do is reduce the
probability.

> If you're really worried about these kinds of attacks, then go for a
> model (like what I described above) in which the enforcement happens
> on the server side (possibly even within a web application firewall,
> before hitting the server proper), and don't make the protection
> depend on the HTTP response.

I don't understand this argument at all. My concern is that the server 
won't be properly handling some requests and you seem to suggest that 
the remedy is sending more requests to the server?
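For what the "enforcement before hitting the server proper" idea could look like in practice, here is a hypothetical sketch: a WSGI middleware standing in for a web application firewall, rejecting cross-site requests before the application behind it ever sees them. The allowlist and header names are assumptions for illustration:

```python
# Hypothetical sketch of enforcement in front of the server proper: a
# WSGI gate (standing in for a web application firewall) that refuses
# cross-site requests before they reach the application.  The allowlist
# and header names are illustrative assumptions.

ALLOWED = {"https://example.org"}

class CrossSiteGate:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        origin = environ.get("HTTP_ORIGIN")
        if origin is not None and origin not in ALLOWED:
            # Deny up front; the application never handles the request.
            start_response("403 Forbidden",
                           [("Content-Type", "text/plain")])
            return [b"cross-site request refused\n"]
        return self.app(environ, start_response)
```

Even with such a gate in place, of course, the application still receives and must handle every request the gate lets through, which is the concern above.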

/ Jonas

Received on Thursday, 12 June 2008 02:18:15 UTC