From: Malcolm Rowe <malcolm-what@farside.org.uk>
Date: Tue, 27 Jul 2004 11:57:14 +0100
Jim Ley writes:
> Most programming toolkits aren't run in a browser! This _DOES_ make
> Mozilla SOAP insecure, I have SOAP services running on this intranet
> which are protected purely by the fact they're behind the firewall,
> any machine here can access them, those outside cannot - However,
> www.anydomain.org can use them easily, if someone happens to drop an
> XML file in the root.

Sorry, but I don't see a problem here. You have an internal webserver that contains important data and accepts SOAP calls; let's call it important.example.com. That server doesn't require authentication, but that's ok, because you trust all your users.

Your browser is capable of making scripted SOAP calls. It's also capable of downloading scripts from the internet. You don't want your browser to make calls on your behalf using 'untrusted' scripts, but it's ok for it to make calls using 'trusted' scripts.

There's no way to distinguish 'trusted' from 'untrusted' automatically, for all organisations, since the degree of 'trust' may be different for each service and organisation. For example, internal-phonebook.example.com may trust any scripts downloaded from *.example.com, while important.example.com may not trust scripts downloaded from anywhere else. In particular (for this example), it doesn't trust www.anydomain.com.

Note that this issue of 'trust' is only really relevant in an intranet scenario, where you can trust your users; on the internet at large, a SOAP service has to contend with malicious clients, so the server (and not the client) must handle access control if necessary.

Mozilla's current implementation (ignoring some unimportant details) means that scripts downloaded from important.example.com are the only ones that can access SOAP services on that server. If the server operator then decides that scripts at important-2.example.com can also be trusted, they can create a file at http://important.example.com/web-scripts-access.xml that contains a record for important-2.example.com (I've sketched an example below). The server operator is the one who manages the data; they also get to set the policy.

When any script attempts to make a SOAP call to important.example.com, the XML file will be downloaded by the client. If the file doesn't exist, or if the script's origin does not appear in the file (and the service isn't marked as public in the file), then the SOAP call is prohibited. [The design also allows for delegation, but I'm ignoring that for now.]
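For concreteness, here's roughly what that file might look like. I'm writing this from memory of the Mozilla design notes, so treat the element and attribute names as a sketch of the idea rather than a normative example:

  <?xml version="1.0"?>
  <wsa:webScriptAccess
      xmlns:wsa="http://www.mozilla.org/2002/soap/security">
    <!-- Scripts served from important-2.example.com may make
         SOAP calls to services on this server. -->
    <wsa:allow type="soap" from="http://important-2.example.com"/>
    <!-- To mark the service as public, you'd instead use an
         'allow' element with no 'from' attribute. -->
  </wsa:webScriptAccess>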
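And the check the client performs amounts to something like the following. This is my paraphrase, in JavaScript, of the documented behaviour - it's not the actual Gecko code (which is C++), and the helper functions are invented purely for illustration:

  // My paraphrase of the access check, not the real implementation.
  // sameOrigin(), fetchPolicy(), isPublic() and allows() are all
  // hypothetical helpers.
  function mayInvokeSoap(scriptOrigin, serviceURI) {
    // Same-origin calls are always permitted.
    if (sameOrigin(scriptOrigin, serviceURI))
      return true;

    // Otherwise, fetch /web-scripts-access.xml from the service's host.
    var policy = fetchPolicy(serviceURI);

    // No policy file means no cross-origin access.
    if (policy == null)
      return false;

    // A public service is callable from anywhere; otherwise the
    // script's origin must be listed explicitly.
    return policy.isPublic('soap') || policy.allows('soap', scriptOrigin);
  }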
Incidentally, if (for example) Google decided to allow any script to make SOAP calls to api.google.com, they could carry out the same procedure and mark the service as public.

I don't understand what your objection is to this design. It allows the server operator to decide which scripts can be trusted to make SOAP calls to their service. The server operator, and not the user, is the one who gets to set the policy, and is also the person responsible for securing the data.

The only feasible attack that I can see would be if www.anydomain.com could persuade the server operator to permit access from their domain, either explicitly or by marking the service as public. However, the server operator effectively controls all the data on the server anyway (even in an ASP scenario), so I don't see that this makes the hole any larger. That would be like arguing that Apache's httpd.conf file allows the administrator to disable .htpasswd checks site-wide. Sure, it does, but how does that give you a realistic attack?

Do you have a better idea that still permits public (or semi-public) SOAP services? After all, we don't restrict scripts from POSTing to different domains, which is essentially the same thing.

Now, I do have *some* reservations about this design:

* It pollutes the URI space, which should be completely under the control of the server operator. This is essentially the TAG issue that I originally mentioned.

* It allows the server operator to permit access to all services held on their server (with no override possible). This assumes that the server operator is the stakeholder, which may not be the case. However, as mentioned above, the server operator *does* have effective control over all services running on their server, so this should not introduce any *new* attack vectors.

I also have some issues with the Mozilla implementation:

* It allows a preference to be set that can be used to mark any service as public. I'm sure that this is significantly faster in an intranet setup with mass-deployed clients, since it removes the requirement for every client to check the permissions file on the server. However, as an attacker, I now only need to control that pref value on one intranet client machine in order to be able to access any intranet service.

* As you pointed out, it doesn't appear to be possible to disable this model and revert to just using the same-origin check. I would have thought it prudent to allow this, if only so that we could switch it off for a security fire-drill. Fortunately, this doesn't look like it would be too hard to implement.

Doron, what do you think? Would this be worthwhile?

Regards,
Malcolm