- From: Brett Zamir <brettz9@yahoo.com>
- Date: Sun, 14 Mar 2010 09:45:26 +0800
On 3/12/2010 3:41 PM, Anne van Kesteren wrote:
> On Fri, 12 Mar 2010 08:35:48 +0100, Brett Zamir <brettz9 at yahoo.com> wrote:
>> My apologies if this has been covered before, or if my asking this is a bit dense, but I don't understand why there are restrictions on obtaining data via XMLHttpRequest from other domains, if the request could be sandboxed to avoid passing along sensitive user data like cookies (or if the user could be asked for permission, as when installing browser extensions that offer similar privileges).
>
> Did you see
>
> http://dev.w3.org/2006/webapi/XMLHttpRequest-2/
> http://dev.w3.org/2006/waf/access-control/
>
> ?

I have now, thanks. :) I regrettably don't have a lot of time right now to study them as deeply as I'd like (nor Michal Zalewski's reference to UMP), and I can't speak to the technical challenges for browsers (and their plug-ins) of implementing the type of sandboxing that would be necessary here, if they don't do so already. I was just hoping to articulate interest in finding a way to overcome these restrictions if possible, and to ask whether the security challenges could be worked around in at least a subset of cases.

While I can appreciate such goals as trying "to prevent dictionary-based, distributed, brute-force attacks that try to get login accounts to 3rd party servers", as mentioned in the CORS spec, and preventing spam or the opening of accounts on behalf of users and the like, I would think that at least GET/HEAD/OPTIONS requests need not be as serious an issue.

As for the issue Michal brought up about the client's IP being sent, I would think this could be mitigated by adding a client header indicating the domain of origin behind the request. It's hard to lay the blame on the client for a DoS if it is known which server initiated it. (Maybe this raises some privacy issues, since the system would make known who was visiting the initiating site, but I'd think A) this info could be forged anyway, and B) any site could publish its visitors anyway.) I'll admit this might make things more interesting legally, though, e.g., whether the client shares some or all responsibility for DoS or copyright violations, especially if interface interaction controlled the number of requests. But as far as the burden on the user goes, if users are annoyed that their browser is being slowed by requests made on their behalf (though I'm not sure how much work this would save, given that the server still has to maintain a connection), they can close the tab/window, or the browser could offer to selectively disable such requests or ask permission.

I would think that the ability for clients to help a server crawl the Internet might even be a feature rather than a bug, allowing a different kind of proxy opportunity for server hosts in countries with blocked access. Besides this kind of "reverse proxy" (to alter the phrase), I wouldn't think it would be very compelling for sites to outsource their crawling (except maybe as a very insecure and unpredictably accessible backup or caching service!), since they'd have to retrieve the information anyway; but again, I can't see what harm there would really be in it, except that plans for addressing DoS would need to take the additional header into account.

I apologize for not being able to research this more carefully, but I was hoping to see whether there might be some way to allow at least a safer subset of requests, like GET and HEAD, by default.
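To make this concrete, here is a minimal sketch (in client-side JavaScript) of the sort of sandboxed, cookie-less cross-origin GET I have in mind, roughly following the CORS draft Anne linked; the URLs and the processData function are made up for illustration:

    // Hypothetical cross-origin GET against a made-up endpoint.
    // Under the CORS draft, the browser (not the page) attaches an
    // "Origin: http://initiating-site.example" request header, and it
    // only exposes the response to the page if the server replies with
    // a matching "Access-Control-Allow-Origin" header.
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "http://other-domain.example/data.xml", true);
    // Cookies and HTTP auth are withheld unless this flag is set; this
    // is the sandboxed behavior I am suggesting should be a safe default:
    xhr.withCredentials = false;
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
            processData(xhr.responseText); // made-up mash-up handler
        }
    };
    xhr.send(null);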
Akin to the rationales behind my proposal for browser support of client-side XQuery, including as a content type (at http://brett-zamir.me/webmets/index.php?title=DrumbeatDescription ), it seems to me that users could really benefit from such a capability in client-side JavaScript, not only for the sake of greater developer options, but also for encouraging greater experimentation with mash-ups, as the mash-up server is not taxed with having to obtain the data sources (nor tempted to store stale copies of the source data, nor perhaps as concerned with the need to obtain republishing permissions).

>> Servers are already free to obtain and mix in content from other
>> sites, so why can't client-side HTML JavaScript be similarly empowered?
>
> Because you would also have access to e.g. IP-authenticated servers.

As suggested above, could compliant browsers be required to send a header along with the request indicating the originating server's domain?
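For illustration, here is a rough sketch of what the server side of such a scheme might look like, written as Node.js-style JavaScript with a made-up whitelist (nothing below comes from the specs apart from the two header names):

    // Illustrative only: an IP-authenticated server could still opt in
    // per origin by checking the browser-supplied Origin header.
    var http = require("http");
    var allowed = { "http://mashup.example": true }; // made-up whitelist

    http.createServer(function (req, res) {
        var origin = req.headers.origin;
        if (origin && allowed[origin]) {
            // Echo the origin back so the browser exposes the response.
            res.writeHead(200, { "Access-Control-Allow-Origin": origin });
            res.end("<data/>");
        } else {
            // Unknown or absent origin: refuse, and the browser will
            // withhold the response body from the page in any case.
            res.writeHead(403);
            res.end();
        }
    }).listen(8080);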
best wishes,
Brett

Received on Saturday, 13 March 2010 17:45:26 UTC