
Re: XMLHttpRequest error events

From: tlhackque <tlhackque@yahoo.com>
Date: Mon, 10 Feb 2014 14:12:56 -0500
Message-ID: <52F924B8.5050102@yahoo.com>
To: public-webapps <public-webapps@w3.org>
Your scenario is that an attacker gets you to visit her website; that 
site feeds you a script that makes AJAX requests, and the resulting 
errors tell Ms. Evil about your network.

Yes, this can happen.  But you also suggest a perfectly reasonable 
solution: if any AJAX request fails one of those checks, insert a 
delay of 30 seconds before all future AJAX requests until the 
browser is closed.  This makes the attack unprofitable, and 
encourages the user to leave the site.
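That mitigation could be sketched like this (purely illustrative; the names and the 30-second figure are from the discussion above, not from any specification, and a real user agent would implement this internally):

```javascript
// Hypothetical sketch of the mitigation described above: once any request
// from a page trips a forbidden-source check, every later request is
// delayed, making bulk probing of the local network impractically slow.
function makeRequestThrottle(penaltyMs = 30000) {
  let tripped = false;
  return {
    // Delay (in ms) the browser would impose before the next request.
    delayForNextRequest() { return tripped ? penaltyMs : 0; },
    // Called when a request fails one of the security checks.
    recordForbiddenFailure() { tripped = true; },
  };
}
```

The point of the sketch is that the penalty is per-page and sticky: one failure is cheap to a legitimate site, but a probing script that expects hundreds of failures becomes useless.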

There are other choices:

  * have the browser notify the user: 'This website has attempted to use
    your browser to fetch data from a forbidden source'.  'A script from
    <evil.example.com> attempted to contact 'payroll.example.net', which
    is unreachable'.
    Perhaps make this behavior conditional on a parameter to open().  If
    it annoys the user, it communicates the fact that the website is
    doing something unexpected.  If not, the user isn't going to get his
    work done, so he might as well be told why - which enables a problem
    to be fixed.
    This gives nothing away to the bad guy.
  * Or require compliant browsers to log the details of these errors in
    an about: page.  Perhaps terminate a script that does particularly
    questionable things - denying it the opportunity to report back.
  * Or have a preference option that enables (more) detailed error
    reporting, perhaps with a policy that it reverts to 'unset' after
    a period of time, or when the browser closes.

If a user has to take a positive action to get/enable information, the 
risk is much lower.  After all, if the user is the attacker, browser 
security is minimal - writing probing applications is trivial in any 
reasonably modern scripting language.  Your scenario is that of an 
unwitting (hijacked) participant.

I think we should strike a better balance between security via obscurity 
and making web applications easy to maintain, develop and operate.  I've 
suggested some choices to start the conversation.  I don't claim they're 
the final answer.  But applying some creativity to helping the good guys 
- not just resisting the bad guys - is in order.

As things stand, the API denies evil people some information - but it 
also denies the actual users and their developers the information that 
they need to solve problems.  This is effectively a denial of service 
attack - without the attackers lifting a finger.  Applications take 
longer to develop, cost more to diagnose, maintain and operate.
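To make that concrete, here is roughly the best user-facing diagnostic a page can build from an asynchronous XHR 'error' event today (a sketch; the helper name is mine, and the byte-count fields are the only variable information the event carries):

```javascript
// An asynchronous XHR "error" event carries only the event type and byte
// counts -- no status, no reason.  This helper (name is illustrative only)
// shows about the best message a page can construct from it.
function describeXhrError(event) {
  if (event.loaded === 0) {
    return 'No connection was established (cause unknown).';
  }
  const sent = event.lengthComputable
    ? `${event.loaded} of ${event.total} bytes`
    : `${event.loaded} bytes`;
  return `Connection failed after sending ${sent} (cause unknown).`;
}

// In a page this would be wired up roughly as:
//   xhr.addEventListener('error', e => showToUser(describeXhrError(e)));
```

Note that every failure mode (DNS error, refused connection, broken connection) collapses into the same two messages.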

On 10-Feb-14 13:14, Nick Krempel wrote:
> You misunderstood: not only a failed CORS check but any error 
> occurring *before* the CORS check would need to be reported as a 
> generic NetworkError without further diagnostics.
> Otherwise you could write a script to probe the local network by 
> firing off (failing) CORS-enabled XMLHttpRequests to various IP 
> addresses / local DNS names and get a treasure trove of useful 
> information (for attackers).
> This is already possible to some extent via a timing attack, but at 
> least that's more work for the attacker, and is something that could 
> be mitigated by user agents in the future through the insertion of 
> artificial delays for failed cross-origin fetches.
> Nick
> On 10 February 2014 18:01, tlhackque <tlhackque@yahoo.com 
> <mailto:tlhackque@yahoo.com>> wrote:
>>     for security reasons cross-origin fetches would not be able to
>>     provide these diagnostics 
>     In that case, the error event should report "Forbidden by
>     cross-origin policy".   At least that gives a clue - we know it's
>     not a DNS failure or an unplugged cable or...  In this case, not
>     directly useful to the end-user, but when (s)he reports it to the
>     help desk, the person they escalate it to will have a place to
>     start.  And a developer will (should) know what to do when his
>     testing(?) encounters the problem.
>     Hiding the failure cause does nothing to improve security, rather
>     it makes diagnosing issues and writing good code harder.  As such,
>     it's more likely to cause people to write overly permissive code
>     'to get it working'.   The bad guys know what rule they're trying
>     to break.  So the current behavior really only hurts the good guys.
>     On 10-Feb-14 12:16, Nick Krempel wrote:
>>     This sounds nice, but for security reasons cross-origin fetches
>>     would not be able to provide these diagnostics unless the fetch
>>     got as far as passing the CORS check.
>>     On 10 February 2014 12:30, tlhackque <tlhackque@yahoo.com
>>     <mailto:tlhackque@yahoo.com>> wrote:
>>         I found myself using XMLHttpRequest in an application where,
>>         when things go wrong, I wanted to provide information to the
>>         user.  I was unable to do so.
>>         Specifically, I POSTed a form with a large file, and
>>         specified listeners for abort and error events (among others)
>>         on both the request and upload targets.  I tried
>>         disconnecting the network, shutting down the target
>>         webserver, mis-spelling the host name, having the server
>>         refuse service and injecting various other real-world
>>         misadventures.
>>         Although I get the events in several browsers, nothing in the
>>         event tells me what went wrong.  And I find nothing in the
>>         specification (
>>         http://dvcs.w3.org/hg/xhr/raw-file/tip/Overview.html -
>>         22-Nov-2013) to help.
>>         According to the spec, in the synchronous model, the
>>         exception would at least specify 'NetworkError'.  Async, the
>>         only thing one gets is the event type, bytes transferred, and
>>         (possibly) total bytes.  One can assume if loaded is zero, no
>>         connection was established; otherwise there was a connection
>>         failure, but that's about it.
>>         It really would be useful - both for debugging and for
>>         providing feedback to users - if the error and abort events
>>         provided some detail:
>>         Was the problem:
>>            DNS failure: no server, host has no A(AAAA) record?
>>            Proxy not reachable?
>>            Host not reachable?
>>            SSL certificate expired?
>>            Connection broken?
>>            Aborted by user action?
>>            Aborted by object destruction?
>>         Some supporting detail (e.g. IP address of peer, proxy, etc)
>>         would be helpful too.
>>         This is not intended to be an exhaustive list.
>>         While I would discourage client scripts from trying to
>>         analyze OS-specific error codes, some user-actionable clues
>>         would be really helpful.  'An error occurred sending
>>         <filename>' is about the best one can do currently, and that
>>         doesn't give the user - or the helpdesk - much to go on!
>>         Please consider updating the spec to provide reasonable
>>         diagnostics for network error events.
>>         -- 
>>         This communication may not represent my employer's views,
>>         if any, on the matters discussed.
>     -- 
>     This communication may not represent my employer's views,
>     if any, on the matters discussed.
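For what it's worth, an error event carrying the kind of detail the quoted message lists might look like this.  Everything here is hypothetical; no browser or specification defines these fields, and the phase names are invented for illustration:

```javascript
// Entirely hypothetical event shape for the diagnostics requested above.
// The phases loosely map to the failure list in the quoted message:
// DNS failure, proxy unreachable, host unreachable, TLS problem,
// connection broken, aborted.
const FAILURE_PHASES = ['dns', 'proxy', 'connect', 'tls', 'transfer', 'abort'];

function makeDetailedNetworkError(phase, detail) {
  if (!FAILURE_PHASES.includes(phase)) {
    throw new RangeError(`unknown failure phase: ${phase}`);
  }
  // "detail" would hold supporting information such as a peer or proxy
  // address -- exactly the data a help desk would want.
  return { type: 'error', phase, detail };
}
```

Even this coarse phase field would let a script distinguish "the name didn't resolve" from "the server hung up", without exposing OS-specific error codes.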

This communication may not represent my employer's views,
if any, on the matters discussed.
Received on Monday, 10 February 2014 19:13:28 UTC
