
Re: Securing Password Inputs

From: Cameron Jones <cmhjones@gmail.com>
Date: Thu, 30 Aug 2012 18:10:29 +0100
Message-ID: <CALGrgeujWsQZi_42ht+mP2JbwgzQTzpDdDCZ0jZD93u2atz7+Q@mail.gmail.com>
To: Jason H <scorp1us@yahoo.com>
Cc: Seth Call <sethcall@gmail.com>, "Thomas A. Fine" <fine@head.cfa.harvard.edu>, "public-html-comments@w3.org" <public-html-comments@w3.org>
On Thu, Aug 30, 2012 at 4:38 PM, Jason H <scorp1us@yahoo.com> wrote:
> Thanks for the feedback.
> Reactions:
> 0. The intent is not to prevent replay attacks (as I understand in this
> context), but to prevent the obtaining of passwords from user table breaches
> due to insecure design. The fact that Sony, Yahoo, LinkedIn (big players)
> cannot get it right shows that the technology overall is failing. We are
> cursed with different password rules at every site, different lengths (some
> even implementing max lengths). People resort to software to manage
> passwords, or worse, give up on security. This is not good.
> Could you elaborate on the replay attack you are considering here?

The replay attack is based on someone listening to the HTTP stream and
using the password contained within to initiate new requests with the
same credentials. Even if the password is hashed by the browser, thus
hiding the original plaintext, the interceptor can still issue
commands via curl or some other non-browser HTTP client to run
privileged actions as the user. In this regard there is no difference
between plaintext and client-side hashing.
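To make that concrete, here is a minimal Python sketch (function and
values are hypothetical, not from any proposal text) of why the hash
replays just as well as the plaintext over unencrypted HTTP:

```python
import hashlib

def client_side_hash(domain: str, password: str) -> str:
    # Hypothetical browser-side scheme: hash "domain:password" so the
    # server never sees the original plaintext password.
    return hashlib.sha256(f"{domain}:{password}".encode()).hexdigest()

# What the browser would send over plain HTTP instead of the password:
token = client_side_hash("example.com", "hunter2")

# An eavesdropper captures exactly that string and can re-send it with
# curl or any HTTP client; the digest has simply become the credential.
captured = token
assert captured == client_side_hash("example.com", "hunter2")
```

The hash protects the original plaintext (useful against password
reuse across sites) but does nothing to stop the captured value being
replayed against the same site.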

> 1. I was relatively appalled at the summary "generally accepted as a
> non-compelling enhancement as it does not
> protect against simple replay attacks." Passwords are essentially useless
> now, and I think that is compelling enough. Fixing it at the HTTP level may
> be the right solution, but there's no reason why it can't be layered. It is
> also easier to accomplish in HTML on an application basis, rather than
> making everyone upgrade their servers AND applications.

The inability of large organizations to protect their infrastructure
and users is a concern I share. However, given that the source of
these breaches is a failure to implement security best practices, the
consensus was that it is better to promote the "best practice" than to
add another semi-secure method.

The approach you suggest is essentially an attempt not to protect the
transfer of passwords but to obscure the original password from the
site it is sent to, on the grounds that the site is regarded as
insecure. The problem is that, unfortunately, it requires action by
the site to implement, so why would a site implement something which
does not benefit it?

> 2. I did not suggest the origin as the salt, I suggested the action domain
> as the salt. They are not necessarily equal. Furthermore changing the action
> domain is a very rare occurrence, and there are mitigation strategies if it
> really is that important. (Location header, DNS alias, proxy etc.)

Using any part of the domain or action as salt is non-transferable.
You could use some crafty redirects or similar, but this is based on
the assumption that the original domain is still under the site's
control. Given that companies are acquired, merged or split, there is
no migration strategy other than requiring users to re-register with
the new domain. From a user-experience and relations perspective this
is abhorrent. It is also the attack vector used by phishing: emails
and sites make requests like this and try to get users to re-enter
their credentials.
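A short Python sketch (hypothetical names and domains) of why
domain-salted hashes cannot survive a domain change:

```python
import hashlib

def domain_salted_hash(domain: str, password: str) -> str:
    # Hypothetical scheme: the action domain is folded into the hash,
    # so the stored value is bound to that exact domain.
    return hashlib.sha256(f"{domain}:{password}".encode()).hexdigest()

# Same user, same password, but the company changes its domain:
old = domain_salted_hash("oldcorp.example", "correct horse")
new = domain_salted_hash("newcorp.example", "correct horse")

# The stored hashes no longer match. Since the site never held the
# plaintext, it cannot rewrite its user table to the new domain, so
# every user would have to re-register.
print(old != new)  # True
```

The site only ever stores `old`; after the rename it can only compute
`new`, and there is no way to convert one into the other without the
original password.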

> 3. Transition from v4 to v5 would be eased if the scheme was adopted for v5,
> then applications could back-port via javascript the same functionality.
> Having a simple, clear mechanism like hashing "domain.com:password" can
> be trivially done by a standard javascript routine included in the page, or
> a browser plugin, or the browser itself. When a v5 page is detected the
> plugin/browser becomes a no-op. If the javascript technique is used
> (application supported), then that is moot as well because the application
> page will be HTML version specific, which will then simply not include the
> JavaScript function by matter of design.

The problem I see is that the service only has the hashed password on
file, so it is incapable of validating the original. Sites could
implement their own javascript to apply the encoding for older
browsers, but this imposes more work on them for a start, and does not
cater for non-javascript clients.

> 4. The digest mechanism is an excellent approach in terms of security, but I
> fear it will be too much work to properly implement. Salting passwords is
> trivial and we can't even get that out of the big players. As I understand
> it, all authentication should be over HTTPS anyway. Implementing hashed
> passwords over SSL seems much easier to achieve than both redesigning
> protocols and servers and applications.

Yes, the only secure method is to ensure authentication happens over
SSL. Using DIGEST, while more work to set up, provides sites with
another authentication option which is relatively secure yet does not
impose the overhead of SSL. Sites using BASIC will have at least some
level of security over sending plaintext passwords in the clear.
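As a sketch of what DIGEST involves, the client proves knowledge of
the password without ever sending it. This is the original RFC 2617
calculation (the no-qop form, for brevity) in Python; the credential
values below are illustrative only:

```python
import hashlib

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

def digest_response(user, realm, password, method, uri, nonce):
    # RFC 2617, no-qop form: only this MD5 digest crosses the wire.
    # The server recomputes it from a stored HA1 and compares.
    ha1 = md5_hex(f"{user}:{realm}:{password}")  # secret part
    ha2 = md5_hex(f"{method}:{uri}")             # request part
    return md5_hex(f"{ha1}:{nonce}:{ha2}")

resp = digest_response("mufasa", "testrealm@host.com", "Circle Of Life",
                       "GET", "/dir/index.html",
                       "dcd98b7102dd2f0e8b11d0f600bfb0c093")
```

Because the server issues a fresh nonce per challenge, a captured
response is of limited use for replay, which is exactly what the
browser-side hashing scheme above cannot offer.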

Hashing and sending over SSL does not provide any additional benefit,
as the entire request is encrypted anyway, so from a transfer
perspective plaintext is fine to use.

> Thank you for your reply, and I look forward to continuing this conversation
> with you.

Cameron Jones
Received on Thursday, 30 August 2012 17:10:57 UTC
