Re: ACTION-240: TLS errors...

From: Serge Egelman <egelman@cs.cmu.edu>
Date: Tue, 10 Jul 2007 00:45:24 -0400
Message-ID: <46930EE4.60000@cs.cmu.edu>
To: Stephen Farrell <stephen.farrell@cs.tcd.ie>
CC: W3C WSC Public <public-wsc-wg@w3.org>



Stephen Farrell wrote:
> 
> 
> Serge Egelman wrote:
>> Okay, hopefully this is better articulated:
>>
>> Everyone who has said something about this issue is thinking as an IT
>> expert.  
> 
> That is true, but seems to ignore the mail that started this
> thread. If you want to discuss a different topic (e.g. how to
> properly display PKI information to end-users), then that belongs
> in a thread of its own IMO).
> 
> Let's try this way, ignoring SSC for now (since I personally
> don't know what I think is best there). If a UI never displays
> any detailed TLS error, (the supposition of this thread) then
> no PKI details will be presented as a result of such errors.
> Therefore some bad actors will be forced to use real certs,
> either by acquiring them (paying or hacking a CA) or else
> by hacking a web site. In each case, the hacker leaves more
> trace and accountability is improved due to the use of the PKI
> (unless the hacker can steal a private key from a web site, at
> which point the trace is very indirect) . That is IMO
> unequivocally a good thing.


I simply responded to a message which held the position that SSCs are
much higher risk, a point which, at the time, you seemed to disagree
with as well.

I invite you to go here: https://www.ihopethisprovesmypoint.info/
Check out the WHOIS and information on the certificate.  It came to a
whopping $22.18 for the domain and certificate.  PayPal was used.  If a
phisher did this, no doubt they would use a stolen PayPal account.  So
once again I ask, what more accountability do non-high-assurance
certificates provide?

> 
> Your second argument seems to be about risk. I think that is
> also misplaced. We here can discuss vulnerabilities and
> countermeasures, but we cannot know, other than in the most
> general terms, about actual risk. For example, the W3C site
> seems to be a popular target for attack, though it is not
> involved in commerce. The recent reports of hacks against
> Estonia were also apparently not dependent on commerce. So,
> there are too many unknowns for us to do a real risk analysis.

That's absurd; we absolutely can know about risk!  Rather than arguing
based on personal opinions, we should be looking at real-world data.
Look at how many attacks occur due to various PKI problems.  Look at
revocation data.  Look at data from user studies on which attacks are
most successful.  Sure, there's a lot of ground to cover, but if we're
going to make a recommendation that has some meaning beyond just the
opinion of a bunch of random people, we should back it up with numbers.
If we're going to spend lots of time addressing a particular risk, we
should examine how likely that risk is to occur.

> 
> Lastly, below you'll see some blow-by-blow responses. It's
> probably counterproductive to continue in that style further,
> but I wanted to try to show why I (coming from the PKI technology
> side) find it hard to deal with how you're framing the
> discussion here. I'm really not trying to beat-up on you, but
> the way you've phrased these messages makes it very hard to
> keep the discussion on track IMO.
> 
> S.
> 
>> You're not thinking like an end-user.
> 
> Fair enough. But neither are you, end users would not raise
> the revocation-related issues you have.

I wasn't the one to raise the revocation issues initially :)

> 
>> If you encounter a website that contains an expired certificate, what do
>> you do?  
> 
> Doesn't happen in the posited future. You only encounter sites where
> TLS fails or works - see the start of the thread.

I'm not talking about expired certificates under this proposal.  I'm
talking about how we currently deal with expired certificates.  Again,
before making a recommendation, we should look at real-world data.  If
we're going to decide whether TLS fails or works when you encounter an
expired certificate, we need some sort of basis for making that
decision.  My point was that it's fairly nuanced.  As experts, we
currently do not make blanket accept/reject decisions for expired
certificates.
If we did reject all of them, then I believe making TLS fail is
perfectly acceptable.  But in reality there are many situations where
that may not be desirable.  Before making a decision on this, we need to
do a cost-benefit analysis.  For instance, if there are many times when
you would want to proceed to the site, and we examine actual data and
find that a very tiny fraction of expired certificates are being used
for nefarious purposes, then we should probably recommend ignoring
expired certificates.  But again, we can't just make this decision based
on personal opinions.
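
To make the cost-benefit idea concrete, the expired-certificate decision could be expressed as an explicit policy rather than a blanket rule.  The sketch below is purely illustrative: the `expired_cert_policy` helper, its `grace_days` threshold, and the allow/warn/block outcomes are hypothetical names of mine, not anything the group has agreed on.

```python
from datetime import datetime, timedelta

# Hypothetical outcomes for handling a certificate past its notAfter date.
ALLOW, WARN, BLOCK = "allow", "warn", "block"

def expired_cert_policy(not_after: datetime, now: datetime,
                        grace_days: int = 14) -> str:
    """Illustrative policy: certs within a short grace period after
    expiry trigger a warning; anything older is blocked outright."""
    if now <= not_after:
        return ALLOW                      # still within validity period
    if now - not_after <= timedelta(days=grace_days):
        return WARN                       # recently expired: user decides
    return BLOCK                          # long expired: fail closed

# Example: a cert that expired a week ago falls inside the grace window.
print(expired_cert_policy(datetime(2007, 7, 3), datetime(2007, 7, 10)))
# -> warn
```

The point of writing it this way is that `grace_days` becomes a tunable parameter we could set from real-world data on how expired certificates are actually used, instead of hard-coding accept or reject.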

> 
>> I highly doubt you apply one decision (i.e. "continue to
>> website" or "go somewhere else") to all such situations consistently.
>> This is a very subjective decision.  As an expert, we see these warnings
>> and then take other factors into account when determining whether to
>> submit information.  
> 
> That is not where we started, which was to recommend that such
> warnings are not displayed.
> 
>> The average end-user does not do this.  The average
>> end-users will see the warning, glance at the page in the background,
>> and if the site "looks" authentic, they will ignore the warning.  The
>> average user cannot make an informed decision in this situation because
>> they do not have the domain knowledge (no pun intended).
>>
>> In such situations, you need warnings to interrupt the user's task, and
>> convince them that proceeding really isn't a good idea.  
> 
> No, "need" above is wrong. You can block sites. For some, and arguably
> all (non-SSC), errors, that is defensible and involves no warnings.

Yes, warning/error/what-have-you.  You block the site and bring up a
message saying there was a security-related problem.  We're wasting our
time by arguing about semantics here.

> 
>> However, for
>> such warnings to be effective, they need to be used rarely!  Otherwise
>> you start having to deal with habituation, and we begin training users
>> to ignore these new warnings.
> 
> Correct, but not necessarily relevant.
> 
>> Therefore, if we're going to talk about making effective SSL warning
>> messages, we need to narrow down the situations in which they'll be
>> used.  Sure, you're absolutely right, there are risks to visiting sites
>> with SSCs, I'm not disputing that at all.  However, the probability of a
>> user being exposed to these risks are miniscule (all things being
>> relative).  
> 
> I can see no basis on which to make that last statement.

So you believe that a substantial proportion of sites with SSCs are for
nefarious purposes?  With the work I've done on phishing, I've yet to
encounter a site with a self-signed certificate, though I routinely
interact with legitimate sites with SSCs.  I've also seen plenty of
phishing sites with CA-issued certificates though.  Phishers currently
have very little incentive to use an SSC: most browsers currently warn
about SSCs, so this warning may cause the user to be more security
conscious, whereas it's unlikely that a user will notice the absence
of a certificate.  Sure, this is anecdotal; we should try to find some
numbers before rushing to judgment.

> 
>> We need to put all of these threats in perspective when
>> determining when to display warning messages, because again, if we warn
>> about every conceivable risk, the warnings become useless.
> 
> Ignores the point of the thread.

Absolutely not.  If users routinely interact with sites with expired
certificates or SSCs, and they know and trust those sites, habituation
occurs.  Blocking all sites in that category will be both a nuisance and
ultimately prove ineffective when users start circumventing the block
message for every site that it appears on (because now they've lost
trust in it).

> 
>> Think of it this way: if you see a warning, as an expert, that asks you
>> to make a very subjective decision, how reasonably well should we expect
>> the average AOL user to make that same decision?  Will the average user
>> inspect the certificate?  Does the average user even know what a
>> certificate is?  If we're going to be forcing users to look at
>> certificates, we've already failed.  So this means that we should
>> automate as much as possible.  This begs the next question: which of
>> these risks are realistic enough that we want to block access?
> 
> Ignores the point of the thread.

How?  If we're going to decide to just show "TLS error" and block
access, we need to figure out the scenarios for that.  See above.

> 
>> If people want to advocate for blocking or warning about every site 
> 
> So far, it seems like you're the only one arguing for any TLS related
> warnings (wrt revocation).

Huh?  I think we're arguing over semantics again.  Block/warn, the
user's task is interrupted, and they're told a security error has
occurred.  I'm pretty sure others have mentioned a need for throwing an
error when a revoked certificate has been encountered.

> 
>> that
>> uses an SSC 
> 
> SSCs do need separate consideration.
> 
>> or an expired cert,
> 
> There are O(100) different possible reasons, in addition to
> expiry.
> 
>> I think you'll quickly find that users
>> will either get around the blocking and continue to these sites anyway,
> 
> If you mean by falling back to plaintext, sure, that's possible, but
> if that's a clincher we should just fold the tent now since it means
> that it's not possible to display primary SCI in a meaningful way.

That's not what I meant.  A well-designed warning presents the user with
a choice and a recommended course of action.  If the error isn't a big
deal, the user should be able to proceed anyway at their discretion.
Without the ability to overrule the software when circumstances present
themselves, users will use other software that does provide that ability.

> 
>> or such sites will all start getting cheap certificates (the $20
>> analogy).  
> 
> "Analogy" is just careless, low-assurance is not an analogy
> but one amongst the various possible reasonable PKI deployment
> scenarios. There is plenty of history to back up the fact that
> low-assurance is reasonable.

Reasonable for what?  Again, see the website above.  That's a
low-assurance certificate.  It doesn't matter: it could have been bought
completely anonymously, the root is in all major browsers, and it makes
the chrome do fancy things (that most users don't notice).  If we say
these are all trusted and block everything else below them, these will
increasingly be used for fraud.

Look at spam.  Spammers used to use open relays, but many of those shut
down.  So they realized there was a cost of doing business and started
paying for servers (not that many spammers don't still use open
relays and zombies, but shutting down open relays has certainly forced
many to pay).

> 
>> While the latter would imply that the warnings are working,
>> it also means that we really haven't done anything, we've just shifted
>> the problem.  We've forced that class of websites to shell out $20, but
>> effectively haven't accomplished much more.
> 
> I disagree with you there. We would have more accountability and
> would have made it easier to trace the bad guy. Secondly, your argument
> there applies to any countermeasure - "if we do <foo> then the bad
> actors will simply do <bar>, therefore we shouldn't do <foo>" and is
> not IMO good logic.

Why not?  It's an arms race.  That's always been the case.  If we come
up with recommendations based on opinions with foreseeable consequences,
we're wasting everyone's time.  Again, you haven't shown that there's
any more accountability when a CA-issued certificate is purchased by a
phisher using a stolen account.

> 
> Secondly, if you've ever dealt with spam, you'll know that not needing
> to shell out $20 is bad if it benefits a bad actor.
> 
>> This is where we seem to disagree: whether an SSC is as secure as a
>> low-grade certificate.  
> 
> "low-grade" means what? "low-assurance" has a well known non-pejorative
> meaning here, but not synonymous with "bad" as you seem to be implying.

Yay!  More semantics!  If you're arguing that they're to be used for
determining the identity and trustworthiness of a given site, then yes,
they are bad.

> 
>> There are two main differences: the domain
>> ownership verification and the ability to do revocation.  
> 
> I don't think that's all. There are high-assurance PKIs
> that don't involve DNS name verification of any sort (e.g.
> corporate PKIs). There are low assurance PKIs that support
> revocation models that are as good as much higher-assurance
> CAs (e.g. most all of them:-). There are PKIs where both are
> reasonable but where many other things are odd (e.g. bridge
> CAs).
> 
> So I see no basis on which to make your assertion.
> 
>> So far no one
>> has made a convincing argument that passing a domain ownership test to
>> purchase a low-grade certificate is sufficient to prove that a site is
>> not malicious.  
> 
> Of course not. Making such an assertion would be silly. Passing such
> a DNS test improves accountability, which is different.

But I'm talking about accountability relative to an SSC.  The
accountability is marginal, if anything.  If you have the ability to
install the SSC in the first place, you clearly have free rein over the
web server, whereas the low-assurance certificate proves control over the
domain registration.  So with the SSC you *may* not be able to alter the
DNS information to redirect users to a different site, but you still own
the original web server!
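
Incidentally, the self-signed case is trivial to detect mechanically, which is why browsers can single it out at all.  A minimal sketch, with a hypothetical `looks_self_signed` helper; the field layout loosely mimics the dictionary shape Python's `ssl.SSLSocket.getpeercert()` returns:

```python
def looks_self_signed(cert: dict) -> bool:
    """Rough heuristic: a certificate whose issuer equals its subject
    vouches for itself rather than chaining to a CA."""
    return cert.get("issuer") == cert.get("subject")

# A self-signed cert names itself as issuer...
ssc = {"subject": ((("commonName", "example.org"),),),
       "issuer":  ((("commonName", "example.org"),),)}
# ...while a CA-issued cert names a third party.
ca_issued = {"subject": ((("commonName", "example.org"),),),
             "issuer":  ((("commonName", "Some Low-Assurance CA"),),)}

print(looks_self_signed(ssc))        # True
print(looks_self_signed(ca_issued))  # False
```

That the distinction is this cheap to compute is exactly why the accountability question matters: the browser can tell the two apart, but that alone says nothing about which one the attacker controls.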

> 
>> Regarding revocation, if the owner of an SSC has reason
>> to believe that it's compromised, 
> 
> Certs are not compromised. Private keys are exposed, or else public
> keys are factored. The latter is ignorable in this context, the
> former usually undetected. (Discussing in the face of such
> inaccuracy is hard.)

The trustworthiness of an SSC is compromised if the private key is
exposed, the public key is factored, etc.  You can argue semantics all
you want, but making ad hominem attacks isn't going to win you many
arguments.

> 
>> they can just generate a new one
>> (whereas the owner of a CA-issued cert would fill out a revocation
>> request and request a new one).  
> 
> I see no point in that statement.
> 
>> Sure, a lazy person might not
>> regenerate the SSC, but then again a lazy person might not fill out the
>> revocation request either.  
> 
> I see no point in that statement.
> 
>> I'm also not sure there are any convincing
>> arguments that the private keys are going to be kept safer in either
>> scenario either (I'm not talking about the CA root key).
> 
> I see no point in that statement.

The argument that was presented was that CA-issued certificates are
superior because of revocation.

> 
>> Anyway, this is tangential to my main point, which was we need to be
>> focusing on what to display to the user.  Before whining about all the
> 
> "Whining?" That doesn't help with discussion.
> 
>> possible risks for every PKI-related scenario, you need to be asking
>> yourself: What is the likelihood of this threat? 
> 
> See above wrt risk analysis.
> 
>> If we cannot automate
>> the decision (allow/deny) and must display a dialog box to the user,
>> will grandma be able to make the right decision?
> 
> So, you start by ignoring the fact that this thread is about
> automating handling of TLS errors and the end by chastising someone
> (who?) for whining about something that grandma won't understand and
> for which no one else is arguing.
> 
> 

I'm a bit confused, I was in complete agreement with your first message
in this thread.  Honestly, I don't remember where I was going with that
previous bit, and don't particularly feel like digging through all my
old mail to remember the context. :)

serge

-- 
/*
Serge Egelman

PhD Candidate
Vice President for External Affairs, Graduate Student Assembly
Carnegie Mellon University

Legislative Concerns Chair
National Association of Graduate-Professional Students
*/
Received on Tuesday, 10 July 2007 04:45:42 GMT
