- From: Stephen Farrell <stephen.farrell@cs.tcd.ie>
- Date: Tue, 19 Feb 2008 00:06:51 +0000
- To: "Yngve N. Pettersen (Developer Opera Software ASA)" <yngve@opera.com>
- CC: Serge Egelman <egelman@cs.cmu.edu>, "'W3C WSC Public'" <public-wsc-wg@w3.org>
I suspect (but don't know) that there may be issues with changing URLs,
and/or corporate policies. Were successful OCSP interactions very common
for almost all combinations of browser, CA and connectivity, I'd have no
problem moving that example into the 2nd bucket, but I don't think I've
ever seen evidence to that effect. (I'd also have no problem myself if the
net effect of our REC set a somewhat higher barrier here for commercial CA
operators.) OTOH, I do notice more s/w pinging CA sites lately, so this may
be something that's changing fairly rapidly, but that may be more to do
with code signing.

For CRLs, they should definitely be in bucket #1, even though they serve
exactly the same function. I guess the same would be true of SCVP (or even
XKMS) as it spreads (if it does).

Actually, that last suggests that the buckets map to both impact and
ubiquity, and that the same technology might migrate from bucket #1 to #3
if it becomes wildly successful in terms of deployment. Suggests an
IANA-style registry rather than text in a REC, maybe? (Not sure myself.)

S.

PS: Taking a quick peek into this, I came across a Nokia support page [1]
that really shows how badly wrong we've gone with PKI and GUIs. Hitting the
"most common problem" list for devices that are made in the millions per
month is a real bummer :-(

[1] http://wiki.forum.nokia.com/index.php/S60_SW_installer_troubleshooting

Yngve N. Pettersen (Developer Opera Software ASA) wrote:
>
> We experienced several OCSP-related difficulties back in Opera 8.5+ when
> we made a failure to get a valid OCSP response a fatal error (equivalent
> to revoked), to the extent that we added a preference to disable OCSP in
> succeeding 8.x releases, and in 9.x we just downgraded the security level
> by one (to two).
>
> This problem, which was not due to an OCSP server going offline but to
> the server instead returning "unauthorized" or other errors, happened
> several times over many months, with the problems often taking weeks to
> be resolved. IIRC these episodes did lead the CAs in question to
> implement better checking of their OCSP repositories.
>
> I can't recall having seen more than one or two OCSP problems since, but
> as we now just remove the padlock, we might not get as many reports
> about such problems.
>
> On Tue, 19 Feb 2008 00:14:55 +0100, Serge Egelman <egelman@cs.cmu.edu>
> wrote:
>
>> Okay, I think I agree with you about this example. I have no idea how
>> frequently certificate status is unavailable. I would certainly put
>> SSCs in bucket one due to their prevalence, and if certificates with
>> unknown status are as prevalent, they certainly belong there too.
>>
>> serge
>>
>> -----Original Message-----
>> From: public-wsc-wg-request@w3.org [mailto:public-wsc-wg-request@w3.org]
>> On Behalf Of Stephen Farrell
>> Sent: Monday, February 18, 2008 1:38 PM
>> To: Serge Egelman
>> Cc: W3C WSC Public
>> Subject: Re: ACTION-389: Error levels?
>>
>> I think I like the 3 categories, but would put lack of cert status info
>> into bucket 1, since it's often not available.
>>
>> Would also be interested in more examples - I guess the concern would
>> be that one of the buckets might overflow, which'd devalue the
>> categorisation.
>>
>> S.
>>
>> Serge Egelman wrote:
>>> Well, the idea is to create three buckets of risks:
>>>
>>> 1) Things that *could* be bad, but we really don't have sufficient
>>> evidence.
>>>
>>> 2) Things that are *likely* bad, but we're not absolutely positive, so
>>> we can't block the page outright.
>>> More importantly, this category exists because we don't want to
>>> habituate users to the most severe warnings by showing them in
>>> situations where there are likely to be false positives (e.g.
>>> determinations made solely by heuristics, or when not enough
>>> information is known but there is sufficient information to raise
>>> concern).
>>>
>>> 3) Things that are *known* to be bad. These warnings appear only when
>>> a real threat has been identified. These warnings must not be shown
>>> when there is a chance of false positives, as this will habituate the
>>> users to these warnings and they will become useless in all other
>>> cases.
>>>
>>> Thus, I think that heuristics and a CRL which can't be located would
>>> both fall into the middle category. These are both things that should
>>> raise some concern, however we cannot be confident that something bad
>>> is going to happen.
>>>
>>> serge
>>>
>>> Stephen Farrell wrote:
>>>>
>>>> Text looks good, but I think more/other examples would help.
>>>>
>>>> In particular I'd not equate "missing CRL" with "phishing heuristic
>>>> triggered," but I guess we can discuss that sometime,
>>>>
>>>> S.
>>>>
>>>> Serge Egelman wrote:
>>>>>
>>>>> Here's the proposed text (I know it could use some work, but this is
>>>>> a first pass), though I'm not sure which section to put it in:
>>>>>
>>>>> Browser security indicators MUST fall into one of the following
>>>>> categories:
>>>>>
>>>>> 1) Notifications/Status Indicators
>>>>> a) WHAT: warnings/indicators that are displayed in the browser's
>>>>> persistent primary chrome. These indicators MUST NOT force user
>>>>> interaction (e.g. forcing the user to click a button to continue the
>>>>> primary task). They MUST be located in the browser's chrome and
>>>>> include a succinct textual description of their meaning.
>>>>> b) WHEN: the browser cannot accurately determine a security risk
>>>>> based on the current security context information available. These
>>>>> indicators SHOULD also be used for situations where the risk level
>>>>> may vary based on user preference.
>>>>>
>>>>> 2) Warning/Caution Messages
>>>>> a) WHEN: these MUST be used when the system has good reason to
>>>>> believe that the user may be at risk based on the current security
>>>>> context information, but a determination cannot positively be made
>>>>> (e.g. CRL cannot be located, OCSP server unresponsive, phishing
>>>>> heuristics triggered). These warnings SHOULD be used if the
>>>>> likelihood of danger is present, but cannot be confirmed.
>>>>> b) WHAT: these warnings MUST be designed to interrupt the user's
>>>>> current task, such that the user must acknowledge the warning. The
>>>>> headings of these warnings MUST include the words "warning" or
>>>>> "caution," and they MUST NOT include technical jargon, or be longer
>>>>> than a dozen words. The headings of these warnings MUST be the locus
>>>>> of attention, and the warning SHOULD have an option for advanced
>>>>> users to request a detailed description of the warning condition.
>>>>> These warnings MUST provide the users with options on how to proceed
>>>>> (i.e. the warnings MUST NOT use a single option to dismiss the
>>>>> warning and continue). The options presented on these warnings MUST
>>>>> be descriptive to the point that their meanings can be understood in
>>>>> the absence of any other information contained in the warning.
>>>>> These warnings SHOULD include one recommended option, and a succinct
>>>>> text component denoting which option is recommended. In the absence
>>>>> of a recommended option, the warning MUST present the user with a
>>>>> method of finding out more information (e.g. hyperlink, secondary
>>>>> window, etc.) if the options cannot be understood.
>>>>>
>>>>> 3) Danger Messages
>>>>> a) WHAT: These warnings MUST be designed such that the user's task
>>>>> is interrupted, and the user is unable to view or interact with the
>>>>> destination website. The headings of these warnings MUST include the
>>>>> word "danger," and they MUST NOT include technical jargon, or be
>>>>> longer than a dozen words. The heading MUST be the locus of
>>>>> attention, and the warning SHOULD have an option for advanced users
>>>>> to request a detailed description of the warning condition.
>>>>> b) WHEN: these MUST be used when there is a positively identified
>>>>> danger to the user (i.e. not merely risk). Examples include websites
>>>>> or software downloads that have been blacklisted (i.e. positively
>>>>> identified), revoked certificates, etc.
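A minimal sketch, assuming hypothetical outcome names, of how the three
categories proposed above might be applied to the revocation cases
discussed in this thread (revoked certificate, unreachable or erroring
OCSP/CRL source, unknown status); the mapping itself is illustrative, not
something the proposal or the thread mandates:

from enum import Enum

class Indicator(Enum):
    NOTIFICATION = 1   # passive chrome indicator, no forced interaction
    WARNING = 2        # interrupts the task and offers options
    DANGER = 3         # blocks interaction with the destination site

def classify_revocation_outcome(outcome: str) -> Indicator:
    """Map a (hypothetical) revocation-check outcome to an indicator category.

    Follows the discussion above: a positively revoked certificate is a
    known danger (bucket 3); an unreachable or erroring OCSP/CRL source is
    only a likely risk (bucket 2, or arguably bucket 1 if such failures
    turn out to be commonplace); anything weaker gets a passive
    notification (bucket 1).
    """
    if outcome == "revoked":
        return Indicator.DANGER        # positively identified threat
    if outcome in ("ocsp_error", "ocsp_unreachable", "crl_unavailable"):
        return Indicator.WARNING       # risk suspected, not confirmed
    return Indicator.NOTIFICATION      # insufficient evidence to warn harder

# e.g. the Opera 9.x behaviour described above would route an OCSP
# "unauthorized" error here rather than treating it as fatal:
print(classify_revocation_outcome("ocsp_error"))   # Indicator.WARNING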
Received on Tuesday, 19 February 2008 00:07:36 UTC