Re: TLS with integrity protection & no encryption (ACTION-209 & ACTION-260)

In Opera these methods are disabled by default. When they are enabled
and chosen by the server, a warning dialog is displayed. These methods
are considered Level 1, and therefore do not get a padlock.

These methods will be most useful for particular (possibly automated)
applications where we know what kind of data will be sent and received,
and that the data are not sensitive, but where we still want to
preserve integrity.
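
For reference, this mode corresponds to the cipher suites with NULL
encryption (e.g. TLS_RSA_WITH_NULL_SHA, or the pre-shared-key variants
in RFC 4785). Below is a rough Python sketch of a test server offering
only those suites; the certificate paths are placeholders, and whether
the handshake succeeds at all depends on the OpenSSL build, since many
builds exclude or reject the NULL suites (and TLS 1.3 does not define
any):

    import socket, ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("server.crt", "server.key")   # placeholder paths
    ctx.maximum_version = ssl.TLSVersion.TLSv1_2      # no NULL suites in TLS 1.3
    ctx.set_ciphers("eNULL")                          # integrity only, no encryption

    with socket.create_server(("", 8443)) as srv:
        conn, _ = srv.accept()
        with ctx.wrap_socket(conn, server_side=True) as tls:
            print("negotiated:", tls.cipher())        # e.g. ('NULL-SHA256', ...)

A client has to enable the same suites explicitly, which is the step
that is disabled by default in Opera.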


On Wed, 25 Jul 2007 15:34:53 +0200, Luis Barriga (KI/EAB)  
<luis.barriga@ericsson.com> wrote:

>
> TLS supports server authentication with integrity protection and no
> confidentiality (NULL encryption). There is even RFC4785 that specifies
> this mode for the pre-shared secret version of TLS (RFC4279).
>
> Would such a page be considered "secure"? Which SCI should be displayed
> for this case? A transparent padlock? :-)  (https is there though)
>
> I don't have any statistics on how widely such an (esoteric?) mode is
> used on the Internet, but it should be.
>
> The reason is that most users and HTTP web sites *implicitly assume
> that this is the case today*, i.e. they assume that whenever users
> access a certain site, they are reaching the right site and the
> retrieved information has not been manipulated on the way to their
> browser. But we know this is not always true and may become worse.
> Confidentiality is not an issue.
>
> This (esoteric) mode should be (and may become) the default for HTTP
> sites that care about information integrity.
>
> Luis
>
> -----Original Message-----
> From: public-wsc-wg-request@w3.org [mailto:public-wsc-wg-request@w3.org]
> On Behalf Of Yngve Nysaeter Pettersen
> Sent: 7 June 2007 01:32
> To: Mary Ellen Zurko
> Cc: public-wsc-wg@w3.org
> Subject: Re: ACTION-209: What is a secure page?
>
>
> Hello Mez,
>
> Thanks for the comments.
>
> On Wed, 06 Jun 2007 15:56:09 +0200, Mary Ellen Zurko
> <Mary_Ellen_Zurko@notesdev.ibm.com> wrote:
>
>> "Some clients give a warning (which can be disabled) when the
>> displayed document changes from an unsecure to a secure mode, or vice
> versa, "
>>
>> It seems patently obvious to me that a "warning" is the last thing you
>
>> want when things are getting more secure. I can't figure out how this
>> convention ever got started (except I can, it was the easy thing to do
>
>> quickly with little thought).
>
> "Warning" may be the wrong word in this case, "information dialog" may
> be better.
>
> Whatever you call it, most users probably disable it (and the
> corresponding "leaving secure mode" dialog) the first or second time
> they see it, just like the unsecure form submit dialog.
>
> If I recall correctly this dialog was present at least 10 years ago,
> probably earlier. A possible reason may have been to highlight
> (advertise) secure connections at a time when they were new. We would
> probably have to ask some of the people involved with the major browser
> projects at the time to find out exactly why.
>
>> "All login forms to a secure service must be served from a secure
>> server, and must not not be included inside a page containing unsecure
> content. "
>>
>> For understandability and conformance, you'll need to use differenct
>> words for "secure" and "unsecure", and/or define those words. The
>> definitions you would propose may be implicit in your lead in
>> material; call them out explicitly. I need them for several other
>> proposals you make.
>
> I am open to suggestions for alternative wordings.
>
> Essentially (in a web context), by "secure service" I mean a (presumably
> sensitive) service hosted on a ("secure") HTTPS server (personally, I
> also include encryption strength in the consideration). "Unsecure
> content/service" is content or a service served by a non-HTTPS,
> HTTP-only server. In the particular quoted context the requirement is
> intended to head off the use of an HTTPS iframe to hold the login form
> in an HTTP page in order to say "But ... the login form IS hosted on a
> _secure_ server" (how would the user be able to tell?).
>
>
>> "If a service require secure login,"
>>
>> I'm not a fan of insecure logins. You seem to be allowing their
>> existance.
>> I think that's a bad idea.
>
> It is more an acceptance of reality; there are many sites with login,
> such as forums, that do not have an HTTPS server.
>
>> "then all transactions/presentations based on those credentials must
>> be protected by the same level of security. "
>
> My point is that if a site is using SSL/TLS for login to ensure
> confidentiality etc., then the authorization credentials must not be
> made available to the unsecure parts of the service, and all parts of
> transactions which require those credentials also have to be performed
> over SSL/TLS.
>
> Examples of such services are online bookstores, like Amazon, which
> knows who you are even in the unencrypted parts of the site. I haven't
> checked, but I certainly hope they are not sending the one-click
> authorization cookies to the unencrypted servers.
>
>> You need to define levels of security. I need that for some of your
>> other proposals as well.
>> I'm concerned that that might be out of our scope, but not certain. I
>> could see some Good Practice recommendations around how difficult it
>> is for users to understand/track varying levels of security and the
>> importance of consistency within some scope/context.
>
> My primary point here is to not jump back and forth between encrypted
> and unencrypted parts of the site just to save a few CPU cycles on the
> server.
>
> As for scope, I would consider this a recommended (and obvious)
> authoring technique.
>
>> "Cookies on unsecure connections are vulnerable to interception, and
>> can be used for replay attacks even if they were set by a secure
>> server, and servers should not set credential cookies from secure
>> servers that can be sent unencrypted. "
>>
>> I personally wholeheartedly agree, but am almost sure this is out of
>> scope. I encourage you to put anything that seems to be out of scope
>> in http://www.w3.org/2006/WSC/wiki/FuturesAndOnePluses
>
> It is meant partly as an explanation of why jumping back and forth
> between secure and unsecure servers is not a good idea (information
> leakage). The specific point about how to set the cookies is probably
> out of scope, though.
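>
> As a minimal illustration (the cookie name, value and path are only
> placeholders), a credential cookie can be marked so that it is never
> sent unencrypted and never leaves the secure part of the site, here
> using Python's standard http.cookies module to build the header:
>
>     from http.cookies import SimpleCookie
>
>     cookie = SimpleCookie()
>     cookie["session"] = "opaque-session-token"   # placeholder value
>     cookie["session"]["secure"] = True           # never sent over plain HTTP
>     cookie["session"]["path"] = "/account"       # keep it off the public pages
>
>     print(cookie.output())
>     # Set-Cookie: session=opaque-session-token; Path=/account; Secure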
>
>> "Change from and unsecure to secure parts of a service should be done
>> by direct links, and not redirects. If unsecure->secure redirects are
>> needed then the redirect should be immediate, and not multistep. "
>>
>> I can't quite tell how this is in scope. How does it relate to SCIs
>> and helping users with trust decisions?
>
> In the initial implementation of Opera's padlock, if even the first
> load of a redirect sequence was from an unsecure server, we displayed
> an open padlock.
>
> Presently, the extensive use of "Sign in" buttons that do an
> unencrypted HTTP 302 redirect to HTTPS has forced us to permit the
> security level to be reset to Level 3 if the user initiated the action.
>
> My point is that for a page to be considered secure, all elements of
> the page, even the initial redirects, should be secure.
>
> Or to put it in terms of RFC 2965 (Cookies) sec 3.3.6
>
>     A transaction is
>     verifiable if the user, or a user-designated agent, has the option to
>     review the request-URI prior to its use in the transaction.  A
>     transaction is unverifiable if the user does not have that option.
>     Unverifiable transactions typically arise when a user agent
>     automatically requests inlined or embedded entities or when it
>     resolves redirection (3xx) responses from an origin server.
>     Typically the origin transaction, the transaction that the user
>     initiates, is verifiable, and that transaction may directly or
>     indirectly induce the user agent to make unverifiable transactions.
>
> IMO the HTTP->HTTPS transition caused by a redirect is an unverifiable
> transaction. The user cannot inspect the front page URL and find that
> it goes to a secure server, and therefore does not know where he will
> end up.
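>
> A small sketch of the problem, with a hypothetical "Sign in" URL: both
> the request and the 302 answer below travel unencrypted, so the
> Location target can be rewritten in transit before any TLS is involved.
>
>     import http.client
>
>     conn = http.client.HTTPConnection("www.example.com")   # hypothetical site
>     conn.request("GET", "/signin")                          # the "Sign in" button
>     resp = conn.getresponse()                               # clear-text 302
>     print(resp.status, resp.getheader("Location"))
>     # e.g.: 302 https://secure.example.com/login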
>
>> "Do not POST passwords from an unsecure page (even if the form is in a
>
>> "secure" frame) to a secure server."
>>
>> This does seem out of scope. In fact, it seems like the TAG password
>> finding that has stalled:
>> http://www.w3.org/2001/tag/doc/passwordsInTheClear-52.html
>
> Authoring techniques are part of the charter (in fact, secure login
> forms are mentioned), although I am not sure active methods to enforce
> them are.
>
> In this case we are talking about examples like the Chase bank <URL:
> http://www.chase.com/ > having a login form for their netbanking on
> their unsecure homepage (with a padlock on top).
>
> In this particular use case no client currently displays any SCI that
> indicates, before or after the submission, that the action is unsecure.
>
> One possible way the SCI could be used in this case (if we do not
> mandate client-side blocking) could be to not display a padlock for the
> server, or any servers it references, at any time in that run of the
> client. Using cookie tainting it might even be possible to make it
> persistent (Opera has a similar feature that marks all apparently
> login-protected pages as such, deletes them on exit, and does not
> register them in the history).
>
> A problem with this type of form is that the sites are saying 1)
> "everybody is doing it" and 2) "the credentials are sent encrypted",
> never stopping to think that the form could have been infected with a
> trojan en route to the user and that the password could have been
> stolen before the user hit submit.
>
> I prefer a full block of such forms, because that reduces the chance
> that a site will say "Ignore the missing padlock" or put up its own,
> and warnings are of course just irritating speedbumps. A block would,
> however, require a coordinated launch effort from the vendors to have
> effect.
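>
> Purely as an illustration of the kind of rule I have in mind (the
> function and field names are made up, and no current client behaves
> this way), such a block could be expressed as:
>
>     def allow_form_submission(page_scheme, form_fields):
>         """Hypothetical user-agent policy: refuse to submit any form
>         containing a password field unless the page that delivered the
>         form was itself fetched over HTTPS."""
>         has_password = any(f.get("type") == "password" for f in form_fields)
>         return page_scheme == "https" or not has_password
>
>     # A password form delivered over plain HTTP, posting to HTTPS:
>     print(allow_form_submission("http", [{"type": "password"}]))   # False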
>
>> "Should not display a padlock if (at least) one of the resources
>> required user interaction to accept the certificate of the server "
>>
>> I disagree that self-signed certificates are inherently less secure
>> than CA-generated ones. My enterprise runs on them. I do continually
>> find it frustrating that there's not some administrative way to push
>> down certificates, so that this use case can be differentiated from
>> the "random prompt and accept" use case. I would rather think through
>> recommendations around the user trust decision to accept a
>> certificate. It clearly violates safe staging the way it is presented
>> today (what basis does the user have to accept such a certificate?).
>> Why not use the certificate for the encryption but not the
>> authentication (until the user has some reason to make an
>> authentication decision)?
>
> Self-signed certificates are just part of the area this is covering,
> and self-signed certificates can be installed as roots (even though I
> would want to move self-signed server certificates into a non-CA
> section).
>
> Other problems are:
>
>    - Missing CA certificates (roots excepted, those can be installed)
>    - Mismatch of hostname
>    - Expired certificates
>    - Weak encryption
>
> These are the primary target for this particular proposal.
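>
> As a rough illustration (the host name is only a placeholder, and this
> is not how any particular client implements it), most of these problems
> surface as certificate verification errors when a client validates the
> connection:
>
>     import socket, ssl
>
>     ctx = ssl.create_default_context()   # CA verification and hostname check on
>     try:
>         with socket.create_connection(("www.example.com", 443)) as sock:
>             with ctx.wrap_socket(sock, server_hostname="www.example.com") as tls:
>                 print("certificate chain verified:", tls.version())
>     except ssl.SSLCertVerificationError as err:
>         # unknown or missing CA, expired certificate, hostname mismatch
>         print("no padlock:", err.verify_message)
>     except ssl.SSLError as err:
>         # e.g. the handshake was refused because only weak parameters were offered
>         print("no padlock:", err)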
>
>
> --
> Sincerely,
> Yngve N. Pettersen
>
> ********************************************************************
> Senior Developer		             Email: yngve@opera.com
> Opera Software ASA                   http://www.opera.com/
> Phone:  +47 24 16 42 60              Fax:    +47 24 16 40 01
> ********************************************************************
>
>



-- 
Sincerely,
Yngve N. Pettersen

********************************************************************
Senior Developer                     Email: yngve@opera.com
Opera Software ASA                   http://www.opera.com/
Phone:  +47 24 16 42 60              Fax:    +47 24 16 40 01
********************************************************************

Received on Wednesday, 25 July 2007 13:59:31 UTC