Re: TICKET 259: 'treat as invalid' not defined

On Sun, Nov 7, 2010 at 3:27 PM, Adrien de Croy <adrien@qbik.com> wrote:
> On 8/11/2010 12:17 p.m., Adam Barth wrote:
>> On Sun, Nov 7, 2010 at 3:02 PM, Adrien de Croy<adrien@qbik.com>  wrote:
>>> On 8/11/2010 10:59 a.m., Adam Barth wrote:
>>>> On Sun, Nov 7, 2010 at 12:50 PM, Julian Reschke<julian.reschke@gmx.de>
>>>>  wrote:
>>>>> On 07.11.2010 21:32, Adam Barth wrote:
>>>>>>> On 02.11.2010 03:56, Adam Barth wrote:
>>>>>>>> ...
>>>>>>>> The browser use case proceeds from the following premises.
>>>>>>>>
>>>>>>>> 1) Many servers send invalid messages to user agents.
>>>>>>>
>>>>>>> No data was provided that this is indeed the case for C-D.
>>>>>>
>>>>>> No data was provided that this isn't the case.  Given that we see
>>>>>> invalid messages everywhere else, common sense tells us that we will
>>>>>> see invalid messages here too.
>>>>>
>>>>> In the absence of data telling me something else, I'll assume that
>>>>> servers do sane things. I may be wrong. I just don't see why a
>>>>> server would *ever* send two disposition types, given it's a waste
>>>>> of bytes and that it doesn't cause the same thing to happen in
>>>>> different UAs.
>>>>
>>>> Why would a server ever send two Content-Type headers?  Why would an
>>>> HTML document ever mis-nest tags?  Why would a server ever send
>>>> nonsense characters instead of an HTTP header?  All these things
>>>> happen in practice because not everyone who operates servers is
>>>> perfect.
>>>
>>> I think we are getting way off base here.  There are probably at least
>>> trillions of ways in which UAs and servers can send non-compliant
>>> messages.
>>
>> Thankfully, defining how to handle all those trillions of ways isn't
>> actually that difficult.
>>
>>> I wouldn't propose modifying HTTP to cover those cases.  It doesn't
>>> make sense IMO for HTTP to define a response to non-compliant
>>> behaviour other than rejecting it.
>>
>> Rejecting invalid messages is not implementable by browser user agents.
>> If we'd like our specs to be implemented, that's not an option.
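
To make "defining how to handle it" concrete, here is a minimal sketch of
one possible rule for a Content-Disposition value that invalidly carries
two disposition types.  The specific choices below (the first type wins,
an extra bare type is ignored, the first occurrence of a repeated
parameter wins) are assumptions picked for the illustration, not rules
taken from any spec draft or from what any particular browser actually
ships.

# Sketch only: one *possible* defined handling for an invalid
# Content-Disposition value that carries two disposition types.  The
# error rules here are assumptions for illustration, not rules from
# any spec or shipping browser.
def parse_content_disposition(value):
    """Return (disposition_type, params) under a fixed error rule."""
    parts = [p.strip() for p in value.split(";") if p.strip()]
    if not parts:
        return ("inline", {})       # assumed fallback for an empty value
    disposition = parts[0].lower()  # assume the first segment is the type
    params = {}
    for part in parts[1:]:
        if "=" not in part:
            continue                # an extra bare type: ignored, not fatal
        name, _, raw = part.partition("=")
        # first occurrence of a repeated parameter wins (assumed rule)
        params.setdefault(name.strip().lower(), raw.strip().strip('"'))
    return (disposition, params)

# An invalid value still yields a deterministic answer, e.g.:
#   parse_content_disposition('attachment; inline; filename="a.html"')
#   -> ('attachment', {'filename': 'a.html'})
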
>
> The day the first major browser chooses to do it, one of two things will
> happen.  If they do it properly, that is.  If they don't, they are hosed.
>
> If they post a big red page and say
>
> "this website responded in an improper manner, which may have security
> implications.  We are protecting you from this..." or words to that effect,
> then who will the user blame?  The browser or the website?

Honestly, the browser.  We've tried this experiment with mixed content
(HTTP resources inside HTTPS pages).  That's pretty much exactly what
the messaging is in the user interface.  We get feedback from users
that they've switched to other browsers that don't show them these
scary warnings.

Crying wolf doesn't help.  What does help is to show warnings when
there's a real risk of something bad happening (e.g., we've detected
that this web site is actively exploiting users that visit it).

> Then users are more likely to push for the website to be fixed, and in
> any case the website wants to be seen, and not be seen as a potential
> security risk.
>  In fact Google could do this unilaterally with its existing infrastructure
> (the one that tells us a page is unsafe to visit).  Then the other browser
> vendors can breathe a sigh of relief and undo all the hacks they were forced
> (so they believed) to put in to compete with IE.

I even tried calling Chase customer service about the mixed content
warnings on the login page of their banking web site.  They told me to
use Firefox because the warning didn't show in that browser (this was
a case where Chrome's mixed content detector was more accurate than
Firefox's in detecting the vulnerability).

In any case, you should feel free to bring a browser to market that
makes different decisions here.  I'm happy for the market to prove me
wrong, but so far every successful entrant into the market has used
the same strategy here.

Adam

Received on Sunday, 7 November 2010 23:44:33 UTC