Re: How HTTP 2.0 mandatory security will actually reduce my personal security

I don't have a problem with not speccing it.

I'd just turn it off for any servers that I could influence since I don't
want to lose users.

We can frown on guidance like this, but, well, we have broken the internet
or allowed it to become broken, and we need solutions today and tomorrow
that will work reliably.

It really all comes down to the issue of providing a service to users
reliably.

I am not interested in attempting to work with implicit proxies; it is too
difficult.
I AM interested, however, in working with explicit proxies, and have even
submitted and contributed to a couple of drafts on the topic.

-=R
On Nov 15, 2013 7:49 AM, "Michael Sweet" <msweet@apple.com> wrote:

> Roberto,
>
> On Nov 15, 2013, at 3:00 AM, Roberto Peon <grmocg@gmail.com> wrote:
>
> There is explicitly an option for unencrypted HTTP/2, but not over the
> "open" internet, since that is known/proven to be unreliable.
>
>
> I have a real problem with trying to spec “don’t do this over the open
> Internet”.
>
> And IIRC the IETF frowns on protocols that special-case local usage. The
> assumption that Internet access to http:// URIs usually occurs through
> proxies while https:// URIs are “safe” has already been shown to be
> invalid, and I suspect that organizations that deploy proxies may not limit
> their use to “outside” traffic, so local access may indeed have the same
> issues as remote access, for both http:// and https:// URIs.
>
> The challenge, then, is how we can work with the HTTP/1.1 rules for
> proxies instead of trying to work around them.  Otherwise we really do have
> a completely new protocol and need to treat it as such (new port number,
> http2:// URIs, etc.)
>
>
> And in my personal opinion, HTTP is a poor mechanism for cached content:
> it allows for a very limited distribution model and (amongst other things)
> doesn't adequately differentiate between resources that should be public,
> but verifiably unmodified, and private resources.
> I wish that we had a different protocol (and I've been talking about this
> for a while actually) for public, cacheable content. I've proposed such in
> the past, but don't have the bandwidth to work on it until HTTP/2 is done.
> The basics of the (now old, but still unimplemented) idea are, however,
> that everything is a subset of peer-to-peer, and thus most of the
> innovation that remains to be done is in the policy for how the data is to
> be distributed in a potentially peer-to-peer network.
> As an example, imagine a policy which could indicate that, for any
> arbitrary byte range of a resource, the client should first try the origin,
> then the local ISP supernode, and, if those fail, peers.
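> A rough sketch of such a policy in Python might look like the following
> (every host name here is hypothetical and only illustrates the ordering;
> this is not a proposal for an actual API):
>
>     import urllib.request
>     import urllib.error
>
>     def fetch_range(path, byte_range, sources):
>         """Try each source in policy order; return the first body that succeeds."""
>         start, end = byte_range
>         for base in sources:
>             req = urllib.request.Request(
>                 base + path, headers={"Range": "bytes=%d-%d" % (start, end)})
>             try:
>                 with urllib.request.urlopen(req, timeout=5) as resp:
>                     if resp.status in (200, 206):
>                         # a real protocol would also verify the bytes are unmodified here
>                         return resp.read()
>             except (urllib.error.URLError, OSError):
>                 continue  # fall through to the next source in the policy
>         raise IOError("no source could satisfy the request")
>
>     # Policy order: origin first, then the local ISP supernode, then peers.
>     sources = ["https://origin.example.com",
>                "https://supernode.isp.example.net",
>                "https://peer1.example.org"]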
> In this imagined world/protocol, the SlashDot effect would be a thing of
> the past, even for those sites not using a CDN, since the more users there
> were of a site, the more peers there would be for the content.
> ... but this is waaaaay off topic now.
>
> -=R
>
>
> On Thu, Nov 14, 2013 at 11:48 PM, Bruce Perens <bruce@perens.com> wrote:
>
>>  On 11/14/2013 11:32 PM, Roberto Peon wrote:
>>
>> For 1,2: How is this not orthogonal to the rest of the discussion?
>> For 3: I'm assuming you mean because the data is encrypted. You can MITM
>> this.
>>
>>  Just to be sure we're all on the same page here (because it seems that
>> we're not).
>>   As I understand it, the proposal is:
>>     For web activity on the "open internet", if the scheme is https,
>> attempt to use http/2 over an encrypted, authenticated channel.
>>      For web activity on the "open internet", if the scheme is http, use
>> http/1 over an unencrypted, plaintext channel.
>>     For activity on a private network: use any combination of
>> {authenticated, unauthenticated}{encrypted, unencrypted}{http2,http1} you
>> desire.
>>
>>  Is there an objection to this?
>>
>>  Yes. It's stating that the only possible use of unencrypted http must
>> be via http/1.1. It's either assuming that we must support http/1.1
>> forever, in which case that perpetual support should be part of the http/2
>> specification; or it's assuming that http/1.1 will wither and that this
>> will eventually force everyone to always use encrypted traffic.
>>
>> Neither of these seems optimal. The proposal ignores the fact that the
>> vast majority of web traffic is immutable public content for which
>> encryption serves no real purpose, and that the transmission of this
>> content may benefit from innovations in http/2 other than encryption.
>>
>> By the way, when I really feel the need to encrypt something, I use the
>> one-time pad. Depending on anything else is optimistic.
>>
>
>
> _______________________________________________________________
> Michael Sweet, Senior Printing System Engineer, PWG Chair
>
>

Received on Friday, 15 November 2013 17:18:34 UTC