Re: Call for Adoption: Encrypted Content Encoding

On 1/12/2015 10:05 p.m., Walter H. wrote:
> On 01.12.2015 09:33, Julian Reschke wrote:
>> On 2015-12-01 09:22, Walter H. wrote:
>>> On 01.12.2015 01:19, Roland Zink wrote:
>>>>> On 01.12.2015 at 00:32, Jim Manico wrote:
>>>>> > TLS is also end-to-end
>>>>>
>>>>> No way is that true. TLS per the standard can be MITM'ed by proxies
>>>>> in ways that subvert both certificate pinning and HSTS, without
>>>>> informing the user in any browser today. I'm happy to provide
>>>>> references to this if you like.
>>>>>
>>>> The browser will build an end-to-end TLS tunnel through known proxies.
>>>> Intercepting proxies may do MITM and
>>> and exactly these intercepting proxies can't validate whether this
>>> content contains malware or not;
>>> as I said, THIS DRAFT IS NONSENSE;
>>
>> Are you promoting the concept of intercepting proxies that break up
>> HTTPS?
> not really, this concept will spread more and more as a weapon against
> the paranoia that
> everything "must" be HTTPS

Intercepting proxies are not the only parties "breaking up" HTTPS.

TLS offloading is such routine practice that it has its own defined
term, "TLS offload", to name it by. It was going on at the CDN proxy
end long before MITM proxies were forced upon ISPs by this "TLS
everywhere" craziness.

HTTPS relies on TLS. TLS is a point-to-point protocol, just like TCP.
Also just like TCP, it can (and does) have multiple hops between the
user and the origin server.
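
To make the hops concrete, here is a rough Python sketch of what a TLS
offloader does (the hostnames, ports and certificate files below are
made-up placeholders): terminate the client's TLS session on one
socket, then relay the bytes over a completely separate upstream
connection.

  # Rough sketch of a TLS offloader, one connection at a time for brevity.
  # Hop 1 is TLS from the client to this process; hop 2 is a *separate*
  # plain-TCP connection to the origin. All names are placeholders.
  import socket
  import ssl
  import threading

  LISTEN = ("0.0.0.0", 8443)          # clients speak TLS to us (hop 1)
  UPSTREAM = ("origin.example", 80)   # we speak plain HTTP upstream (hop 2)

  ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
  ctx.load_cert_chain("proxy-cert.pem", "proxy-key.pem")  # placeholder files

  def pump(src, dst):
      # Copy bytes one way until the source closes.
      while data := src.recv(4096):
          dst.sendall(data)

  def handle(client):
      upstream = socket.create_connection(UPSTREAM)
      threading.Thread(target=pump, args=(upstream, client),
                       daemon=True).start()
      pump(client, upstream)          # client -> upstream in this thread

  with socket.create_server(LISTEN) as srv:
      while True:
          raw, _ = srv.accept()
          handle(ctx.wrap_socket(raw, server_side=True))

The client's TLS session ends at the offloader; whatever happens on the
second hop is a different connection entirely.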


>>
>>>> RFC7469 says it is allowed for clients to turn off pin validation
>>>> based on some policy and still be compliant. Is this what you want to
>>>> reference?
>>>>
>>>> However, it is also compliant to not do this and to do pin validation.
>>>>
>>> how does this help prevent clients from receiving the malware that
>>> this draft will make easier to deliver?
>>
>> Privacy and the ability for third parties to inspect the contents are
>> obviously conflicting goals. Nothing new here.
>>
> and why DOES someone really want to promote something (THIS) to a
> standard, when it raises these problems?
>> One reason for the use of encrypted content in a "blind cache" is to
>> allow caching of HTTPS content without having to mount a MITM attack. 
> this draft is in connection with HTTP, not HTTPS

IMO there is no particular reason for that. When fetching these objects
from a blind cloud provider, it makes no difference whether that service
is HTTP or HTTPS. The ultra-paranoid would probably also prefer to fetch
these objects over HTTPS / TLS for the added header protection it
affords while going over the network.

Which I hope clarifies the distinction:

* HTTPS is about protecting against untrusted network.

* This draft is about protecting against untrusted origin server, or
unauthorized third-party download.

Three very different attack vectors.


As has already been mentioned in this thread, encrypted .rar/.zip or
other formats are possible (and already actively used) for exactly this
style of server-based attack mitigation. This draft is just formalizing
an existing practice in a format more suitable for HTTP agents to use.
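
For a rough feel of the mechanics (this is NOT the draft's actual
record format, which specifies its own framing and key derivation),
here is a Python sketch using plain AES-GCM from the 'cryptography'
package. It shows why a blind cache can store and serve the object
without ever being able to read it:

  # Illustrative only, not the draft's wire format. A blind cache can
  # store the response without being able to read it.
  import os
  from cryptography.hazmat.primitives.ciphers.aead import AESGCM

  key = AESGCM.generate_key(bit_length=128)  # shared out-of-band, never sent
  aesgcm = AESGCM(key)

  def encrypt_body(body: bytes) -> bytes:
      nonce = os.urandom(12)
      return nonce + aesgcm.encrypt(nonce, body, None)

  def decrypt_body(blob: bytes) -> bytes:
      nonce, ciphertext = blob[:12], blob[12:]
      return aesgcm.decrypt(nonce, ciphertext, None)

  stored = encrypt_body(b"the real payload")  # all the cache ever sees
  assert decrypt_body(stored) == b"the real payload"

A cache that tampers with the object gets caught as well, since AES-GCM
authenticates the ciphertext and decryption simply fails.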


>> And I believe that's a very good reason.
>>
> not really,
> dynamic content is not cacheable

Not true. Some Squid installations at ISPs have reported caching rates
of up to 80%, even though current web traffic consists of more than
50% dynamic content.

The biggest problem these days is recent browser releases sending
Cache-Control:max-age=0 unconditionally, which adds revalidation
latency even when perfectly usable objects are already available.
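
To illustrate the cost, here is a small Python sketch of the
conditional round trip such a request forces (the URL and ETag below
are placeholders). Even when the answer is a bodyless 304, the full
network round trip of latency is still paid:

  # Sketch of the revalidation forced by Cache-Control:max-age=0.
  import urllib.error
  import urllib.request

  req = urllib.request.Request(
      "http://example.com/",
      headers={
          "Cache-Control": "max-age=0",          # what the browsers send
          "If-None-Match": '"etag-from-cache"',  # validator for cached copy
      },
  )
  try:
      resp = urllib.request.urlopen(req)
      print(resp.status)   # 200: the full body crossed the wire again
  except urllib.error.HTTPError as err:
      print(err.code)      # 304: the cached copy was fine all along, yet
                           # a full round trip of latency was still spent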


> and static content - at these times
> rarely to be found - needn't be HTTPS ...
> 

Amos

Received on Tuesday, 1 December 2015 13:31:29 UTC