Cache Control - HTTP/HTTPS - Semi-private encrypted proxy caches

Hi,

The one thing to keep in mind is that, eventually, one would like to restore
the ability to have intermediate caches again, as companies and ISPs did in
the plain-HTTP days, instead of relying only on fully integrated edge caches
controlled by a central hosting body.

Back in the old days, Cache-Control: public meant that an intermediate proxy
could cache content. With HTTPS, however, the content is encrypted, and, as
far as I recall from the last time I ran such proxies, the proxy would have
to act as a man in the middle and issue a new SSL certificate for the website
in order to sit in the middle of the connection. This is also one of the
reasons that businesses have to fall back on DNS queries and matching of IP
traffic to categorise traffic, and why they can no longer block visited HTTPS
sites that simply.

It would be nice if a trusted, registered group of proxy services could be
set up, whereby
each proxy service is authorized by a CA and given signed approval for its
SSL communication by that authority.
When communicating through such a proxy, the traffic could simply be passed
through, because it is already encrypted via a direct negotiation with the
server, in which case the proxy can't read anything.
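
As a rough sketch of the trust check I have in mind (Python; the CA bundle
file name and host are made-up assumptions, not an existing scheme), a client
or downstream proxy would only talk to a cache whose certificate chains back
to that dedicated proxy authority:

# Sketch only: the bundle file and the host name are hypothetical. The point
# is that a registered proxy cache is trusted only if its certificate was
# issued by the dedicated proxy authority.
import socket
import ssl

PROXY_AUTHORITY_BUNDLE = "proxy-authority-ca.pem"  # hypothetical CA bundle

def connect_to_registered_proxy(host: str, port: int = 443) -> ssl.SSLSocket:
    # Trust only certificates issued by the proxy authority, nothing else.
    ctx = ssl.create_default_context(cafile=PROXY_AUTHORITY_BUNDLE)
    raw = socket.create_connection((host, port))
    return ctx.wrap_socket(raw, server_hostname=host)  # handshake verifies the chain

# conn = connect_to_registered_proxy("cache1.example-proxy.net")  # hypothetical host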

Alternatively, it would be nice if certain parts of the headers could be
encrypted with a key negotiated with the proxy itself (a common
proxy-negotiated SSL session), so that the proxy can decode them and cache
the contents.

For some streams one may want the response to be cacheable. In that case the
response headers would be encrypted with the proxy-negotiated key, while the
body could carry a privately encrypted version whose decryption key still has
to be retrieved directly from the server over the client's own private SSL
session.
This means streams can be publicly cacheable again, while the body can only
be decrypted selectively. The result is a semi-private/public proxy cache,
where, for example, only a subset of subscription-based clients could decrypt
the content.
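
To make the split concrete, roughly the following is what I mean (a sketch
only, using symmetric Fernet keys from the Python cryptography library as
stand-ins for the two negotiated SSL sessions, and made-up header and body
content):

# Sketch of the split encryption: the headers use a key the proxy also holds,
# the body a key only the origin and the client share. Fernet keys stand in
# for what would really come out of the two TLS negotiations.
from cryptography.fernet import Fernet

proxy_key = Fernet.generate_key()   # negotiated with the proxy: cache can read
origin_key = Fernet.generate_key()  # negotiated with the origin only: cache cannot

proxy_box = Fernet(proxy_key)
origin_box = Fernet(origin_key)

# The origin builds one response with two differently protected sections.
cacheable_headers = proxy_box.encrypt(
    b"Cache-Control: public, max-age=3600\nContent-Type: video/mp4")
private_body = origin_box.encrypt(b"<video segment bytes>")

# The proxy can read and act on the headers...
print(proxy_box.decrypt(cacheable_headers).decode())
# ...but can only store the body opaquely; decrypting it needs the key the
# client fetches from the origin over its own private session.
plaintext_body = Fernet(origin_key).decrypt(private_body)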

In the case where semi-private encryption is not used, the content would be
stored on the proxy service in encrypted form, with a general decryption key
that can be retrieved from the origin server, together with SSL verification,
for the public cache.

The semi-private cache would require that an identifier (a JWT) be placed in
the response, uniquely identifying that type of response, so that on a later
request a matching client can decode it.
The client would have had to obtain authorization and include a token in the
request, which the proxy would then match against the same token in the
above-mentioned response (JWT).
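
Roughly, the matching step at the proxy could look like this (a sketch only,
using the PyJWT library; the claim name "cid", the shared secret and the way
the token is stored with the cached entry are assumptions, not a defined
format):

# Sketch of the request/response token match at the proxy. In practice the
# proxy would hold only a public verification key (e.g. RS256), not the
# origin's signing secret as in this HS256 toy example.
import time
import jwt  # PyJWT

ORIGIN_SIGNING_KEY = "origin-shared-secret"  # assumed shared/verification key

# What the origin attached to the response when the proxy first cached it:
cached_response_token = jwt.encode(
    {"cid": "episode-42", "exp": int(time.time()) + 3600},
    ORIGIN_SIGNING_KEY, algorithm="HS256")

def proxy_may_serve(request_token: str) -> bool:
    """Serve from cache only if the request token matches the cached one."""
    try:
        req = jwt.decode(request_token, ORIGIN_SIGNING_KEY, algorithms=["HS256"])
        resp = jwt.decode(cached_response_token, ORIGIN_SIGNING_KEY,
                          algorithms=["HS256"])
    except jwt.InvalidTokenError:
        return False  # bad signature or expired: don't waste the bandwidth
    return req.get("cid") == resp.get("cid")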

Obviously, all caching times can be controlled, because the encryption and
the signing can expire,
which means a proxy attempting to cache a private stream for longer than a
certain period would be pointless.
Overriding the caching time of a stream to go beyond the expiry of its
content encryption would likewise be pointless, as the entry would
self-invalidate and just take up unnecessary space for longer than necessary.
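
In other words, the effective lifetime of a cache entry is bounded by the
earlier of the declared max-age and the expiry of the keys or tokens that
protect it; a tiny sketch (field names assumed):

# Sketch: never keep an entry past the expiry of the material protecting it,
# whatever Cache-Control says, because it self-invalidates anyway.
import time

def effective_ttl(max_age_seconds: int, key_expiry_unixtime: int) -> int:
    seconds_until_key_expires = max(0, int(key_expiry_unixtime - time.time()))
    return min(max_age_seconds, seconds_until_key_expires)

# e.g. max-age says one day, but the content key expires in an hour:
# effective_ttl(86400, int(time.time()) + 3600) -> roughly 3600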

A fail-safe would be that, if a proxy served stale content whose encryption
key had been invalidated, then
both the origin service (over the non-proxy, private encryption channel) and
the proxy would be notified.
The server then has knowledge of the mistake or attempted hack, and beyond
that the proxy can purge its cache and keep logs of items that are being
cached too long, whether by mistake or by intent, so that it knows about it.
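
A rough sketch of what that could look like on the proxy side (the
notification callback, the cache structure and the log format are purely
hypothetical):

# Sketch of the fail-safe: if the proxy notices it has served content whose
# key had already expired, it tells the origin over the private channel,
# purges the entry and logs the event for later inspection.
import logging
import time

log = logging.getLogger("proxy.cache")

def after_serving(entry_id: str, key_expiry_unixtime: int,
                  cache: dict, notify_origin) -> None:
    if time.time() > key_expiry_unixtime:
        notify_origin({"event": "stale-key-served", "entry": entry_id})
        cache.pop(entry_id, None)  # purge the offending entry
        log.warning("entry %s cached past its key expiry (mistake or intent)",
                    entry_id)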

Typically signing (JWT) could be used, so that an invalidly signed token in
the request means the proxy doesn't attempt to send the response at all: it
knows the signature is invalid, so sending it would just be wasted
bandwidth.

So really, the ability to encrypt different sections of a response with
differently negotiated keys would be great: one section with
server-negotiated encryption and the other with proxy-negotiated encryption.
If there were multiple chained proxies, they would all need to negotiate SSL
between themselves, but the key for the public-proxy content headers would
need to be common for the exchange, though not for local cache storage.

Clients (browsers) could have the option to make browsing and content fully
private or semi-private, depending on what they are doing.
Someone who still doesn't want anyone to know what they are browsing, or to
expose any resource that could reveal what they are browsing, could use, for
example, incognito mode, where no proxies are used and everything is fully
private to the servers.

However, if you are using Netflix, DSTV streaming or YouTube, then for most
videos you watch you really aren't going to care, as long as you can get the
stream in the highest quality possible from the nearest cache that has the
highest bandwidth.

One of the other ideas I had two years ago for mobile was the ability for
the last mile, or the base station, to host remote programs (lambdas) that
could process audio and video streams and dynamically re-encode them to
adapt to the bandwidth available on the last mile, while the upstream would
be notified to adjust, preventing the last mile from ending up with a
delayed, buffering feed because of a last-minute bandwidth constraint (a
rough sketch of the rendition-selection step follows below).
Typically this would be more applicable to real-time data and streaming than
to robotics and surgery, as those would require real-time, dynamic
micro-reservations of bandwidth governing how traffic is interleaved on a
pipe.
We would also still need a private QoS (quality of service), where the
server uses a sub-quality-of-service that the ISP cannot change, control or
interfere with,
which would allow buffer bloat to be handled and packets of the same stream
to be held back so that more time-critical data or events are sent first.
This sub or private quality of service
would need to be a respected public piece of data with a private checksum, to
detect and prevent overriding or rewriting.
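
Going back to the last-mile re-encoding idea, a minimal sketch of the
rendition-selection step at the base station (the rendition ladder, the
measurement and the upstream notification are all made up for illustration):

# Sketch: a base-station "lambda" picks the rendition for the next segment
# from the ladder the origin already publishes, based on the bandwidth it
# has just measured on the last mile, and tells the upstream feed to adjust.

RENDITIONS_KBPS = [400, 1200, 2500, 5000, 8000]  # available encodings, lowest first

def pick_rendition(measured_last_mile_kbps: float, safety_factor: float = 0.8) -> int:
    """Return the highest rendition that fits within the measured bandwidth."""
    budget = measured_last_mile_kbps * safety_factor
    fitting = [r for r in RENDITIONS_KBPS if r <= budget]
    return fitting[-1] if fitting else RENDITIONS_KBPS[0]

def notify_upstream(rendition_kbps: int) -> None:
    # Placeholder for telling the upstream feed which rendition the last mile
    # will serve, so it adjusts instead of letting the edge buffer and lag.
    print(f"upstream: send {rendition_kbps} kbps from the next segment")

notify_upstream(pick_rendition(3500.0))  # ~3.5 Mbps spare -> picks the 2500 kbps rendition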

Kind Regards,

Wesley Oliver

-- 
----
GitHub:https://github.com/wesleyolis
LinkedIn:https://www.linkedin.com/in/wesley-walter-anton-oliver-85466613b/
Blog/Website:https://sites.google.com/site/wiprogamming/Home
Skype: wezley_oliver
MSN messenger: wesley.olis@gmail.com
