RE: Benjamin Carlyle http 2.0 expression of interest

Hi Anil,

> I didn’t understand your point 1. Are you saying that no one should be
> allowed to inspect messages if someone wants complete privacy and he has
> enabled end to end encryption?

I think it is quite nice that someone from a web services background chose
to participate in the discussion. It's been a few years since I worked with
the EAI/ESB crowd, but to answer your questions:

1. Benjamin Carlyle is not writing about human-to-web-site communication
but about machine-to-machine (M2M) communication

2. the intermediaries used in this kind of setup are not just caching
proxies (such as Squid) or aggregator/accelerator proxies (such as HAProxy)
but full-blown transformation proxies

3. machine-to-machine HTTP dialogue is broken into small, atom-like messages

4. the transformation proxies used can perform validation (is the message
well-formed and unlikely to crash the target machine? was it emitted by an
authorized system?), re-routing (machine A thinks it is talking to
machine B, but B was retired last year, so reroute to machine C instead,
plus load-balancing), and transformation (the dialect used by machine A is
obsolete and won't be understood by machine B; unfortunately A can't be
upgraded for now, so transform all its messages into something B can
understand). A rough sketch of this pipeline follows below.
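
To make point 4 concrete, here is a minimal per-message pipeline sketch in
Python. The machine names, routes and message fields are invented for
illustration only; a real transformation proxy is of course far more
elaborate:

    # Rough sketch of a transformation proxy's per-message pipeline.
    # Machine names, routes and field names are invented for illustration.

    ROUTES = {"machine-b": "machine-c"}      # B was retired, reroute to C
    AUTHORIZED_SENDERS = {"machine-a"}

    def handle(message):
        # validation: well-formed and emitted by an authorized system?
        if not all(k in message for k in ("sender", "target", "payload")):
            raise ValueError("malformed message")
        if message["sender"] not in AUTHORIZED_SENDERS:
            raise PermissionError("unauthorized sender")

        # re-routing: the sender may believe it talks to a retired machine
        target = ROUTES.get(message["target"], message["target"])

        # transformation: upgrade an obsolete dialect the target won't accept
        if message.get("dialect") == "v1":
            message["payload"] = upgrade_v1_to_v2(message["payload"])
            message["dialect"] = "v2"

        forward(message, target)

    def upgrade_v1_to_v2(payload):
        return {"data": payload}             # stand-in for the real conversion

    def forward(message, target):
        print("delivering to", target, ":", message)   # stand-in delivery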

Only the most confidential messages will pass through uninspected in such
a setup (payload crypto), and even then their routing envelope will be
inspected and modified by the proxy to check that they are authorized.

Quite often (for financial exchanges, for example) message signing is
desirable (even though the payload may be altered by the proxy and
re-signed before leaving, the proxy still needs to make sure the original
message was legitimate).
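
For illustration, a minimal sketch of that verify/alter/re-sign flow, again
in Python. The keys, field names and the choice of HMAC are assumptions
made for this example; a real deployment would use proper key management
and likely asymmetric signatures:

    # Sketch of "verify the original signature, inspect/rewrite the envelope,
    # alter the payload, re-sign before forwarding". Keys, field names and
    # the choice of HMAC are assumptions for this example only.
    import hashlib, hmac

    SENDER_KEY = b"shared-secret-with-machine-a"
    PROXY_KEY = b"shared-secret-with-machine-b"

    def sign(payload: bytes, key: bytes) -> str:
        return hmac.new(key, payload, hashlib.sha256).hexdigest()

    def relay(envelope: dict, payload: bytes, signature: str):
        # the original message must be legitimate before anything else happens
        if not hmac.compare_digest(sign(payload, SENDER_KEY), signature):
            raise PermissionError("original message was not legitimate")

        # the routing envelope is inspected and modified even when the
        # payload itself stays opaque to the proxy
        envelope["routed-by"] = "proxy-1"

        new_payload = transform(payload)     # the proxy may alter the payload
        return envelope, new_payload, sign(new_payload, PROXY_KEY)

    def transform(payload: bytes) -> bytes:
        return payload.upper()               # stand-in transformation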

The operations performed on each message are already quite expensive, and
it can take quite a lot of them before something is displayed on the
human-facing presentation layer, so performance is paramount (to get a UI
that does not crawl or lag), and multiplying the encryption tunnels would
be prohibitive there. In fact, to have a chance of working, message passing
must be as optimized as possible, with each step performing the minimal
necessary transform (for functional purposes) and adding as little
overhead as possible.

Those systems are quite common in corporate back offices, and the web site
UIs you see every day use scores of them without you noticing.

And yes, one of the main use cases is definitely MITM transparent
interception and content alteration, as such systems are often used to
isolate legacy systems from environment changes they don't understand (the
legacy system does not notice its environment has been replaced, and it
cannot be told explicitly, because if you could change the legacy system
you wouldn't need a transformation proxy in the first place).

Another fun machine-to-machine HTTP use case (I forgot to mention it
before): an environmental monitoring station deployed off the grid (energy
grid or communication grid), which shuts down most of the year and wakes
up at specified intervals to collect data and send it via radio or
satellite to the mothership. Processing is limited to the energy the solar
cells can collect in the meanwhile (even in the middle of winter, with
little sun plus snow obstruction). Maintenance or a battery change can
only be done by helicopter (when the weather permits) or after a few days
of trekking (because where there is no grid, there are no roads). So you'd
better be as parsimonious as possible with your energy use.

For this particular use case, the latency and processing induced by
setting up a crypto tunnel are a killer. Payload signing (either from
station to mothership, or of orders from mothership to station) is a much
better fit, and cutting down on the application layer (using common
protocol and transport middleware features, which already exist in
optimized form thanks to wide use, instead of writing your own code
without the budget to optimize it properly) is quite appealing.
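
As a rough illustration (the key, station id and field names are made up
for the example), one signature over the readings per wake-up instead of a
crypto tunnel could look like this:

    # Sketch of "sign the payload, skip the crypto tunnel" for the station:
    # one HMAC over the readings per wake-up, sent over plain HTTP / radio /
    # satellite. Key, station id and field names are assumptions.
    import hashlib, hmac, json, time

    STATION_KEY = b"provisioned-at-install-time"

    def build_report(readings: dict) -> bytes:
        body = json.dumps({"station": "st-042",
                           "sent-at": int(time.time()),
                           "readings": readings}).encode()
        tag = hmac.new(STATION_KEY, body, hashlib.sha256).hexdigest()
        return body + b"\n" + tag.encode()   # body plus detached signature

    def verify_report(report: bytes) -> dict:
        body, tag = report.rsplit(b"\n", 1)
        expected = hmac.new(STATION_KEY, body, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, tag.decode()):
            raise ValueError("report rejected: bad signature")
        return json.loads(body)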

Regards,

-- 
Nicolas Mailhot

Received on Monday, 23 July 2012 08:26:24 UTC