Re: TAG work on SPDY

On 09/25/2011 06:12 PM, Noah Mendelsohn wrote:
>
> On 9/25/2011 11:33 AM, Jonathan Rees wrote:
>> Caching and security do seem to be at odds. A similar situation is
>> CSRF defense using nonces, which breaks cacheability of HTML forms.
>>
>> There are probably architectural solutions to these conflicts. I don't
>> know what they are, although I could probably make something up.
>
> My concern with this is that SPDY is introduced primarily for
> performance, but as far as I can tell uses SSL unconditionally. Thus,
> even information that would otherwise not require protection might be
> less easily cached. So, there's a risk, I think, that the on-the-wire
> performance of a particular retrieval would in certain cases be traded
> for increased load on an origin server, and perhaps increased
> aggregate network traffic, as more copies are sourced directly from
> the origin.
>
> I should emphasize that I'm just speculating that there might be an
> impact on caching, and I'd be happy to hear from those who know more
> about SPDY that there is no problem.
>
>> Architecturally the important thing is the interface or contract, not
>> choice of implementation (protocol). To answer this question we'd need
>> to know what requirements HTTP meets, and then see if SPDY meets them
>> too. (I think I'm repeating Roy's "http: is not HTTP" slogan here.
>> According to the specs at least, http-scheme URIs aren't tied to the
>> HTTP protocol, and the HTTP protocol is not limited to http-scheme
>> URIs.)
>
> Long ago when the TAG was discussing my (as yet unsuccessful) attempts
> to formalize the relationship between schemes and protocols, it was
> pointed out that the naming contract for http-scheme resources is
> somewhat different from that for https. That is, when you retrieve an
> https-named resource, part of the contract is that certificates will be
> checked as part of the name resolution. That's not in general true of
> otherwise similar http-named resources.
>
> So, if the switch were being done in the other direction we would be
> highly suspicious: resolving an https URI without checking certs is
> inappropriate. SPDY goes the other way, with at least some
> consideration to using SSL (and thus, I presume, certs?) for http
> URIs. This seems like a safer switch, but it really is a change of
> contract.  Today, access to my http-named resources cannot fail due to
> certificate problems, and it seems there's consideration of changing
> that.
>
> I'm not offering an early opinion as to whether that's bad, merely
> that it needs careful thought. Also: my impression is that SPDY is
> mostly used with https-named resources anyway, but I did see something
> somewhere about automatically switching to HTTPS even for http URIs.
>
Transient bufferbloat is a real headache.  I finally got a first cut at
it written up and posted it here:

http://www.ietf.org/id/draft-gettys-iw10-considered-harmful-00.txt

Note that under other circumstances I might believe the IW10 change was
desirable; the current initial window isn't large enough to saturate
many edge connections in the first round trip.  But for now, in my view
it makes a bad problem a lot worse.  And one of the presumptions behind
the IW10 change turns out to have been invalid, again due to
bufferbloat: the buffering is so excessive, and growing with each
generation of hardware, that there is no incentive *not* to continue
the current arms race of more connections.
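
To put numbers on that, here's a back-of-the-envelope sketch (the MSS
and RTT figures are typical values I've assumed, not measurements):

    # How much data fits in TCP's first round trip, and what effective
    # throughput that gives, for the old IW3 default and for IW10.
    MSS = 1460  # typical TCP segment payload, in bytes

    def first_rtt_mbps(initial_window_segments, rtt_s):
        """Effective throughput (Mbps) during the first round trip."""
        burst_bytes = initial_window_segments * MSS
        return burst_bytes * 8 / rtt_s / 1e6

    rtt = 0.100  # assume a 100 ms round trip to the data center
    for iw in (3, 10):
        print(f"IW{iw}: {iw * MSS} bytes in the first RTT, "
              f"{first_rtt_mbps(iw, rtt):.2f} Mbps effective")
    # IW3:   4380 bytes, 0.35 Mbps
    # IW10: 14600 bytes, 1.17 Mbps; still well below a 50 Mbps edge
    # link, which is the case for a larger window, but each sharded
    # connection injects its whole burst at data-center line rate.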

Note that in retrospect, I'd have equally strong objections to web
browsers ignoring the RFC 2068/2616 restrictions on the number of
simultaneous TCP connections; but that cat is already out of the bag,
as are "sharded" web sites.

The basic problem is that we're sending large impulses of packets into
the network at line rate from big data centers, depending on buffering
at the receiving end to keep those packets from being dropped.  But
this causes head-of-line blocking for other sessions and applications
(including VOIP and RTCWEB applications).  There is typically only a
single queue, with no classification and no "fair" queuing going on.

I'm seeing as much as 100-150ms of transient latency on a 50Mbps
Comcast connection on certain web sites.  Note that this *by itself*
means that even on very high speed lines, latency/jitter is at or above
the VOIP budget (which has stood at 150ms for decades).  You can do the
math for other bandwidths; a quick sketch follows.
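
Here is that math, as a sketch (the only measured input is the 150ms
at 50Mbps above; the other link rates are assumptions):

    # A transient queue of B bytes takes B*8/rate seconds to drain, and
    # every packet behind it (a VOIP packet, say) waits that long.
    # Working backward from 150 ms observed at 50 Mbps:
    burst_bytes = 0.150 * 50e6 / 8
    print(f"implied burst: ~{burst_bytes / 1e3:.0f} KB")  # ~938 KB

    # The same burst arriving on slower edge links:
    for mbps in (50, 20, 10, 5, 1):
        delay_ms = burst_bytes * 8 / (mbps * 1e6) * 1000
        print(f"{mbps:>3} Mbps -> {delay_ms:>5.0f} ms head-of-line delay")
    # 50 Mbps -> 150 ms; 5 Mbps -> 1500 ms; 1 Mbps -> 7500 ms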

Since most web objects are small, we've effectively destroyed TCP
congestion avoidance, and made a mockery of any TCP flow fairness (not
that I worship at that altar at all: that's why I said "fair" queuing
above).

Now, how do we get the web to be more "network friendly", while
simultaneously working on the shortcomings of the Internet edge?

We have two solutions:
    o deploy HTTP/1.1 pipelining in browsers
    o deploy SPDY

In both cases, we avoid most of this disaster by vastly reducing the
number of TCP connections, improving congestion avoidance (though that
is suspect now due to the excessive buffering in broadband gear and
home routers), but primarily by reducing the packet impulses.

These are not mutually exclusive options: I personally feel both are
desirable in the current dismal situation.

On the HTTP/1.1 side:
    o widely deployed on most existing web sites, and therefore can have
immediate effect
    o caching friendly
    problems:
    o a PITA to get working reliably, due to broken web sites
    o no request IDs in responses, making out-of-order delivery
problematic (see the sketch below)
    o poor man's multiplexing mostly sucks
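
To make the ordering problem concrete, here's a minimal pipelining
sketch (the host is a placeholder, and a real client would parse
Content-Length rather than do one read; this ignores all the
workarounds above):

    # HTTP/1.1 pipelining: two requests written back-to-back on one
    # connection.  Responses carry no request identifier, so the server
    # must answer strictly in request order; a slow first response
    # stalls the second (head-of-line blocking at the HTTP layer).
    import socket

    HOST = "www.example.com"  # placeholder server
    reqs = (f"GET /a HTTP/1.1\r\nHost: {HOST}\r\n\r\n"
            f"GET /b HTTP/1.1\r\nHost: {HOST}\r\n\r\n")

    with socket.create_connection((HOST, 80)) as sock:
        sock.sendall(reqs.encode("ascii"))  # both requests in one write
        # Responses arrive concatenated, /a's before /b's, no matter
        # which one the server could have produced first.
        data = sock.recv(65536)
    print(data.split(b"\r\n", 1)[0])  # status line of the first response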

On the SPDY side:
    o significantly more efficient
    o better parallelism, due to a mux layer (illustrated below)
    problems (currently):
    o caching hostile in its current incarnation
    o not widely deployed on servers, and getting it implemented on
production systems is going to take a long while
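
For contrast, here is a toy model of what the mux layer buys you (this
is only schematic; the real SPDY framing also has headers, flags, and
compression):

    # SPDY-style mux: every frame carries a stream ID, so frames for
    # different responses can interleave and complete in any order;
    # the client reassembles them per stream.
    from collections import defaultdict
    from typing import NamedTuple

    class Frame(NamedTuple):
        stream_id: int  # which request/response this chunk belongs to
        fin: bool       # last frame of this stream?
        payload: bytes

    # A small object (stream 2) finishes while a big one (stream 1) is
    # still in flight, which pipelining cannot express:
    wire = [Frame(1, False, b"<html>big "),
            Frame(2, True, b"tiny.css"),
            Frame(1, True, b"page</html>")]

    streams = defaultdict(bytes)
    for frame in wire:
        streams[frame.stream_id] += frame.payload
        if frame.fin:
            print(f"stream {frame.stream_id} done:",
                  streams[frame.stream_id])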

So these two technologies have fundamentally different deployment time
scales, since production system upgrades are much slower (we can
presume browsers update much more rapidly).

I'm particularly concerned about the caching side of SPDY, on three
grounds:
    o having worked on computing for the developing world, I know proxy
caching is *essential*, not a "nice to have"
    o I'd like to put a proxy in your local network environment: repair
of an object is much faster against a local cache than end-to-end (see
the arithmetic below)
    o the current cert system in SSL/TLS is a disaster, as events have
shown, providing neither authentication nor privacy.  How do we get
this mess fixed?  This is more than a SPDY issue, of course, but
generic to the current web, where authentication/privacy is tied to
the conversation rather than to the data itself, and to a completely
hopeless certificate system.
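
On the second point, the arithmetic is simple (illustrative RTTs; I'm
assuming a full TLS handshake costs 2 round trips on top of TCP's 1):

    # Round trips dominate small-object fetch time, so RTT to the cache
    # is what matters, especially once TLS adds handshake round trips.
    def fetch_ms(rtt_ms, handshake_rtts, transfer_rtts=1):
        return rtt_ms * (handshake_rtts + transfer_rtts)

    # end-to-end over TLS: TCP (1 RTT) + TLS (2 RTTs) + GET (1 RTT)
    print(fetch_ms(150, 3))  # 600 ms to the origin
    # LAN proxy, plain HTTP: TCP (1 RTT) + GET (1 RTT)
    print(fetch_ms(1, 1))    # 2 ms to a local cache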

Ultimately, I'd like to replace HTTP entirely, with something like
CCNx; but that's speculative, and much further out than either of the
above options.
                                    - Jim
