More SPDY Related Questions..

Continuing my review of the SPDY draft... I have a few questions relating to
SPDY and load balancer / reverse proxy setups. The intent is not to poke
holes but to understand what the SPDY authors had in mind for these
scenarios.

1. Imagine two client applications (A and B) accessing an Origin (D) via a
Reverse Proxy (C). When a client accesses /index.html on Origin D, the
Origin automatically pushes static resources /foo.css, /images/a.jpg and
/video/a.mpg to the client.

Basic flow looks something like...

A                  RP                 O
|                   |                 |
|                   |                 |
|==================>|                 |
| 1)SYN             |                 |
|<==================|                 |
| 2)SYN_ACK         |                 |
|==================>|                 |
| 3)ACK             |                 |
|==================>|                 |
| 4)SYN_STREAM (1)  |                 |
|                   |================>|
|                   | 5) SYN          |
|                   |<================|
|                   | 6) SYN_ACK      |
|                   |================>|
|                   | 7) ACK          |
|                   |================>|
|                   | 8) SYN_STREAM(1)|
|                   |<================|--
|                   | 9) SYN_STREAM(2)| |
|                   |  uni=true       | |
|<==================|                 | |
| 10) SYN_STREAM(2) |                 | |
|  uni=true         |                 | | Content Push
|                   |<================| |
|                   | 11) SYN_REPLY(1)| |
|<==================|                 | |
| 12) SYN_REPLY(1)  |                 | |
|                   |                 | |
|                   |<================| |
|<==================| 13) DATA (2,fin)|--
| 14) DATA (2,fin)  |                 |
|                   |                 |
|                   |                 |

My question is: what does this picture look like if Clients A and B
concurrently request /index.html?

With HTTP/1.1, static resources can be pushed off to CDNs, stored in
caches, and distributed to any number of places in order to improve overall
performance. Suppose /index.html is cached at the RP. Is the RP expected to
also cache the pushed content? Is the RP expected to keep track of the fact
that /foo.css, /images/a.jpg and /video/a.mpg were pushed before, and push
those automatically from its own cache when it returns the cached instance
of /index.html? If not, when the caching proxy returns /index.html from its
cache, A and B will be forced to issue GETs for the static resources,
defeating the purpose of pushing those resources in the first place.
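
To make the idea concrete, here is a minimal sketch of what a push-aware RP
cache might look like: it records which resources the Origin pushed alongside
each response and replays that list on a cache hit. The class and method names
are invented for illustration; nothing like this exists in the draft.

```python
# Hypothetical reverse-proxy cache that remembers which resources the
# Origin pushed alongside each response, so it can replay those pushes
# when serving the primary resource from cache.
class PushAwareCache:
    def __init__(self):
        self.responses = {}     # url -> cached response body
        self.pushed_with = {}   # url -> urls the Origin pushed alongside it

    def store(self, url, body, pushed_urls=()):
        self.responses[url] = body
        self.pushed_with[url] = list(pushed_urls)

    def lookup(self, url):
        """Return (body, urls_to_push) on a hit, or None on a miss."""
        if url not in self.responses:
            return None
        return self.responses[url], self.pushed_with.get(url, [])

cache = PushAwareCache()
cache.store("/index.html", b"<html>...</html>",
            ["/foo.css", "/images/a.jpg", "/video/a.mpg"])
body, to_push = cache.lookup("/index.html")  # RP now pushes to_push itself
```

The open question is whether the spec intends RPs to implement something like
this, or whether pushed content is simply opaque to intermediaries.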

In theory, we could introduce new Link relations that tell caches when to
push cached content... e.g.

  Content-Type: image/jpeg
  Cache-Control: public
  Link: </index.html>; rel="cache-push-with"
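
A cache honoring that header would just need to scan the Link headers of its
stored entries for the new relation. A rough sketch, assuming the
"cache-push-with" rel proposed above (the function name is made up, and the
parsing deliberately handles only the simple single-parameter form):

```python
import re

# Hypothetical parser for the proposed "cache-push-with" Link relation:
# given a resource's Link header value, return the URLs alongside which
# this resource should be pushed from cache.
def push_with_targets(link_header):
    targets = []
    for part in link_header.split(","):
        m = re.match(r'\s*<([^>]+)>\s*;\s*rel="cache-push-with"', part)
        if m:
            targets.append(m.group(1))
    return targets

push_with_targets('</index.html>; rel="cache-push-with"')
```

A real implementation would need full RFC-style Link header parsing
(parameter order, multiple params, etc.), but the idea is that simple.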

What does cache validation look like for pushed content? E.g. what happens
if the cached /index.html is fresh and served from the cache but the
related pushed content also contained in the cache is stale?
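
One plausible answer is that the cache only replays pushes for entries that
are themselves still fresh, and silently skips stale ones so the client falls
back to an ordinary GET (which the cache can then revalidate). A sketch of
that policy, with invented names and a simplified expiry model:

```python
import time

# Sketch: when serving a fresh /index.html from cache, push only the
# associated resources that are themselves still fresh. Stale entries
# are skipped, so the client issues a normal GET and the cache can
# revalidate that entry against the Origin as usual.
def select_pushable(cache_entries, now=None):
    now = now if now is not None else time.time()
    return [url for url, entry in cache_entries.items()
            if entry["expires"] > now]

entries = {
    "/foo.css":      {"expires": time.time() + 3600},  # still fresh
    "/images/a.jpg": {"expires": time.time() - 10},    # stale, not pushed
}
```

Whether that is the intended behavior, or whether stale associated content
should block the push of /index.html entirely, is exactly what the spec
leaves unsaid.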

I'm sure I can come up with many more questions, but it would appear to me
that server push in SPDY is, at least currently, fundamentally incompatible
with existing intermediate HTTP caches and RPs, which is definitely a
major concern.

As a side note, however, it does open up the possibility of a new type of
proxy that can be configured to automatically push static content on the
Origin's behalf... e.g. a SPDY proxy that talks to a backend HTTP/1.1
server, learns that /images/foo.jpg is always served with /index.html, and
so automatically pushes it to the client. Such services would be beneficial
in general, but the apparent incompatibility with existing deployed
infrastructure is likely to significantly delay adoption. Unless, of
course, I'm missing something fundamental :-)
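
The learning part of such a proxy could be as simple as counting which
subresources clients request shortly after each page, and promoting a
subresource to "push proactively" once it has accompanied the page often
enough. A toy sketch (class name, API, and threshold are all invented):

```python
from collections import Counter, defaultdict

# Toy "learning" SPDY front-end for an HTTP/1.1 backend: it counts which
# subresources clients fetch after each page, and once a subresource has
# accompanied a page `threshold` times, starts pushing it proactively.
class PushLearner:
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.follows = defaultdict(Counter)  # page -> Counter of subresources

    def observe(self, page, subresource):
        self.follows[page][subresource] += 1

    def push_candidates(self, page):
        return [url for url, n in self.follows[page].items()
                if n >= self.threshold]

learner = PushLearner(threshold=2)
learner.observe("/index.html", "/images/foo.jpg")
learner.observe("/index.html", "/images/foo.jpg")
learner.observe("/index.html", "/one-off.js")  # seen once, not pushed
```

A production version would obviously need eviction, per-client state, and
respect for Cache-Control, but the core heuristic is this small.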

2. While we're on the subject of Reverse Proxies... the SPDY spec currently
states:
   When a SYN_STREAM and HEADERS frame which contains an
   Associated-To-Stream-ID is received, the client must
   not issue GET requests for the resource in the pushed
   stream, and instead wait for the pushed stream to arrive.

Question is: does this restriction apply to intermediaries like Reverse
Proxies? For instance, suppose the server is currently pushing a rather
large resource to Client A, and Client B comes along and sends a GET request
for that specific resource. Assume that the RP ends up routing both
requests to the same backend Origin server. A strict reading of the above
requirement means that the RP is required to block Client B's GET request
until the push to Client A is completed. Further, the spec is not clear on
whether this restriction only applies to requests sent over the same TCP
connection. Meaning, a strict reading of this requirement suggests that even
if the RP opens a second connection to the Origin server, it is still
forbidden to forward Client B's GET request until Client A's push has
completed.

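If intermediaries are exempted, the natural behavior would be for the RP to
tee the in-flight push into a shared buffer and satisfy Client B from that,
rather than blocking or re-requesting. A sketch of that idea; the class and
its callback-based API are invented for illustration:

```python
# Sketch: instead of blocking Client B's GET while a push to Client A is
# in flight, the RP tees the pushed bytes into a shared buffer. Clients
# that want the same resource attach a delivery callback and are served
# from the buffer once the pushed stream finishes (DATA frame with fin).
class InFlightPush:
    def __init__(self, url):
        self.url = url
        self.chunks = []
        self.done = False
        self.waiters = []  # delivery callbacks for interested clients

    def on_data(self, chunk, fin=False):
        self.chunks.append(chunk)
        if fin:
            self.done = True
            body = b"".join(self.chunks)
            for deliver in self.waiters:
                deliver(body)
            self.waiters.clear()

    def attach(self, deliver):
        if self.done:
            deliver(b"".join(self.chunks))  # late joiner: serve immediately
        else:
            self.waiters.append(deliver)
```

Whether the spec permits an RP to do this, or whether the "must not issue
GET" rule forces serialization, is the clarification I'm after.
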
- James

Received on Saturday, 21 July 2012 19:07:03 UTC