Re: API Rate Limits and HTTP Code [#255]

On Sat, Nov 13, 2010 at 02:04:19PM +1300, Adrien de Croy wrote:
> GET /vi/WX1ABUY6iO8/default.jpg HTTP/1.1
> Host:
> Referer:
> Accept: */*
> User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) 
> AppleWebKit/534.7 (KHTML, like Gecko) Chrome/7.0.517.44 Safari/534.7
> Accept-Encoding: gzip,deflate,sdch
> Accept-Language: en-US,en;q=0.8
> Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
> Connection: Keep-Alive
> 
> HTTP/1.1 200 OK
> Content-Type: text/html
> Pragma: no-cache
> Refresh: 1; URL=
> Connection: Close
> 
> Chrome and FF barf on this and display a broken image.
> In this case, I think the intent is to give the ISP youtube cache 1s to 
> fetch the original so the re-request gets a local cached copy.  However, 
> given that you can effectively use this to tell a browser to re-request 
> after a delay, that's a possible option for rate limiting.  Except for 
> the fact that it doesn't seem to work.
> Is this Refresh header (which isn't in HTTP, so surely should be 
> X-Refresh?) legit?  Is it supported?

Well, it used to be silently supported by many browsers for a long
time, maybe since Netscape 2 or 3. I think it started with the HTML
"meta http-equiv" tag, where we used to see it, and that browsers
then supported it in HTTP headers by extension.
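For reference, the two forms look like this (the target URL below is
just a placeholder for illustration, not anything from the trace above):

```
<meta http-equiv="Refresh" content="1; url=http://example.com/next">

Refresh: 1; url=http://example.com/next
```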

I've made use of it in the stats page of haproxy because it's not a
critical feature, and it happens to work with at least FF and Chrome.
I seem to remember that MSIE supports it too. You can try it here if
you want to check; I have just enabled it:

The difference is that I did not add the ";url=" part to the header;
it only contains the delay, so the browser re-requests the same URL.
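A minimal sketch of that delay-only variant, as a tiny Python test
server (the handler name, page body and helper function are my own
choices for illustration; this is not how haproxy itself serves its
stats page):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class RefreshHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body>stats snapshot</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        # Delay only, no ";url=" part: a browser that honours the
        # header should re-request the same URL after one second.
        self.send_header("Refresh", "1")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the sketch quiet

def fetch_refresh_header():
    """Start the server on an ephemeral port, do one GET,
    and return the value of the Refresh response header."""
    server = HTTPServer(("127.0.0.1", 0), RefreshHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    try:
        url = "http://127.0.0.1:%d/" % server.server_address[1]
        with urllib.request.urlopen(url) as resp:
            return resp.headers.get("Refresh")
    finally:
        server.shutdown()
```

Point a browser at such a server and watch whether it re-fetches the
page after the delay; that is all the "support" amounts to.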

> As for why they do it this way rather than just acting as a caching 
> proxy is beyond me.  I guess it makes concurrent same-requests easier to 
> handle or something.

It's also possible that different servers are used for such fetches and
that they program a download which could not be linked to your existing


Received on Saturday, 13 November 2010 07:27:23 UTC