- From: Mark Nottingham <mnot@mnot.net>
- Date: Sat, 29 Jan 2011 16:04:27 +1100
- To: Bryce Nesbitt <bnesbitt@bepress.com>
- Cc: HTTP Working Group <ietf-http-wg@w3.org>
It seems like you want the scope of the Retry-After on the 503 to be the entire server, correct? I.e., on a 200, its semantic is roughly "here's your answer, but don't ask me (the server) another question for n seconds."

On 29/01/2011, at 6:19 AM, Bryce Nesbitt wrote:

> Mark,
> What you're missing is the proper use case. I agree with you for a
> single resource -- max-age on the 503 is great for a resource that has
> freshness associated with it.
>
> I'm talking about incremental retrieval of a large data set. Each
> resource is new to the client, and will not be cacheable:
>
> GET /oai2.cgi?token=Az123Z&offset=0&count=100&query='Obama'
> HTTP/1.1 200
> Retry-After: 10
> Cache-Control: no-store, no-cache, private
>
> GET /oai2.cgi?token=Az123Z&offset=100&count=100
> HTTP/1.1 503 Excessive queries from this IP address. Please respect the Retry-After header.
> Retry-After: 10
> Cache-Control: no-store, no-cache, private
>
> GET /oai2.cgi?token=Az123Z&offset=100&count=100
> HTTP/1.1 200
> Retry-After: 5
> Cache-Control: no-store, no-cache, private
>
> GET /oai2.cgi?token=Az123Z&offset=100&count=100
> HTTP/1.1 200
> Retry-After: 5
> Cache-Control: no-store, no-cache, private
>
> On Tue, Jan 25, 2011 at 5:37 PM, Mark Nottingham <mnot@mnot.net> wrote:
>>
>> The effect of a max-age on a 503 will be to tell clients they can treat that response as authoritative for the stated number of seconds. I think this will have the desired effect -- i.e., clients that are aware of this will avoid making requests for the given period.
>>
>> The bonus, of course, is that caches are already widely deployed, so it's easy to leverage them for this.
>>
>> You'll either need to send Cache-Control: private or an appropriate Vary header to ensure that the 503 doesn't "spill" to other users of a shared cache.
>>
>> What am I missing?
>>
>> Cheers,
>>
>> On 22/01/2011, at 7:54 AM, Bryce Nesbitt wrote:
>>
>>> Max-age is a different concept.
>>> I'm controlling a robot that is sequentially retrieving data... say records 50,000-60,000 out of a data set of 500,000 elements.
>>>
>>> On Thu, Nov 11, 2010 at 7:01 PM, Mark Nottingham <mnot@mnot.net> wrote:
>>> Good to hear you're using 503 + Retry-After.
>>>
>>> Instead of setting Retry-After on 200, why not just set an appropriate Cache-Control: max-age?
>>>
>>> Regards,
>>>
>>> On 12/11/2010, at 9:41 AM, Bryce Nesbitt wrote:
>>>
>>>> Berkeley Electronic Press (http://www.bepress.com/) is using 503 plus retry_after to rate limit:
>>>>
>>>> out $cgiutils->header(-status=>"503 Excessive OAI queries from IP address ($ip)", -retry_after => $throttle);
>>>> out $cgiutils->start_html(-title=>"OAI database busy");
>>>>
>>>> But more importantly, we're using (and this is an extension) a 200 status code for success, but also including retry_after:
>>>>
>>>> out $cgiutils->header(MIME_TYPE => 'text/xml', -retry_after => $throttle);
>>>>
>>>> The goal is to say "you got this reply, but if you ask again in less than 5 seconds, we're going to block you." It saves an entire server round trip. Going strictly by the standard, our clients would need to do:
>>>>
>>>> Request->200
>>>> Request->503 retry after 10 seconds
>>>> Wait ten seconds
>>>> Request->200
>>>> Request->503 retry after 10 seconds
>>>> Wait ten seconds
>>>>
>>>> What would it take to make retry_after officially allowed on 2xx?
>>>> Then you'd get:
>>>>
>>>> Request->200 retry after 10 seconds
>>>> Wait ten seconds
>>>> Request->200 retry after 10 seconds
>>>> Wait ten seconds
>>>> Request->200 retry after 10 seconds
>>>> Wait five seconds
>>>> Request->503 retry after 5 seconds
>>>> Wait five seconds
>>>> Request->200 retry after 10 seconds
>>>> Wait ten seconds
>>>>
>>>> We have this in production use with a particular high-volume client, who in fact halved the number of transactions hitting our server.
>>>>
>>>> We use 5xx because we want the client to come back later, not think the resource is gone.
>>>> We added retry_after to 2xx codes to reduce the number of server round trips, for clients that are crawling
>>>> the heck out of us.
>>>>
>>>> On Thu, Oct 28, 2010 at 5:33 PM, Mark Nottingham <mnot@mnot.net> wrote:
>>>> Hi Karl,
>>>>
>>>> That's a very timely question; I've had similar discussions privately with a number of folks, and have created an issue to track it:
>>>> http://trac.tools.ietf.org/wg/httpbis/trac/ticket/255
>>>>
>>>> My personal feeling is that we should clarify one of the existing status codes to include this use case, allowing space for someone to define an additional header or two giving more information. I'm happy to be argued into a new status code, but I don't think it's necessary at this point.
>>>>
>>>> Based on the links below, it looks like common practice is to use either 403 or 503.
>>>>
>>>> The commonly cited case for 4xx codes (also explained in the blog you link to) is that it's a client error, not a server error. While I agree generally, it's important to keep in mind the effects on clients. 403 (and any 4xx error that a client doesn't recognise) will result in a conservative client believing that it's not allowed to resubmit the request, whereas 503 + Retry-After results in the correct client behaviour.
>>>>
>>>> That said, I'm more interested in coming to an agreement and assuring that existing software doesn't get broken than in coming up with "perfect" semantics. To that end, it looks like our options are:
>>>>
>>>> a) Clarify that 503 can be used for rate limiting, or
>>>>
>>>> b) Clarify that 403 can be used for rate limiting, and allow Retry-After to appear there, or
>>>>
>>>> c) Define a new status code (4xx or 5xx TBD).
>>>>
>>>> Some other folks doing similar things:
>>>>
>>>> GitHub uses 403:
>>>> http://support.github.com/discussions/site/1151-api-returns-403-when-rate-limiting
>>>>
>>>> Basecamp uses 503:
>>>> http://developer.37signals.com/basecamp/
>>>>
>>>> Google seems to use 404:
>>>> http://groups.google.com/group/google-base-data-api/browse_thread/thread/cb5752b43b030fed
>>>>
>>>> Amazon appears to use 503:
>>>> http://developer.amazonwebservices.com/connect/message.jspa?messageID=142896
>>>>
>>>> Apache mod_limitipconn uses 503:
>>>> http://dominia.org/djao/limit/contrib/sbarta/mod_limitipconn.c
>>>>
>>>> Lighttpd mod_evasive uses 403:
>>>> http://redmine.lighttpd.net/projects/lighttpd/repository/entry/trunk/src/mod_evasive.c
>>>>
>>>> ... as does Apache mod_evasive:
>>>> http://www.zdziarski.com/blog/?page_id=442
>>>>
>>>> Nginx uses 503:
>>>> http://wiki.nginx.org/NginxHttpLimitZoneModule
>>>>
>>>> Yahoo uses 403:
>>>> http://developer.yahoo.com/search/rate.html
>>>>
>>>> I see references to Facebook using 341, e.g.:
>>>> http://forum.developers.facebook.net/viewtopic.php?pid=245966
>>>>
>>>> Mozilla Weave doesn't specify a status code, but re-specifies Retry-After:
>>>> https://wiki.mozilla.org/Labs/Weave/Sync/1.0/API#X-Weave-Backoff
>>>
>>> --
>>> Mark Nottingham http://www.mnot.net/
>>>
>>> --
>>> Bryce Nesbitt
>>> The Berkeley Electronic Press
>>> bepress: sustainable scholarly publishing
>>
>> --
>> Mark Nottingham http://www.mnot.net/
>
> --
> Bryce Nesbitt
> The Berkeley Electronic Press
> bepress: sustainable scholarly publishing

--
Mark Nottingham http://www.mnot.net/
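For concreteness, here is a minimal sketch of the client behaviour discussed in the thread: a harvester that honours Retry-After whether it arrives on a 503 or, per the proposed extension, on a 200. This is not the bepress client; the endpoint, token, paging parameters, and the process_page() handler are all illustrative, and only the delta-seconds form of Retry-After is handled.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use LWP::UserAgent;

    my $ua     = LWP::UserAgent->new( timeout => 30 );
    my $offset = 0;

    while (1) {
        my $url = "http://example.org/oai2.cgi?token=Az123Z&offset=$offset&count=100";
        my $res = $ua->get($url);

        # Retry-After may be delta-seconds or an HTTP-date; this sketch
        # only handles the delta-seconds form.
        my $ra    = $res->header('Retry-After');
        my $delay = ( defined $ra && $ra =~ /^\d+$/ ) ? $ra : 0;

        if ( $res->code == 503 ) {
            # Throttled: back off for the advertised period (default 10s),
            # then retry the same request.
            sleep( $delay || 10 );
            next;
        }
        last unless $res->is_success;

        # Hypothetical handler; returns false once the result set is exhausted.
        last unless process_page( $res->decoded_content );
        $offset += 100;

        # The extension under discussion: a 200 can also carry Retry-After,
        # telling a well-behaved client to pause before its next request
        # instead of burning a round trip on a 503.
        sleep($delay) if $delay;
    }

    sub process_page {
        my ($xml) = @_;
        # Parse and store the records here; return true while more pages remain.
        return 0;
    }

And a similarly illustrative server-side fragment for Mark's alternative: mark the 503 itself as cacheable for the back-off period by pairing Retry-After with Cache-Control: max-age plus private, so the throttling response doesn't spill across users of a shared cache. This is plain CGI output, not the $cgiutils wrapper quoted above.

    #!/usr/bin/perl
    # Emit a throttling response: the 503 is cacheable (and private) for the
    # same period named in Retry-After.
    use strict;
    use warnings;

    my $throttle = 10;    # back-off period in seconds (illustrative)

    print "Status: 503 Service Unavailable\r\n";
    print "Retry-After: $throttle\r\n";
    print "Cache-Control: max-age=$throttle, private\r\n";
    print "Content-Type: text/html\r\n";
    print "\r\n";
    print "<html><body><p>OAI database busy; please retry in $throttle seconds.</p></body></html>\n";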
Received on Saturday, 29 January 2011 06:14:45 UTC