
Re: WGLC p1: Tear-down

From: Willy Tarreau <w@1wt.eu>
Date: Tue, 30 Apr 2013 08:12:54 +0200
To: "Adrien W. de Croy" <adrien@qbik.com>
Cc: Mark Nottingham <mnot@mnot.net>, Zhong Yu <zhong.j.yu@gmail.com>, Ben Niven-Jenkins <ben@niven-jenkins.co.uk>, HTTP Working Group <ietf-http-wg@w3.org>
Message-ID: <20130430061254.GE21517@1wt.eu>

Hi Adrien,

On Tue, Apr 30, 2013 at 02:52:49AM +0000, Adrien W. de Croy wrote:
> >> Do we need a way for a server to communicate which requests may be 
> >>made with impunity multiple times, and which should only be made once? 
> >>e.g. safe to retry or not. then only pipeline requests that are safe 
> >>to retry according to the server (rather than according to some 
> >>assumption or heuristic at the client, as such things are inevitably 
> >>wrong on occasion).
> >
> >That's built into the method of the request...
> that's what I meant by assume.
> UA authors might assume GET is idempotent.

UAs are best placed to know where the information they send comes from.
I suspect that when they send a form using GET they don't trust
idempotence. However, if a link has an embedded query string, they may
consider the request idempotent since it's present in a link.

> It doesn't stop web 
> developers from writing sites that have significant side-effects on GET. 

We'll always get such things from clueless people, but it's also the
goal of the spec to insist on the risks of not respecting the standard.
If it's clearly written that GET/HEAD/PUT/DELETE are idempotent and that
browsers will treat this statement as true, then web developers will
have some guidance about the risk of doing stupid things.
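To make the point concrete, here's a minimal sketch (names are illustrative, not from any real client) of how a UA could gate retries and pipelining on the method's idempotence rather than on heuristics:

```python
# Illustrative sketch: gate retry/pipelining decisions on the method.
# GET/HEAD/PUT/DELETE (plus OPTIONS/TRACE) are defined as idempotent
# by the HTTP spec; POST and PATCH are not.
IDEMPOTENT_METHODS = {"GET", "HEAD", "PUT", "DELETE", "OPTIONS", "TRACE"}

def safe_to_retry(method: str) -> bool:
    """True if the request may be resent after a connection tear-down."""
    return method.upper() in IDEMPOTENT_METHODS
```

Of course, as noted above, this only works if sites actually honour the spec's semantics for these methods.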

> Getting these people to indicate safety of retrying is another problem. 

If they already use the wrong method and don't understand idempotence,
we can't expect them to advertise it correctly.

> I guess this is one reason why pipelining isn't that widespread yet.  
> Lots of problems with it.

No, it's really because many intermediaries and servers have had issues
with it, causing such requests to frequently stall or be dropped. It's
not always easy to get right, despite appearing obvious at first. I
recently managed to break it in haproxy without noticing until a user
reported some abnormal errors. Just to give you a rough idea, I believe
the issue was not caused by the request itself but by a lack of space in
the response buffer when haproxy had to emit a redirect based on the
second request: it forgot to wait for free space in the *response*
buffer before starting to parse the *request* buffer. So it's easier to
break than to keep in good shape.
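The rule that got broken can be sketched like this (this is not haproxy's actual code, just an assumed model of the invariant: never start parsing the next pipelined request until the response buffer could hold a worst-case immediate reply such as a redirect):

```python
# Illustrative sketch of the pipelining invariant described above.
# MAX_IMMEDIATE_REPLY is an assumed worst-case size for a reply the
# proxy may have to emit itself (e.g. a redirect) for the next request.
MAX_IMMEDIATE_REPLY = 512

def may_parse_next_request(response_buffer_free: int) -> bool:
    """Only parse the next pipelined request once the response buffer
    has room for any reply we might need to emit for it."""
    return response_buffer_free >= MAX_IMMEDIATE_REPLY
```

Skipping this check is exactly the kind of subtle bug that makes pipelining easy to break.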

Clearly, pipelining opens a new class of bugs, but there is no excuse
for not fixing them. If the spec provides some guidance on this, we'll
manage to slowly fix the web.

Received on Tuesday, 30 April 2013 06:13:24 UTC
