
Re: REPOST (was: HTTP working group status & issues)

From: Koen Holtman <koen@win.tue.nl>
Date: Sat, 5 Oct 1996 20:28:37 +0200 (MET DST)
Message-Id: <199610051828.UAA05349@wsooti04.win.tue.nl>
To: "Roy T. Fielding" <fielding@liege.ICS.UCI.EDU>
Cc: koen@win.tue.nl, MACRIDES@sci.wfbr.edu, http-wg%cuckoo.hpl.hp.com@hplb.hpl.hp.com
Roy T. Fielding:
>  [Koen:]
>> If I understood his messages correctly, Roy has proposed several ways
>> to do this:
>> 
>> a. 303 responses.  This will work, but 303 responses add a RTT
>> penalty, so I don't think this is a solution.
>
>Given the scope of "the problem", I see no reason why a RTT penalty
>matters.

Correct me if I'm wrong, but I understood that search engines which
use non-ISO-8859-1 input are within the scope of the problem.  That is
a large enough scope to worry about RTTs.

>  Using 303 also results in better cache hits, which would
>offset the penalty in the rare cases that this is ever needed.
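To make the round trip concrete, here is a minimal sketch of the 303 pattern under discussion: the POST is answered with `303 See Other', forcing the client to issue a second GET before it sees any results.  (The `/search' and `/results' endpoints are hypothetical; the point is only the request count.)

```python
# Sketch: a POST answered with 303 costs one extra request (the follow-up GET).
import http.server
import threading
import urllib.parse
import urllib.request

class Handler(http.server.BaseHTTPRequestHandler):
    requests_seen = []          # record each request so we can count round trips

    def do_POST(self):
        self.requests_seen.append(('POST', self.path))
        length = int(self.headers['Content-Length'])
        query = self.rfile.read(length).decode('iso-8859-1')
        # Redirect to a GET-able URL that re-runs the same search.
        self.send_response(303)
        self.send_header('Location', '/results?' + query)
        self.end_headers()

    def do_GET(self):
        self.requests_seen.append(('GET', self.path))
        body = b'search results'
        self.send_response(200)
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):   # keep the sketch quiet
        pass

server = http.server.HTTPServer(('127.0.0.1', 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# urllib follows the 303 automatically, turning the POST into a GET.
data = urllib.parse.urlencode({'q': 'koen'}).encode()
reply = urllib.request.urlopen(f'http://127.0.0.1:{port}/search', data=data)
print(len(Handler.requests_seen))   # 2: the POST plus the extra GET
server.shutdown()
```

Two requests where one would do -- that is the RTT penalty I am worried about.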
>
>> b. Putting 
>> 
>>   Content-Location: url_to_GET_the_previous_result_from 
>> 
>> in the response.  This would work _if_ the 1.1 spec would guarantee
>> that the URL supplied in the Content-Location header serves the
>> previous result of the POST request.  But I can find no such guarantee
>> in the spec.  Anyway, I don't like having to create new resources to
>> eliminate the `repost form data' popups.
>
>You don't "create new resources" -- they just exist.

OK, let me restate my objection: I don't like having to cause new
resources to exist.

>  The URL would be
>interpreted by the same mechanism that interpreted the original POST --
>the only difference being that the parameters are preselected and
>probably encoded.

You make it sound easy, but it is not.  I would have to write
encoders/decoders between POST request messages and URLs for this
stuff.  I would have to worry about the resulting URL being so large
that I run into client limitations.  I would have to cache large POST
requests on the server side so that I can return a URL that is short
enough.

Ugh.  No thanks.
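To make the size objection concrete: folding even one large free-text form field into a GET URL quickly exceeds the URL lengths that clients will accept.  (The `/search' URL below is hypothetical.)

```python
# Sketch: encoding a large POST body into a URL blows past client URL limits.
import urllib.parse

form_fields = {'query': 'x' * 5000}          # one large free-text field
url = 'http://site/search?' + urllib.parse.urlencode(form_fields)
print(len(url))   # 5025 characters -- far past common client limits
```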

The `Redo-safe: yes' solution can be had at the cost of a few lines of
code in user agents and a few lines of code in POST processing
scripts.  So this is the solution I prefer.
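A sketch of both sides, to show how little code `Redo-safe: yes' needs.  The function names and the header-dictionary shape are mine, purely for illustration: the server-side script adds one header when its POST handling is free of side effects, and the user agent consults that header before raising the popup.

```python
# Sketch: both halves of the `Redo-safe: yes' proposal are a few lines each.
def make_response(body):
    # server side: a side-effect-free form handler adds one header
    return {'Redo-safe': 'yes', 'Content-Type': 'text/html'}, body

def needs_repost_warning(method, response_headers):
    # client side: skip the popup when the origin declared the POST safe
    return method == 'POST' and response_headers.get('Redo-safe') != 'yes'

headers, _ = make_response('<html>results</html>')
print(needs_repost_warning('POST', headers))   # False: no popup needed
```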

>[....]

>> c. Putting 
>> 
>>   Link: <http://site/that_resource>; rel=source
>> 
>> in the response.  I don't know if this will give the guarantee missing
>> for Content-Location, but in any case it not part of 1.1 and is a bit
>> too twisted for my taste.  And I still would have to create new
>> resources.
>
>It is more part of 1.1 than any other solution being discussed -- see the
>appendix.

I found `Link' in the appendix, but I found no mention of the
semantics of `rel=source' in any part of the 1.1 draft spec.  I could
not find anything in the HTML 2.0 spec either.  Where should I look
for a definition of `rel=source'?

Contrary to `rel=source', the terms `safe' and `idempotent' _are_
defined in 1.1.

>  More importantly, it is part of the original HTTP design and
>not just another midnight hack job.

Option c. may recycle an ancient header name, but that is irrelevant
if it does not solve the problem we want to solve.

>  Besides, it gives you something
>to put in the bookmarks file.

I can also put forms in the bookmark file.

 ----------------

To summarize: my main objection to

 Link: <http://site/that_resource>; rel=source

is that I cannot encode arbitrarily large POST requests in a URL and
expect them to work.  So I must either keep state or live with a size
limit.  Both are bad.
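For concreteness, here is what the keep-state escape hatch looks like.  The `/redo/' URL scheme and the hashing are my invention, just to illustrate the shape: the URL stays short only because the server now holds the POST body, which must be persisted and expired -- exactly the state I object to.

```python
# Sketch: keeping a short URL by storing the large POST body server-side.
import hashlib

post_store = {}   # server-side state: token -> original POST body

def shorten(post_body: bytes) -> str:
    token = hashlib.sha256(post_body).hexdigest()[:16]
    post_store[token] = post_body                 # must be persisted and expired
    return f'http://site/redo/{token}'            # hypothetical URL scheme

url = shorten(b'field=' + b'x' * 5000)
print(len(url))   # short, but only because the server now keeps state
```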

All other objections I mentioned above are secondary.

While rel=source may be useful for some other things, it does not
offer a clean solution to the `repost form data' problem.

> ...Roy T. Fielding

Koen.
Received on Saturday, 5 October 1996 11:36:31 EDT
