W3C home > Mailing lists > Public > ietf-http-wg@w3.org > July to September 2007

Re: New issue: Need for an HTTP request method registry

From: Adrien de Croy <adrien@qbik.com>
Date: Tue, 07 Aug 2007 16:25:34 +1200
Message-ID: <46B7F43E.10605@qbik.com>
To: "Roy T. Fielding" <fielding@gbiv.com>
CC: Henrik Nordstrom <henrik@henriknordstrom.net>, HTTP Working Group <ietf-http-wg@w3.org>

Roy T. Fielding wrote:
> On Aug 6, 2007, at 4:59 PM, Adrien de Croy wrote:
>> TCP flags IMO are akin to HTTP methods.  They define the set of 
>> states and state transitions etc.  TCP options are more akin to HTTP 
>> headers.  They are modifiers rather than definers of state (if that 
>> makes any sense).
>> Adding methods and response codes to HTTP is like adding a new TCP 
>> flag.  Most people wouldn't even consider that an option, and 
>> wouldn't even start down that path.  It likely breaks any 
>> intermediaries as well as servers and clients.
>> Adding new headers is like adding a new option to TCP (like SACK 
>> etc).  That is a lot more approachable, but still highly significant 
>> and onerous.  It is normally a lot less likely to break existing 
>> infrastructure.
>> Anyway, I guess not that useful a simile.  The point I was trying to 
>> make is we shouldn't be adding new methods and status codes unless 
>> there's absolutely no other way to do something, and that thing 
>> really needs to be done.  There's pretty much always another way.
> No, sorry, that is just wrong.  HTTP has been defined such that
> intermediaries do not need to know the meaning of methods, and hence
> can forward extension methods just as safely as they forward header
> fields.  The exception is when the intermediary is specifically
> configured to block unknown methods for local policy reasons, and it
> is for that reason that we *want* method-like extensions to be
> communicated as methods.  We want people to be able to control
> their own networks.
> If an intermediary is accidentally blocking unknown methods,
> then it is simply broken and should be replaced by something that
> actually implements HTTP.
OK, I think I understand your points.  I didn't pick it up from my 
reading of the spec though, and therefore perhaps naively assumed that a 
policy control device (being an intermediary interested in enabling 
administrative policy control) would by default block unknown methods, 
since it can't tell they are safe.  This is common practice in the 
security arena - block whatever you don't understand.  Putting new 
functionality into new methods rather than burying it in some XML 
payload would indeed make it easier to administratively block - thanks 
for that clarification.

The bigger issue then becomes education of system admins as to what 
these methods really mean, and what happens if they are blocked.  User 
education is expensive.

There is also the issue of allowing a system to be configured to 
recognise valid methods (i.e. not just a hard-coded list of known 
methods at the time of implementation).
And what does it all mean for caching?

I imagine there are enough naive implementations of gateways out there 
that this is a significant issue. 

Another example of the perpetual tug of war between complexity and 
conformity I guess.

> New methods should be added when an extension signifies a new type
> of action (e.g., PATCH is better than PUT+Content-Range).  New status
> codes should be added when the response enables automatic handling by
> a client that is distinct from the existing codes.  If we let broken
> implementations determine the meaning of HTTP, then HTTP will slowly
> degrade as implementors make mistakes over time.
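To make the PATCH point concrete, here is a minimal sketch (the handler, URL, and request body are my own) using Python's standard http.server and http.client. It also illustrates the forwarding point from the other direction: the method is just a token on the request line, so a server that implements PATCH handles it, while the same server asked for an unknown method simply answers 501:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class PatchHandler(BaseHTTPRequestHandler):
    # http.server dispatches on the method token: do_PATCH handles PATCH;
    # any method without a matching do_* handler gets 501 Not Implemented.
    def do_PATCH(self):
        length = int(self.headers.get("Content-Length", 0))
        self.rfile.read(length)        # consume the patch document
        self.send_response(204)        # applied; nothing to return
        self.end_headers()

    def log_message(self, *args):      # keep the example quiet
        pass

server = HTTPServer(("127.0.0.1", 0), PatchHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("PATCH", "/doc", body=b"example patch document",
             headers={"Content-Type": "text/plain"})
patch_status = conn.getresponse().status   # server implements PATCH
conn.close()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("FROB", "/doc")               # an unknown extension method
frob_status = conn.getresponse().status    # not implemented here
conn.close()
server.shutdown()
```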
I agree we shouldn't let broken implementations define HTTP - that's the 
tail wagging the dog.  I do think, however, that we should always keep 
implementation difficulties in mind when designing protocols.  Sometimes 
bug-free implementation seems to have been taken as a given when in fact 
it is non-trivial.  Unfortunately it is us fallible humans who keep 
implementing things: mistakes will always happen, and we will always be 
lazy and implement as little of a spec as we think we absolutely need 
(whether or not we can truly perceive how much that is).  Humans seem to 
have a tendency to complicate things over time rather than simplify them 
- it's a common evolutionary process, especially with software (just 
look at Windows Vista compared to XP compared to 2000).  Consumers, on 
the other hand, want things increasingly simplified.



> ....Roy

Adrien de Croy - WinGate Proxy Server - http://www.wingate.com
Received on Tuesday, 7 August 2007 04:25:14 UTC

This archive was generated by hypermail 2.4.0 : Friday, 17 January 2020 17:13:31 UTC