
Re: New issue: Need for an HTTP request method registry

From: Adrien de Croy <adrien@qbik.com>
Date: Fri, 10 Aug 2007 10:02:29 +1200
Message-ID: <46BB8EF5.6070501@qbik.com>
To: Jamie Lokier <jamie@shareable.org>
CC: HTTP Working Group <ietf-http-wg@w3.org>

Jamie Lokier wrote:
> Adrien de Croy wrote:
>>> In general, I think all methods should be allowed unless proven to be 
>>> a security problem.
>> I think there's a compelling argument to be made for denying all methods
>> unless proven (or at least strongly believed) to be safe.
>> Waiting for something to be proven unsafe isn't safe.  If I were MS, I
>> would definitely adopt the more cautious approach.
> And yet, I imagine all products allow POST, and with POST you can do
> anything at all over HTTP, if the client and server wish.
> Indeed, there are a few implementations which do tunnel arbitrary
> protocols over POST, to get around restrictions.  I can imagine myself
> coding a client which, when it detects that NEWMETHOD (or whatever)
> isn't working, it falls back to tunnelling the equivalent over POST,
> provided the server will understand it.  (Much like the cascade of
> methods we currently try in sequence to make certain web apps work
> everywhere.)
If I were thinking of deploying a new application over HTTP, I wouldn't 
consider for a minute using anything other than GET and POST in the 
first instance, rather than only as a fall-back. It's not worth the 
angst of developing the new method, trying to get it standardised etc 
etc, finding out many intermediaries block it, and having to code a 
fall-back to GET/POST in the end anyway - leaving you with two versions 
of the thing.  Much less hassle to just layer it over GET/POST in the 
first place, and I'm sure that's why it has been done that way more 
often than the other way around (I believe).
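The fall-back Jamie describes might look something like this - a rough sketch only; the X-Tunnel-Method header and the send() callback are made up for illustration, since real deployments each invent their own convention:

```python
# Sketch of falling back to tunnelling over POST when a new method is
# blocked somewhere along the path. send(method, path, headers, body)
# is a stand-in for whatever HTTP client you use; it returns the
# response status code.

def effective_request(send, method, path, body):
    """Try the new method first; tunnel over POST if it is rejected.
    Returns (method_actually_used, status)."""
    status = send(method, path, {}, body)
    if status != 405:  # 405 Method Not Allowed
        return method, status
    # An intermediary or server rejected the new method: fall back to
    # POST, carrying the real method name in a (hypothetical) header.
    status = send("POST", path, {"X-Tunnel-Method": method}, body)
    return "POST", status
```

(In practice a blocking proxy might return 501, or drop the request entirely, so a real client would have to be more defensive than this.)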

But that's the argument for using HTTP as a transport.  :)  There are of 
course other benefits of using HTTP as a transport for this - namely 
that you can then choose to layer your higher-level protocol over other 
protocols as well, rather than being chained eternally to HTTP.

> Why is that allowed?  It's not meant to be a provocative question, but
> hoping for some thought as to why POST to a server is ok, but some
> arbitrary new method is not.
Human nature - familiarity breeds contempt and all that.  People writing 
these things are familiar with GET/POST, and what's more, nothing much 
works without them.  Lots of things still work without the other, 
rarely-used methods.  Putting out a product without GET and POST support 
would be like trying to sell a car with only 1 wheel and no engine.  
Putting out a product without some of the other methods would be more 
like trying to sell a car with only a 4-speaker sound system instead of 6.

I'm sure people don't consider POST to be entirely safe, but they know 
they don't even have a product if they don't support it.

Also, POST doesn't always work either, especially with NTLM.

And I know NTLM is "broken" and "unsupported" and violates the spec in 
all sorts of other ways, but on the other hand millions and millions of 
people have to use it.  To use Digest on a Windows platform you can't 
authenticate against the Windows or AD user database unless you rewrite 
that database (since there's no conversion between the one-way hashes).  
I can't see MS doing that when they can (and have) just kludged NTLM 
into HTTP.  Is the fact that they had to kludge it in without spec 
support an indication of a failing in HTTP?
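The hash incompatibility is concrete: Digest (RFC 2617) requires the server to know MD5(user:realm:password), while the Windows/AD database stores a different one-way hash of the password, and neither can be derived from the other. A rough sketch of the Digest side, with made-up names:

```python
# Sketch of the RFC 2617 Digest computation. The point: the server
# only ever needs HA1 = MD5(user:realm:password), so it could store
# HA1 instead of the password - but the Windows/AD database holds a
# differently-constructed one-way hash, and one one-way hash can't be
# converted into the other. Hence the "rewrite the database" problem.
import hashlib

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

def digest_response(user, realm, password, method, uri, nonce):
    ha1 = md5_hex(f"{user}:{realm}:{password}")   # what the server must know
    ha2 = md5_hex(f"{method}:{uri}")
    return md5_hex(f"{ha1}:{nonce}:{ha2}")        # client's proof of password
```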

> -- Jamie

Adrien de Croy - WinGate Proxy Server - http://www.wingate.com
Received on Thursday, 9 August 2007 22:02:12 UTC

This archive was generated by hypermail 2.4.0 : Friday, 17 January 2020 17:13:31 UTC