
Re: Extension methods & XMLHttpRequest

From: Jamie Lokier <jamie@shareable.org>
Date: Mon, 12 Jun 2006 12:15:41 +0100
To: "Roy T. Fielding" <fielding@gbiv.com>
Cc: HTTP Working Group <ietf-http-wg@w3.org>
Message-ID: <20060612111541.GA10244@mail.shareable.org>

Roy T. Fielding wrote:
> All cookies are non-secure.  Using them for security purposes (like
> access control) is just begging for security holes.

Yes and no.  They're less secure in the sense that they're not
typically stored as carefully.  They're more secure in the sense that
with cookies a site can revoke access under the site's own control by
removing access from a particular cookie value, e.g. to implement a
login timeout policy, an explicit logout button, single-client access,
or whatever the site's policy requires.

It's no coincidence that almost every site on the net uses cookies for
access control, rather than HTTP user/password.  It certainly is not
due to lack of security considerations.
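The revocation point above can be sketched in a few lines.  This is a
minimal, hypothetical in-memory session store (all names are
illustrative, not from any real framework): because the credential is
just a cookie value keyed to server-side state, the site can invalidate
it at any moment, which HTTP user/password authentication cannot do.

```python
import time

# Hypothetical in-memory session store keyed by cookie value (a sketch,
# not a real framework's API).  Deleting the server-side entry revokes
# the cookie immediately, no matter what the client still has stored.

SESSION_TIMEOUT = 30 * 60  # e.g. a 30-minute login timeout policy

sessions = {}  # cookie value -> {"user": ..., "last_seen": ...}

def login(cookie_value, user):
    sessions[cookie_value] = {"user": user, "last_seen": time.time()}

def check(cookie_value):
    """Return the user if the cookie is still valid, else None."""
    entry = sessions.get(cookie_value)
    if entry is None:
        return None  # never issued, or explicitly revoked
    if time.time() - entry["last_seen"] > SESSION_TIMEOUT:
        del sessions[cookie_value]  # timed out: revoke server-side
        return None
    entry["last_seen"] = time.time()
    return entry["user"]

def logout(cookie_value):
    # Explicit logout button: the cookie becomes worthless immediately.
    sessions.pop(cookie_value, None)
```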

> >    1. The browser does not post cookies with _any_ XMLHttpRequest.
> >       That's very unhelpful - they are useful.
> Sure, they are useful for poorly designed sites that expect to receive
> GET and POST requests.  They might even make sense for PUT and DELETE
> and a few other methods.  But arbitrary methods that the browser has
> no clue about the semantics?  Why would any client software want to
> send supposed "security credentials" like cookie on a method without
> knowing the semantics?

It's for the server to determine what constitutes a "credential", not
the client.

> >    2. The browser has a special rule for TRACE.
> Yes.

I agree :)

> >This assumes the browser did not recognise XMLHttpRequest + CONNECT as
> >different from other request methods.
> That isn't possible -- the syntax of the CONNECT request is different
> from that of other requests.  Any client that didn't understand it
> would send an invalid message that the proxy will reject.

Ah.  Point to you.

Valid CONNECT looks like:

    CONNECT some.domain:443 HTTP/1.1

whereas valid GET/POST/etc. look like:

    GET /some/path HTTP/1.1

or:

    GET http://some.domain/some/path HTTP/1.1

There is no other difference in the requests - headers can be the same.

The presence of "/" somewhere in the request-URI makes all the difference.

Seems a tad delicate that security hinges on that, but it works.
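The distinction can be made concrete with a small classifier.  This is
a sketch (the function name and labels are my own, not from any spec or
library): a CONNECT target is a bare authority with no "/", while the
request-URIs other methods use, whether absolute path or absolute URI,
always contain one.

```python
def request_form(request_line):
    """Classify an HTTP/1.1 request-line by the shape of its request-URI.

    Hypothetical helper illustrating the point above: the presence or
    absence of "/" in the request-URI is what separates CONNECT's
    authority form from everything else.
    """
    method, uri, version = request_line.split(" ")
    if "/" not in uri:
        return "authority"   # e.g. CONNECT some.domain:443
    if uri.startswith("/"):
        return "origin"      # e.g. GET /some/path
    return "absolute"        # e.g. GET http://some.domain/some/path
```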

> A browser should never execute arbitrary, unapproved methods, for
> the same reasons that a browser should not execute arbitrary code.

The entire _point_ of XMLHttpRequest is to support the browser
executing arbitrary code with unknown effects, within limitations:
scripts supplied by a server, running in a sandbox.

> >    Implementations which conform to this recommendation support at
> >    minimum GET and POST requests to the same domain without user
> >    intervention.  Cookies, Accept etc. will be sent with these
> >    requests as with ordinary page requests.  (Script authors can
> >    depend on this).
> No, POST requests are inherently unsafe.  They cannot be made without
> user intervention -- it violates all the other Web specs.

Specs: If anything, the necessary and common use of GET to send
state-changing messages to a resource from scripts is a violation of
the HTTP specs.  I would have thought POST was the appropriate way to
do _all_ state-changing messages to a resource: they are messages sent
to the resource for processing after all, not messages to retrieve the
resource's current representation.
As RFC 2616 says: GET is to "retrieve whatever information (in the form
of an entity) is identified by the Request-URI", while POST is for
"Providing a block of data, such as the result of submitting a form,
to a data-handling process".
Most web apps use GET to perform the latter function.  If POST
required user intervention per message, they would _have_ to use GET
to send state-changing messages asynchronously.  That may violate the
specs, but it is the whole reason XMLHttpRequest exists.
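The RFC 2616 division of labour quoted above can be sketched with a
toy resource.  This is a hypothetical example (the class and counter
are mine, not from the spec): GET only retrieves the entity identified
by the Request-URI, while POST hands a block of data to the resource
for processing, and so changes its state.

```python
# Hypothetical resource illustrating RFC 2616's split between the two
# methods: GET is safe retrieval; POST processes submitted data.

class CounterResource:
    def __init__(self):
        self.value = 0

    def GET(self):
        # Safe: retrieves the current entity, no side effects.
        return str(self.value)

    def POST(self, body):
        # State-changing: the submitted data is processed by the
        # resource, and the resource's state is updated.
        self.value += int(body)
        return str(self.value)
```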

If by user intervention you mean something less intrusive, like
"configure the browser to allow scripts to use POST", that would be a
rather misleading use of "user intervention".  Every browser feature,
including "running scripts" and "allowing GET", is equally covered by
that kind of configuration.
Security: POST requests are no more or less safe than GET requests,
when applied to a resource with known behaviour.  Since that's the
point of XMLHttpRequest (and the cross-domain restriction), no
problem.  They can and do both have insecure side effects when applied
to a resource with unknown behaviour.

> The spec should say why methods exist and
> that only known safe methods can be used without user intervention.
> (Intervention includes such things a specific configuration prior
> to running the application, not just pop-up boxes.)

It would be daft for the recommendation to say nothing at all about
what methods web site authors should expect to be commonly supported.

I think that would be ignoring reality.  The entire practical purpose
of XMLHttpRequest is to permit the deployment of applications with
minimal or no user configuration.

In particular, as you raise the point of POST requiring "user
intervention": if there really is a need for user intervention, the
recommendation ought to give guidance on what that means.

> That is what HTTP and HTML already requires.  What it should not do
> is list a small set of methods and say implementations MUST (NOT)
> implement them -- that is none of your business and simply sets up
> the implementers to be fooled by unexpected extensions.

I agree.

-- Jamie
Received on Monday, 12 June 2006 11:16:03 UTC
