[whatwg] ContextAgnosticXmlHttpRequest: an informal RFC

> In any case they couldn't if they wanted to, as the referrer could be
> anything -- for example, it could be a hotmail.msn.com referrer if an
> employee is chatting to another employee using his hotmail account and
> follows a link from such a mail to the other employee's (confidential)
> documents inside the intranet.

Well, the value of the Referer header I'm talking about in this case
would always be the URI of the document originating the
ContextAgnosticXmlHttpRequest, NOT the *document*'s referrer. Based on
that requirement, I should be able to rely on this header to protect
my service. But I agree that existing firewalled syndication services
without referrer checking in place would still be vulnerable, and we'd
be introducing a significant risk.
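
To make that concrete, here's a rough sketch of the kind of guard a
service could then put in place on its end. Everything here is
hypothetical (the host list, the function name); it only assumes the
UA always sets Referer to the URI of the requesting document:

    // Hypothetical server-side guard, assuming the UA always sets
    // Referer to the URI of the document that issued the
    // ContextAgnosticXmlHttpRequest. Host names are made up.
    const TRUSTED_HOSTS = ["intranet.example.com"];

    function isTrustedReferer(referer: string | null): boolean {
      if (referer === null) return false;        // no header: refuse
      try {
        const host = new URL(referer).hostname;
        return TRUSTED_HOSTS.some(
          (t) => host === t || host.endsWith("." + t)
        );
      } catch {
        return false;                            // unparsable URI: refuse
      }
    }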

Point taken about text/xml also being a valid XHTML MIME type, woops.

How about requiring a service to set an extra HTTP header in order to
offer its content to "foreign" hosts:

X-Allow-Foreign-Host: All | None | .someforeigndomain.com |
                      .somehost.someforeigndomain.com
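
So a service opting in for a single partner domain might respond with
something like this (host names hypothetical):

    HTTP/1.1 200 OK
    Content-Type: text/xml
    X-Allow-Foreign-Host: .partner.example.com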

Unless this HTTP header is sent as part of the HTTP response, the User
Agent would "crap out" with some generic, obscure "data retrieval
failure" message (per what you mentioned). If eBay suddenly wanted to
expose their REST API to ContextAgnosticXmlHttpRequests, they could
enhance their service to send the additional X-Allow-Foreign-Host
header. Today's intranet XML/HTTP services should remain reasonably
safe, as none of them would be sending this header, so all attempts to
load them from foreign documents would obscurely crap out.
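
Here's a rough sketch of the User-Agent-side gate I'm picturing. I'm
assuming the header carries either All, None, or a |-separated list of
domain suffixes, per the grammar above; function and variable names
are made up:

    // Hypothetical UA-side check: documentHost is the host of the
    // document issuing the ContextAgnosticXmlHttpRequest.
    function foreignLoadAllowed(
      allowHeader: string | null,
      documentHost: string
    ): boolean {
      if (allowHeader === null) return false;  // header absent: crap out
      const value = allowHeader.trim();
      if (value === "None") return false;
      if (value === "All") return true;
      // Otherwise treat the value as domain suffixes like
      // .someforeigndomain.com and match the requesting host.
      return value
        .split("|")
        .some((suffix) => documentHost.endsWith(suffix.trim()));
    }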

All this, I believe, tends to bleed into your own idea of establishing
some sort of trust relationship. To that end, I need to spend more
time grokking section 11.4 of your document. I think I'm getting there.

And yes, I did have a mild brain malfunction with that reserved-port
idea regarding firewalling in the first place; woops #2.

>> 2) ensuring its validity as a pure parsable XML document

> Not sure what you mean by this. If you mean "The XML document must be
> well-formed before it can be parsed into a DOM" then that's true already.

I wasn't familiar enough with all flavors of XmlHttpRequest
implementations to be 100% sure that broken/invalid XML could
absolutely not, in any way, be retrieved as, say, a good ol' string,
even if a DOM couldn't be hacked out of it. I was basically trying to
further limit the types of documents you could ever retrieve to
well-formed XML documents, so that no random text or Tag Soup HTML
document could be arbitrarily leeched.
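
In other words, something like this on the UA side: parse the payload
as XML first, and if it isn't well-formed, hand back nothing at all,
not even the raw string. A rough sketch (DOMParser signals a failed
XML parse by producing a parsererror document, though the details vary
between browsers):

    // Hypothetical gate: only well-formed XML ever leaves the UA.
    function gateResponse(body: string): Document | null {
      const doc = new DOMParser().parseFromString(body, "text/xml");
      if (doc.getElementsByTagName("parsererror").length > 0) {
        return null;  // not well-formed: withhold DOM and string alike
      }
      return doc;
    }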



-chris


-- 
Chris Holland
http://chrisholland.blogspot.com/
