
Re: Examining the 'no server modification' requirement

From: Brad Porter <bwporter@yahoo.com>
Date: Thu, 10 Jan 2008 21:46:10 -0800 (PST)
To: Mark Nottingham <mnot@yahoo-inc.com>, "WAF WG \(public\)" <public-appformats@w3.org>
Message-ID: <752505.1142.qm@web53504.mail.re2.yahoo.com>
The no-server-modification requirement originally arose in the voice browser working group out of the participant companies' practical experience that in many large IT organizations, the website environment is a shared environment that supports segmented ownership of documents and content, but not segmented ownership of website configuration.  Given that this is a per-resource policy, it made sense to associate the policy metadata directly with the resource.  This is why the original NOTE focused only on the Processing Instruction and did not include the HTTP headers.
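For readers who haven't seen the NOTE's approach: the policy traveled inside the XML document itself, so the content owner could set it without touching server configuration.  Roughly like this (a paraphrased sketch; the exact pseudo-attribute names should be checked against the 2005 NOTE):

```xml
<?xml version="1.0"?>
<?access-control allow="*.example.org" deny="private.example.org"?>
<stocks xmlns="http://example.org/stocks">
  <quote symbol="EXMP">42.17</quote>
</stocks>
```

The browser reads the processing instruction before exposing the data cross-site, so the author of the file controls the policy even on a host where only file uploads are possible.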

Further, voice browser working group participants identified a number of cases where the resources are static and can properly be cached.  Proper HTTP caching was heavily used for static-only content in the voice browser realm, given the tighter response-time requirements expected on the phone.  Therefore, requiring server validation would potentially force sites to invoke a dynamic pathway for static XML data or, worse, eliminate effective caching altogether.

The alternative server-based security model hadn't been proposed at that time, and the prevailing assumption in 2005/2006 was that the browser was responsible for sandboxing: any modification to same-origin was a modification to browser sandboxing and therefore should be addressed entirely on the browser side.  In my observation, trends in the security space and the greater prevalence of fraudulent behavior on the web have, in the past two years, led to a greater emphasis on protection at the server rather than the client.  Site keys, CAPTCHAs, etc. are good examples.

I think the other correlated requirement was that the existing assumptions about browser sandboxing remain valid.  In particular, if the server does nothing, it can assume that the same-origin policy is still protecting those resources from improper access by a third-party application executing in a browser inside a protected network zone.

I wonder to some extent whether this entire debate could be addressed by including functionality in the access-control specification that would allow the server to also perform the validation if it chooses.  A solution where both the browser and the server enforce the policy may ultimately be the strongest.  This would let webmasters feel they have some control, but also prevent browser vendors from being blamed when web servers accidentally expose all their data by improperly implementing the server-side gate.
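The "both ends enforce" idea can be sketched as follows: the server validates the caller's declared origin against an allowlist before serving the data, while the browser's sandbox check still applies independently.  All names here are illustrative, not taken from any access-control draft:

```python
# Sketch of server-side enforcement alongside the browser sandbox.
# The server only releases cross-site data when the requesting site's
# declared origin is on its allowlist; an origin can be spoofed by a
# non-browser client, which is why the browser-side check remains.

ALLOWED_SITES = {"https://partner.example.com"}

def serve_cross_site_data(request_headers, payload):
    """Return (status, body): the payload only for allowlisted callers."""
    origin = request_headers.get("Origin") or request_headers.get("Referer", "")
    if any(origin.startswith(site) for site in ALLOWED_SITES):
        return 200, payload
    return 403, b""
```

Even a naive gate like this would catch the failure mode Brad mentions: if the server forgets it entirely, the browser-side policy still holds, and vice versa.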


Mark Nottingham <mnot@yahoo-inc.com> wrote: 
I'd like to take a closer look at the 'no server modification'  
requirement, because the more that I look at it, the more I don't  
understand why it should drive the architecture here.

As I understand it, the motivation for this is where a publisher wants  
to make data available cross-site, but doesn't have access to the  
server except to upload files; they cannot modify server configuration.

Looking around at the current Web hosting landscape, this is a pretty  
small set of resources. Consider;
    - Almost every commodity Web hosting provider (e.g., pair.com,  
dreamhost, 1&1, etc.) allows shell access and .htaccess modifications  
for mod_rewrite, as well as scripting, for a very small amount of money.
    - Those that don't offer shell access often still provide  
scripting (e.g., Perl, PHP). Both Movable Type and WordPress can be  
deployed using FTP only.
    - Even University accounts offer shell access (that's the point of  
a university account, after all), and usually offer .htaccess.
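To make the .htaccess point concrete: on a commodity host that permits per-directory overrides, attaching a response header to a static file needs no shell access at all.  Something like the following, assuming mod_headers is enabled (the header name and value here are illustrative only, since the spec's header syntax was still in flux at the time):

```
<Files "stocks.xml">
    Header set Access-Control "allow <*>"
</Files>
```

That is an upload-a-text-file operation, the same skill level as uploading the XML itself.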

So, AFAICT, the only use case that really drives this requirement is  
for free shared hosting providers, a la GeoCities. It's not an  
adoption argument; as mentioned, MT and WP can both be deployed with  
just FTP, and their need to run an executable hasn't hindered  
deployment at all.

I'm struggling to imagine a situation where someone would want to host  
cross-site data on this type of site, being unable to do it elsewhere.  
What am I missing?

Also, from a policy standpoint, I do wonder whether a site that doesn't  
allow modification of server configuration, shell access, etc. would  
want to allow cross-site requests to the things it hosts; AIUI most  
of these services already spend considerable resources (checking  
Referers, etc.) to assure that their accounts aren't used for  
"bandwidth stealing".

Any illumination would be most appreciated.


Mark Nottingham       mnot@yahoo-inc.com
Received on Friday, 11 January 2008 05:46:27 UTC
