Re: ISSUE-20: Client and Server model [Access Control]

At the risk of extending a discussion I haven't been tracking that closely, I want to address Tyler and Doug's comments directly.

Tyler wrote:

> I see no advantage to placing this logic in the client, as opposed to the
> server. Placing the logic in the client introduces significant complexity
> which creates many opportunities for implementation bugs, specification
> ambiguity and misunderstanding by web application developers, while possibly
> limiting the kinds of policies a server can enforce.

Doug wrote:

> Ideally, the server should be responsible for determining how it dispenses its
> data. Unfortunately, the Same Origin Policy has in many cases induced the
> abdication of the server's responsibility. The current proposal extends this bad
> practice. The server sends a policy statement with the data to the browser. The
> browser must interpret the policy statement and decide whether or not to deliver
> the data to the application. I think this is perverse. The server should not be
> putting bits on the wire that it does not want delivered. The proposal
> encourages bad practice.
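
To make the mechanism Doug describes concrete, an exchange under the current proposal looks roughly like this (the Access-Control header approximates the draft's syntax; the hosts and payload are invented for illustration):

    GET /report.xml HTTP/1.1
    Host: data.example.org
    Referer: http://app.example.com/page.html

    HTTP/1.1 200 OK
    Content-Type: application/xml
    Access-Control: allow <app.example.com>

    <report>...payload the server has already put on the wire...</report>

The bits are on the wire by the time the policy is evaluated; it is the browser that reads the header and decides whether script on app.example.com may see the body. That division of responsibility is exactly what is at issue.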

I believe the suggestion to move this logic to the server flips around the trust model established by web browsers operating in a sandbox.

The concept (which may have gotten a bit lost in all the intro-text rewrites) is that the browser operates in a sandbox.  Because the web browser runs on a personal desktop, it has access to resources that the application running inside it should not: everything from local documents to system CPU time and usage, memory tables, and network ports.

The user (and, more to the point, the corporate IT department) trusts the browser to enforce that sandbox.  In particular, the corporate IT department wants to protect corporate IP and documents such as those living on disk, in network filesystems, and on intranet HTTP servers.

You could require the browser to provide verifiable HTTP_REFERER information and then have the server restrict documents that way.  Alternatively, you could require that users present authenticated tokens before accessing any document.  If either of those existed, you could "make the web secure for cross-domain access", provided that every server accessible from any web browser did its enforcement properly.
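
As a sketch of what that server-side enforcement would entail (hypothetical code; the allow list and function name are mine, and it assumes a Referer you could actually trust):

    // Hypothetical server-side check, assuming the browser-supplied Referer
    // could actually be trusted.  Today it cannot: it is easily spoofed,
    // stripped by proxies, or suppressed by privacy tools.
    const allowedOrigins = ["https://app.example.com", "https://partner.example.net"];

    function mayServeDocument(refererHeader: string | undefined): boolean {
      if (refererHeader === undefined) return false; // no identity, no decision
      try {
        const origin = new URL(refererHeader).origin;
        // Every server would have to maintain a list like this, correctly and
        // continuously, for every document it hosts -- requirement (b) below.
        return allowedOrigins.includes(origin);
      } catch {
        return false; // malformed Referer header
      }
    }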

I like the purity of that model, and I generally believe that in a world with less and less trust, servers need to isolate themselves.  However, it isn't pragmatic, and it would require a fundamental shift in the way web security is understood to work.

In particular, moving to server-based access-control requires:

a) browsers to provide verifiable REFERER, unique-user, or other equivalent identity information
b) every server to validate requests properly and keep its access lists up to date at all times
c) reconceiving the web as a federation of "closed" networks that grant access selectively

The beauty of the browser sandboxing model is that it has allowed the "web" of servers in the world to remain largely wide-open.  The unfortunate sacrifice is that servers that want to be even more promiscuous and share their data more widely for cross-site requests are restricted.

This specification does not attempt to change the browser sandboxing model or require that existing servers be more restrictive.  It preserves the same assumptions that webserver administrators have come to expect, but it enables a very potent use-case if the server administrator or document author chooses to be even more promiscuous with their data.
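
To illustrate that division of labor, the browser-side check amounts to something like the following (a sketch: the "allow <...>" parsing approximates the draft's syntax, and the function name and matching logic are simplified, not taken from the spec):

    // Sketch of the client-enforcement model: the server stays wide-open and
    // merely labels responses it is willing to share; the browser sandbox
    // withholds the body from cross-site script unless the label matches.
    // Matching here is deliberately simplified (no wildcards, excludes, or ports).
    function mayExposeToPage(pageHost: string, policyHeader: string | null): boolean {
      if (policyHeader === null) return false; // no opt-in: same-origin rules apply
      const match = policyHeader.match(/allow\s+<([^>]+)>/);
      if (match === null) return false;
      return match[1] === "*" || match[1] === pageHost;
    }

    // e.g. mayExposeToPage("app.example.com", "allow <app.example.com>") -> true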

--Brad

Received on Wednesday, 9 January 2008 14:38:21 UTC