- From: Nathan <nathan@webr3.org>
- Date: Tue, 01 Mar 2011 21:20:50 +0000
- To: Glenn Maynard <glenn@zewt.org>
- CC: Anne van Kesteren <annevk@opera.com>, WebApps WG <public-webapps@w3.org>
Glenn Maynard wrote:
> On Tue, Mar 1, 2011 at 3:33 PM, Nathan <nathan@webr3.org> wrote:
>> (rather than controlled only "by user agents which choose to follow the specs" offering
>> an artificial screen).
>
> If user agents deliberately ignore the specs to allow embedding where
> authors don't want it to, they can do it with any model--Referer,
> Origin, From-Origin, etc. They all depend on UA cooperation.
>
> In practice, as long as most browsers support it and enable it by
> default, that's enough to discourage people from embedding resources
> from sites that don't want them to.
>
>> However, on this specific draft, is there any chance you can move to a
>> white-list/black-list model, where people can send either Allow-Origin or
>> Deny-Origin, for instance in many scenarios I want to allow everyone except
>> origins A and B who I know consistently "steal" bandwidth, or display my
>> resources beside unsavoury ones.
>
> Sending whitelists in a header makes sense to me, but sending
> blacklists with every request doesn't scale--such a list could easily
> end up having dozens of entries, bloating the headers for every
> request. You may not actually want to expose your entire blacklist to
> the public, either.
>
> Blacklisting does seem like a fair use case, though; it often makes
> sense to want to block particularly abusive sites, without blocking
> everyone.

Yes, hence suggesting to offer both; let the resource owners manage it how they want :)
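For concreteness, a minimal sketch of the allow-list/deny-list idea discussed above, assuming the Allow-Origin / Deny-Origin names proposed in this thread (not headers from any published spec) and a check performed by a cooperating user agent or a server-side filter:

    # Minimal sketch, assuming the hypothetical Allow-Origin / Deny-Origin
    # model from this thread. The origins below are illustrative only.

    ALLOW = set()                                  # empty allow-list = allow everyone not denied
    DENY = {"https://hotlinker-a.example",         # origins that consistently "steal" bandwidth
            "https://hotlinker-b.example"}

    def embedding_permitted(request_origin):
        """Return True if the embedding origin may load the resource."""
        if request_origin in DENY:
            return False
        # An empty allow-list means "allow all except the denied origins",
        # matching the "allow everyone except origins A and B" use case above.
        return not ALLOW or request_origin in ALLOW

    if __name__ == "__main__":
        for origin in ("https://hotlinker-a.example", "https://blog.example"):
            print(origin, "->", embedding_permitted(origin))

Keeping the deny-list on the server side, as in this sketch, would also avoid the header-bloat and disclosure concerns Glenn raises above.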
Received on Tuesday, 1 March 2011 21:22:45 UTC