Re: Why the restriction on unauthenticated GET in CORS?

On 20 Jul 2012, at 18:59, Adam Barth wrote:

> On Fri, Jul 20, 2012 at 9:55 AM, Cameron Jones <> wrote:
>> On Fri, Jul 20, 2012 at 4:50 PM, Adam Barth <> wrote:
>>> On Fri, Jul 20, 2012 at 4:37 AM, Cameron Jones <> wrote:
>>>> So, this is a non-starter. Thanks for all the fish.
>>> That's why we have the current design.
>> Yes, I note the use of the word "current" and not "final".
>> Ethics are a starting point for designing technology responsibly. If
>> the goals cannot be met for valid technological reasons then that is
>> an unfortunate outcome and one that should be avoided at all costs.
>> The costs of supporting legacy systems have real financial implications
>> notwithstanding an ethical ideology. If those costs become too great,
>> legacy systems lose their impenetrable pedestal.
>> The architectural impact of supporting non-maintained legacy
>> systems is that web proxy intermediaries are something we will all have
>> to live with.
> Welcome to the web.  We support legacy systems.  If you don't want to
> support legacy systems, you might not enjoy working on improving the
> web platform.

Of course, but you seem to want to support hidden legacy systems, that is, systems none of us know about or can see. It is still a worthwhile inquiry to find out how many systems there are for which this is a problem, if any. That is:

  a) systems that use non-standard internal IP addresses
  b) systems that use IP-address provenance for access control
  c) ? potentially other issues that we have not covered

Systems with a) are going to be very rare, it seems to me, and the question would be whether they could not move over to standard internal IP addresses. Perhaps IPv6 makes that easy.
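The distinction in a) can be made concrete: addresses in the standard private ranges (RFC 1918, loopback, link-local) are mechanically detectable, so a user agent or proxy could in principle flag them before forwarding a cross-origin request, while systems on non-standard internal addresses get no such automatic recognition. A minimal sketch in Python (the function name and approach are mine, not anything specified by CORS):

```python
# Sketch: detect whether a host resolves to a standard internal address.
# is_internal() is a hypothetical helper, not part of any CORS spec.
import ipaddress
import socket

def is_internal(host: str) -> bool:
    """Return True if host resolves to a private, loopback,
    or link-local address (the standard internal ranges)."""
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False
    for _family, _type, _proto, _canon, sockaddr in infos:
        addr = ipaddress.ip_address(sockaddr[0])
        if addr.is_private or addr.is_loopback or addr.is_link_local:
            return True
    return False
```

A system sitting on, say, a public address block used internally would return False here, which is exactly why case a) cannot be handled mechanically.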

It is not clear that anyone should bother with designs such as b); that is bad practice anyway, I would guess.

  Anything else?


> Adam

Social Web Architect

Received on Friday, 20 July 2012 18:58:57 UTC