- From: Brad Hill <hillbrad@gmail.com>
- Date: Wed, 25 Feb 2015 18:00:15 +0000
- To: Tim Berners-Lee <timbl@w3.org>
- Cc: Anne van Kesteren <annevk@annevk.nl>, WebAppSec WG <public-webappsec@w3.org>
- Message-ID: <CAEeYn8jma1rwLAOX-12RAyqhRW11pF4L_U7M6R7xTo6RaFDYqQ@mail.gmail.com>
Thanks for the reply. URLs/URIs as data is indeed a tricky use case.

The WebAppSec WG is about to publish a first public draft of the following (unofficial draft): https://w3c.github.io/webappsec/specs/upgrade/

I hope it addresses some of these concerns and presents an easier path forward by allowing resources to declare that they support and prefer secure connections, so that clients able to handle such declarations will transparently upgrade requests. In combination with another soon-to-be-published draft, for CSP Pinning, it should be possible to declare such a policy for an entire domain and its subdomains and have user agents remember it. This should enable a great many more sites to bring their existing content into a meaningfully secure context for users, which I hope is the direction we want to go, rather than creating new ways to downgrade the security context of applications with insecure legacy links to resources that do actually have secure representations available.

I think any additional context and improvements that RDF and similar experts can provide on the suitability of this mechanism for their use cases would be very welcome.
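As a rough illustration of what the draft's opt-in could look like on the wire (this is a sketch, not text quoted from the draft: the directive and header names here are assumptions that may differ from what the draft or the eventual spec uses, and example.org is just a placeholder host), a resource that prefers secure connections would send something like:

    HTTP/1.1 200 OK
    Content-Security-Policy: upgrade-insecure-requests

while a client that supports the mechanism advertises that on its requests and then rewrites "http:" subresource references to "https:" before fetching them:

    GET /data HTTP/1.1
    Host: example.org
    Upgrade-Insecure-Requests: 1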
On Wed Feb 25 2015 at 8:04:01 AM Tim Berners-Lee <timbl@w3.org> wrote:

> On 2015-01-05, at 17:55, Brad Hill <hillbrad@gmail.com> wrote:
> >
> > On Mon Jan 05 2015 at 3:26:59 AM Tim Berners-Lee <timbl@w3.org> wrote:
> >
> >> Data is special
> >>
> >> I am a web app developer, I need to be able to access any data.
> >> I am happy to and indeed want to secure the scripts and HTML and CSS which are part of my app.
> >> I am happy to secure access to data which I control and serve.
> >> I need to be able to access legacy insecure data like the Linked Open Data cloud (http://lod-cloud.net/).
> >
> > Are there particular obstacles to the providers of this data making it available over HTTPS, or other reasons why we should assume that, over time, they will not do so?
>
> Yes... a huge interconnected mass of linked data in which the terms (the predicates and the classes) are all URIs starting with "http:".
>
> This includes data which has been archived, examples in academic papers, code which no-one is in a position to change.
>
> There is a lot of open data in CSV and JSON as well as RDF which is served from "http:" only. But the RDF case makes it very clear, so let us use RDF as an example. The most fundamental predicate in RDF is rdf:type, which connects something and its class. For example, when you write in N3 or Turtle
>
>     <https://timbl.rww.io/foo#alice> a <http://xmlns.com/foaf/0.1/Person> .
>
> the language spec defines that the 'a' stands for <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> and so you meant
>
>     <https://timbl.rww.io/foo#alice> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://xmlns.com/foaf/0.1/Person> .
>
> That is simply not going to change without a massive amount of damage. These URIs are used as identifiers, not as addresses. As they should be. <grumble>The folks who insisted on the term "URL" have a lot to answer for here. These are identifiers, not locations. Don't change them. Change the way you look them up gently with time.</grumble>
>
> It is imperative to upgrade what happens when you look up an "http:" URI, and not require people to change to using "https:".
>
> > Are the providers of this data actually making an effort to make it usable in client-side web platform mashups? (e.g. setting CORS headers?)
>
> Yes and no. There was a big push to get CORS headers added.
>
> http://enable-cors.org/
>
> Most of the sites which are actively maintained added CORS. There are a handful of holdouts, people we have not been able to reach, or who did not bother, or who did not have the authority, etc.
>
> I expect if we now ask people to roll out an upgrade of "http:" so that Apache 2.n+1 has it on by default, also node, etc., then we will get a reasonable uptake, but again some hold-outs.
>
> > I went to http://lod-cloud.net/, picked the first resource listed on the home page and loaded the example resource (http://data.linkededucation.org/resource/lak/conference/lak2013/paper/93). It is indeed not accessible over HTTPS, but neither does it return CORS headers, so it would still require proxying or a native app for client-side mashups.
>
> (Was CORS a mistake? It has certainly been a royal pain. Should we instead have asked everyone whose data was protected implicitly by a firewall to add headers, or only made the CORS rules apply within NAT nets like 192.* etc.?)
>
> Well, CORS is now a requirement for any public data. So, Brad, your duty is to call the guy up and tell him. Or someone has to.
>
> (Googling "CORS Everywhere" I found https://gitlab.com/spenibus/cors-everywhere-firefox-addon/blob/master/readme.txt and laughed.)
>
> > It seems there is an educational outreach campaign needed to data providers on best practices and necessary steps to enable their data to be used in the web platform, so shouldn't that include making the data available over HTTPS alongside setting an "Access-Control-Allow-Origin: *" header?
>
> Well, the first thing is to fix browsers so that if they find an ostensibly secure origin loading insecure data, they just downgrade the origin to being deemed insecure rather than blocking it. Change the UI so that it doesn't get the green happy certified look to it.
>
> Second thing is to roll out HTTP to HTTP/TLS over port 80 using connection upgrade.
>
> That will take some time for the last 10%, but for those who are doing update cycles it will be fairly quick.
>
> But it is less hassle than changing all the HTTP links.
>
> > -Brad Hill
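For reference, the "connection upgrade" mentioned above is presumably the HTTP Upgrade-to-TLS mechanism of RFC 2817, which keeps "http:" URIs and port 80 intact. A minimal sketch of that exchange (example.org is a placeholder host):

    OPTIONS * HTTP/1.1
    Host: example.org
    Upgrade: TLS/1.0
    Connection: Upgrade

    HTTP/1.1 101 Switching Protocols
    Upgrade: TLS/1.0, HTTP/1.1
    Connection: Upgrade

After the 101 response, the TLS handshake runs on the same connection and subsequent requests travel over TLS, without any of the "http:" identifiers changing.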
Received on Wednesday, 25 February 2015 18:00:48 UTC