
Re: The ability to automatically upgrade a reference to HTTPS from HTTP

From: Tim Berners-Lee <timbl@w3.org>
Date: Sat, 23 Aug 2014 23:01:14 -0400
Cc: Public TAG List <www-tag@w3.org>, SW-forum Web <semantic-web@w3.org>
Message-Id: <4D8494AC-8D54-4242-ACB3-3711833C2684@w3.org>
To: Michael Brunnbauer <brunni@netestate.de>

On 2014-08-23, at 20:32, Michael Brunnbauer <brunni@netestate.de> wrote:

> 
> Hello Tim,
> 
>> I'm not sure I understand your argument.
>> That's fine if they have the same content for http and https
> 
> [...]
> 
> So if an administrator has 10 HTTP/1.1 sites on the same IP and wants
> to add a https version of one of those sites, what does he do? Will he
> create a SSL version for every site in the configuration although all but
> one of them will be useless and lead to a certificate error? Of course not.

You are referring, I think, to the problem with HTTPS virtual hosting in general. With SSL and X.509 as originally designed, virtual hosting does not work: the server has to present a certificate before it sees the Host header, so it cannot tell which of the sites on that IP the client actually wants. That is a general problem with HTTPS. There are many reasons you can point to why using HTTPS is a pain, but that is a separate issue.

(See e.g. http://www.crsr.net/Notes/Apache-HTTPS-virtual-host.html,
https://www.digitalocean.com/community/tutorials/how-to-set-up-multiple-ssl-certificates-on-one-ip-with-apache-on-ubuntu-12-04, and
https://en.wikipedia.org/wiki/Server_Name_Indication, etc.)

I wonder what stage SNI adoption is at.
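
For what it is worth, here is a rough sketch (Python, with a placeholder hostname) of what an SNI-aware client does: it names the host it wants during the TLS handshake, so a server with several sites on one IP can pick the matching certificate, and ssl.HAS_SNI tells you whether the local library supports the extension at all.

import socket
import ssl

print("Client SNI support:", ssl.HAS_SNI)

host = "example.org"  # placeholder virtual host
context = ssl.create_default_context()

with socket.create_connection((host, 443)) as sock:
    # server_hostname sends the SNI extension, letting a server that
    # hosts several sites on one IP choose the matching certificate.
    with context.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()
        print("Negotiated certificate subject:", cert.get("subject"))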

You suggest that if clients try to just add an 's' to an existing URL, then because of the HTTPS virtual hosting problem they will often find an HTTPS server for a different domain answering instead, with an untrusted certificate, because the server admin had no simple option but to configure it that way.
Now I understand your point, I think.
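
To make that concrete, here is a rough sketch (again Python, with placeholder URLs) of the "just add an s" behaviour under discussion: try the https:// form first, and fall back to the original http:// reference when the certificate does not check out, which is exactly what happens when the wrong virtual host answers.

import ssl
import urllib.error
import urllib.request

def fetch_with_optional_upgrade(url):
    # Opportunistically upgrade an http:// reference to https://.
    if url.startswith("http://"):
        https_url = "https://" + url[len("http://"):]
        try:
            return urllib.request.urlopen(https_url, timeout=10)
        except (urllib.error.URLError, ssl.SSLError, ssl.CertificateError):
            # Certificate mismatch (e.g. another virtual host answered)
            # or no HTTPS listener at all: keep the original reference.
            pass
    return urllib.request.urlopen(url, timeout=10)

response = fetch_with_optional_upgrade("http://example.org/")  # placeholder
print(response.geturl(), response.status)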

Tim





Received on Sunday, 24 August 2014 03:01:23 UTC
