
RE: Improvements for the ontology

From: Peter Williams <home_pw@msn.com>
Date: Mon, 14 Mar 2011 18:24:41 -0700
Message-ID: <SNT143-w19DC5AC2B79E8EE189D8DE92CF0@phx.gbl>
To: "public-xg-webid@w3.org" <public-xg-webid@w3.org>
CC: <foaf-protocols@lists.foaf-project.org>

I went a little further today coding my simple https client and https server, both simple command-line processes. In some sense they are trivial versions of openssl's s_client and s_server tools. What is interesting is that all the code to do this is built into Windows, with no additional libraries. The core platform is stripped of all notions of PKI, delivering pure X.509 types and crypto; everything works fine with self-signed certs of chain length 1. The same Windows platform obviously also supports much more advanced integrations - like browsers - leveraging full-power PKI, EV, and more advanced SSL use cases targeting hypermedia documents. It's excellent to see "just raw X.509" delivered globally.
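The author's tooling is Windows/.NET, but the "just raw X.509" idea - skip PKI chain building entirely and let the application look at the peer certificate itself - can be sketched with Python's ssl module. This is an illustrative analogue, not the author's code; the function name is mine.

```python
import ssl

def make_raw_x509_client_context():
    # Illustrative sketch of "just raw X.509": disable hostname checking and
    # chain validation so a self-signed cert of chain length 1 is accepted,
    # leaving the application to inspect the peer certificate bytes itself
    # (e.g. via SSLSocket.getpeercert(binary_form=True) after the handshake).
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False          # must be cleared before CERT_NONE
    ctx.verify_mode = ssl.CERT_NONE     # no PKI chain building at all
    return ctx
```

Clearing `check_hostname` before setting `CERT_NONE` matters: Python refuses to combine hostname checking with disabled verification.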
In what is really quite trivial code, the https client lets the Windows user pick a cert for client authn using the Windows cert/keyset chooser dialog. When https presents that (self-signed) client cert to the server, the underlying SSL channel fires off my own validation class in the https server. Likewise, the server's own (self-signed) cert invokes my own validation class in the https client. Obviously, those validation classes should implement the steps of the validation agent, using some implementation stratagem.
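The server side of this - ask the client for a cert during the handshake, then hand the raw cert to application-level validation rather than the platform's PKI - has a rough Python analogue. A minimal sketch (names mine, not the author's .NET classes):

```python
import ssl

def make_client_auth_server_context():
    # Illustrative sketch: a server context that *requests* a client
    # certificate during the TLS handshake, the Python analogue of wiring a
    # custom validation class into the SSL channel described above.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    # CERT_OPTIONAL asks the client to present a cert without aborting the
    # handshake when none arrives; application code then plays the role of
    # the "validation agent" by examining getpeercert() itself.
    ctx.verify_mode = ssl.CERT_OPTIONAL
    return ctx
```

Note that Python's stdlib still runs OpenSSL's chain checks on a presented cert, so a fully self-signed-friendly server would also need a CA store containing the expected self-signed certs; the .NET/SChannel stack gives the validation callback more direct control here.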
The last step is now to host this server in the Azure cloud. This involves indirectly hosting the webapp on IIS, in a VM, where the webapp executes in an Azure hosting container - behind the cloud's firewalls, load balancers, etc. The aim is now to understand whether the client cert is indeed still presented to the webapp in that Azure-centric hosting environment (and whether my validation class for client-cert "received" events is still called).
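When a front-end terminates or proxies the TLS connection, one common pattern is for the fabric to forward the client certificate to the backend as a base64-encoded DER blob in a request header (Azure's IIS/ARR front ends use `X-ARR-ClientCert` for this). A hedged sketch of what the webapp-side check might look like - the header name is an assumption about the hosting setup, not something the mail confirms:

```python
import base64

def client_cert_from_headers(headers):
    # Illustrative: recover the client certificate a cloud front end may have
    # forwarded to the backend webapp. Returns the DER bytes, or None when no
    # cert was presented (which is exactly what the experiment above probes).
    b64 = headers.get("X-ARR-ClientCert")  # assumed header name (IIS/ARR)
    if b64 is None:
        return None
    return base64.b64decode(b64)
```

If this returns None behind the load balancer while the direct connection yields a cert, the fabric is eating the client-auth handshake.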
With that proven, it's worth playing with more advanced ideas - exploiting the Azure service bus, perhaps. This experiment would seek to test that a pool of worker servers correctly supports https with client certs. We can see what impact the service bus has on SSL session ids, etc.

Subject: Improvements for the ontology
From: henry.story@bblfish.net
Date: Sun, 13 Mar 2011 09:25:20 +0100
CC: foaf-protocols@lists.foaf-project.org; home_pw@msn.com
To: public-xg-webid@w3.org

On 13 Mar 2011, at 08:39, Peter Williams wrote on the foaf-protocols mailing list:

I spent the day building a new foaf+ssl type demonstrator. Unlike last year, it now uses very simple .NET 4 code, and uses the Azure Access Control service to generate WRAP tokens authorizing an http client to talk to a trivial REST webservice - built from .NET's channel and service-hosting classes. It's actually just a 50-line modification of Microsoft's teaching code. Hosting it in the cloud is an exercise for tomorrow (once I swap hosting containers); whether client certs work in that load-balanced, firewalled cloud is quite another matter!
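For readers unfamiliar with WRAP: an OAuth WRAP token endpoint (such as the Azure Access Control Service mentioned above) returns a form-encoded body whose `wrap_access_token` field carries the token the client then attaches to its REST calls. A minimal, hedged sketch of the client-side parsing - field names follow the WRAP draft, and the rest of the request flow is omitted:

```python
from urllib.parse import parse_qs

def extract_wrap_token(response_body):
    # Illustrative: pull the access token out of a WRAP token-endpoint
    # response. parse_qs handles the percent-decoding of the token value.
    return parse_qs(response_body)["wrap_access_token"][0]
```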

Received on Tuesday, 15 March 2011 01:25:15 UTC

This archive was generated by hypermail 2.4.0 : Friday, 17 January 2020 19:39:42 UTC