
RE: qualified "reference" FYI

From: Peter Williams <home_pw@msn.com>
Date: Wed, 9 Feb 2011 20:26:37 -0800
Message-ID: <SNT143-w397497DB1E42D07B76D58492EC0@phx.gbl>
To: "public-xg-webid@w3.org" <public-xg-webid@w3.org>

I've been playing with an "enhanced" browser - one that delivers a signed SAML token over a client-authn-capable SSL handshake (citing the same cert as the one cited by the signed SAML token).
 
Much as the QCR defined an http URI to denote a URL to a .crt file, folks have defined
 
public const string Uri = "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/uri";
 
as a way of naming/typing a claim value. Obviously its value IS of value type URI, and (I guess) it can be any legal URI, including an IRI.
 
Would it be useful to define a claim type of webid - a specialization of http://schemas.xmlsoap.org/ws/2005/05/identity/claims/uri whose value can be tested to be such using the WebID protocol?
 
I want to be able to put a claim so denoted in a SAML token, one that induces the token handler to do the usual WebID callback to the FOAF agent, etc. If it doesn't claim to be that "quality" of URI, I won't.
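A minimal sketch of what such a token handler might do, assuming a hypothetical webid claim-type URI (the "/webid" suffix below is made up for illustration; only the generic .../claims/uri identifier is a real schemas.xmlsoap.org name):

```python
# Sketch: a token handler that treats a hypothetical "webid" claim type
# as a specialization of the generic uri claim type. The WEBID_CLAIM
# identifier is an assumption, not a registered name.

URI_CLAIM = "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/uri"
WEBID_CLAIM = URI_CLAIM + "/webid"  # hypothetical specialization

def handle_claim(claim_type: str, value: str) -> str:
    """Decide how a token handler would treat an incoming claim."""
    if claim_type == WEBID_CLAIM:
        # The handler would now dereference `value`, fetch the FOAF
        # profile, and check the cert's key per the WebID protocol.
        return "verify-via-webid"
    if claim_type == URI_CLAIM:
        # A plain URI claim: no promise it verifies as a WebID.
        return "treat-as-opaque-uri"
    return "ignore"
```

The point of the specialized type is exactly this dispatch: a handler only pays the cost of the WebID callback when the claim advertises that "quality" of URI.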
 
From: home_pw@msn.com
CC: public-xg-webid@w3.org
Date: Tue, 8 Feb 2011 11:44:56 -0800
Subject: RE: qualified "reference" FYI
To: public-xg-webid@w3.org




Take a look too at http://tools.ietf.org/html/draft-larmouth-oid-iri-04.
J. Larmouth is, I assume, John Larmouth - a name consistent with the material being discussed. (He is no amateur.)
 
Let's remember that we are using certs, in which we already have types and values of the form t=v. We use them ourselves, in the SAN, at several levels.
 
A SAN is just shorthand for t=v, where t is an instance of a generic X.509 extension that declares a type, and v is the value of the type declared by that extension, which has a registered name. In general the names are OIDs, as discussed in the IETF draft. Though the ASN.1 notation for relative names or extensions would typically be summarized as something like cn=peter, or SAN=[URI]http://..., these are just shorthands for value notation of the form <oid>=peter. For example (given http://www.alvestrand.no/objectid/2.5.4.3.html), 2.5.4.3="peter" in some ASN.1 value notation I just made up.
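To make the t=v point concrete: the "cn" in cn=peter is just a friendly name for the OID 2.5.4.3, and what a cert actually carries is that OID in DER form. A small sketch of the standard DER encoding (first two arcs packed into one byte, the rest base-128):

```python
# Sketch of DER encoding for an OBJECT IDENTIFIER, per X.690:
# byte 1 is 40*arc1 + arc2; each later arc is base-128 with the
# high bit set on all bytes except the last.

def encode_oid(dotted: str) -> bytes:
    arcs = [int(a) for a in dotted.split(".")]
    out = [40 * arcs[0] + arcs[1]]
    for arc in arcs[2:]:
        chunk = [arc & 0x7F]        # last byte: high bit clear
        arc >>= 7
        while arc:
            chunk.append((arc & 0x7F) | 0x80)
            arc >>= 7
        out.extend(reversed(chunk))
    return bytes(out)

# 2.5.4.3 is id-at-commonName, the type behind "cn=":
print(encode_oid("2.5.4.3").hex())  # → 550403
```

So cn=peter and 2.5.4.3="peter" are the same assertion; the notation differs, the registered type does not.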
 
OK. Enough old ASN.1 theory that is years out of date, but bedevils us because (unless we define our own SSL cert type, as the PGP folks did) our SSL is tied to legacy certs.
 
We should also remember that one SAN field type is an OID. If the OID is encoded per the I-D as a URI, then it goes in the URI field instead.
 
Lots of SAML types (in XML) were given identifiers, using URNs in general. This was a webby variant of what folks have done for 30 years in ISO land, where types would be identified by an OID.
 
It was you who suggested that the identifier for metadata such as types should be a real http URI, rather than something like a URN or an OID (in a URI wrapper) - so that it's easy to get to a machine-readable version of the spec (in RDF, OWL, etc.). If we want the semantic web to be real, it has to be an engineering tool, which means accepting some impurities (hey, we may invent the transistor by so doing!)
 
Since folks are beginning to write, we need some kind of basis that is solid (and uses future-looking representations). That's why I'm looking at secure refs per se, and at standards and initiatives we can reference. What is interesting to note from the I-D I found - and this is at first blush weird for a spec that seems to be about a rather inane numbering-scheme topic - is the security section (see below). From registered OIDs, in URI form, we rapidly get to both DNSSEC and DNS.


"3.  Security considerations
3.1.  General
   An 'oid' IRI does not in itself pose a security threat.  However,
   care must be taken to properly interpret the data referenced by an
   'oid' IRI, to prevent that data from causing unintended access, and
   to avoid including data that should not be revealed in plain text.
   These security considerations are addressed by [ORS] through
   availability of DNSSEC in the resolution process, and optional return
   of encrypted data, with an established trust anchor."

 
From: home_pw@msn.com
To: henry.story@bblfish.net
CC: public-xg-webid@w3.org
Date: Tue, 8 Feb 2011 09:00:09 -0800
Subject: RE: qualified "reference" FYI




There is a lot not being said (because it forces issues into the public space that some folks think are best "not" public - since they are generally intractable).
 
We are used to distinguishing between relying on a security service (such as the cert doing the asymmetric key-management security service) and merely using one. For example, one might use an expired cert to decrypt a message sent to you headed "bomb threat at 1pm". One cannot rely - a formal semantic - because it's expired. Typically, such reliance is actually prohibited. Depending on the governance regime, use may also be prohibited.
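The use-vs-reliance distinction can be sketched as two separate policy checks; this is an illustrative toy, not any regime's actual rules (the function names and the permissive "use" default are assumptions):

```python
# Toy sketch: "use" and "reliance" as distinct policy decisions on the
# same cert. Real governance regimes encode this in law/policy, not code.
from datetime import date

def may_use(cert_expiry: date, today: date) -> bool:
    # "Use" (e.g. decrypting a message already sent to you) may be
    # permitted even after expiry - here, permitted unconditionally.
    return True

def may_rely(cert_expiry: date, today: date) -> bool:
    # "Reliance" is the formal semantic: typically prohibited once
    # the cert has expired.
    return today <= cert_expiry
```

An expired cert then yields may_use True but may_rely False - exactly the "bomb threat at 1pm" situation above.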
 
The qualified cert concept is about regularizing this kind of issue, and the space of use semantics - of which use and reliance are only two. They define another: qualified [use/reliance]. The point is that it's distinguished, with the qualifier "qualified X".
 
If one uses, cites, references, or dereferences a QCR, it's a regulated act, subject to legal sanction if used outside the "German model", for example. The Australians seem to be the most zealous about this concept of a programme and its associated framework, having tied it all up in common-law presumptions, much as the Germans tied it up to German law and legal tradition. What is good, and relevant to governance, is that there is at least an ability for the "secure reference" to tie into various legal regimes of very different designs and cultures.
 
We cannot object to this. We have always said that a client cert CAN be third-party issued. So long as folks are not denied the self-signed cert by default (as being articulated through the EV forum and its IETF-related PKIX vendors), I don't think there is any real change to the status quo. Qualified certs (and their references) just add to the fun.
 
> Subject: Re: qualified "reference" FYI
> From: henry.story@bblfish.net
> Date: Tue, 8 Feb 2011 17:47:47 +0100
> CC: public-xg-webid@w3.org
> To: home_pw@msn.com
> 
> 
> On 8 Feb 2011, at 17:34, Peter Williams wrote:
> 
> > http://www.nehta.gov.au/component/docman/doc_download/708-qualified-certificate-reference-v11-draft-2009-05-07
> 
> from the spec:
> 
> [[
> A QCR allows clients to obtain an X.509 certificate, which in
> turn will be used to secure messages, especially for Web services request and
> response.
> 
> [snip]
> 
> This document only covers identifying parties in NEHTA specifications that use
> the XML format to represent data. In particular, this includes data in NEHTA
> Web services specifications.
> ]]
> 
> The interesting thing is that they think of referring to PEM files, the weird thing is that they have a bunch of URLs for different protocol types it seems
> 
> http://ns.nehta.gov.au/Qcr/Ref/Http/1.0
> 
> is for certificate types which one can get using HTTP
> 
> http://www.healthcare.com.au/pki/clinic234.cer
> 
> and
> 
> http://ns.nehta.gov.au/Qcr/Ref/Ldap/1.0
> 
> is for certificates which one can get using ldap
> 
> ldap://ldap.healthcare.com.au:6666/cn=RP%20gp2%20org%20:2330726155,ou=RP%20gp2%20org,o=RP%20gp2%20org,l=TUGGERANONG,st=ACT,c=AU
> 
> This looks like over the top modelling to me, something that
> often happens - and in the semweb space too - to beginners.
> 
> Henry
> 
> 
> Social Web Architect
> http://bblfish.net/
> 

Received on Thursday, 10 February 2011 04:27:12 UTC

This archive was generated by hypermail 2.3.1 : Tuesday, 6 January 2015 21:06:22 UTC