
RE: Improvements for the ontology

From: peter williams <home_pw@msn.com>
Date: Sun, 13 Mar 2011 12:21:41 -0700
Message-ID: <SNT143-ds105DE300133FAE738F7F5092CD0@phx.gbl>
To: "'Henry Story'" <henry.story@bblfish.net>, "'WebID XG'" <public-xg-webid@w3.org>
CC: <foaf-protocols@lists.foaf-project.org>
What I have learned about semantic web design culture - and it has been a
pleasant experience - is that one defines an intelligent model (using a
generic scheme that may be a tad too intelligent for its own good, though).
Even though the formal model doesn't state the queries, its structural and
functional relations imply them.


At the same time, 95% of web implementors don't care about that - any more
than I care about the beautifully structured REST model for the Microsoft
Azure ACS management service, all properly formulated with XSD schemas. Just
give me the damn query to run, for 80% of the use cases. Give me one more
variant query for the second use case, the one that accounts for the next 10%
of all adoption today. Yes, I also want to know that 10 more queries CAN be
stated (to address the future that is cloudy to me). Yes, I want my cake and
I want to eat it too.


Of course, we now know there are several use cases. And, I've learned,
different queries come about, tuned to those use cases. For example, in my
implementation concept, I was building a split validation process: the
resource server offloading certain "assurance responsibilities" to a highly
trusted data service (a trusted sparql server). As you say, Henry, there is a
trust assumption there, not present in other use cases that don't bother
with the split.


But of course the ontology logic neither qualifies nor precludes any such
implementation model, being the sum of several use cases and several
"anticipated" deployment variants - like my variant involving "split"
validation duties. Nor, therefore, does the spec or the logic go on to secure
that particular split; that is out of scope. Thus, the spec makes no
statement about how the two halves of the split would express trust in each
other, or authenticate. It's not the spec's job to specify a working system,
but only to state the "working models" from the logical and interworking
outlines. This is like the X.509 spec: 20+ years later there are now
thousands of applications of the common infrastructure, with camps warring
over non-format issues having moved on to "subtleties" like hierarchy, graph,
self-signed, and bridging models, and even validation-centric models that
diminish the centrality of the original issuing act by placing current
semantics on the validator in a split validation concept. The cert itself
and the authn handshakes in X.509 have really not changed in 25 years, and a
trillion emails about X.509 later, the debate is as vibrant as it ever was.
For example, a signed JSON cert is just a cert, with the format swapped and
new use cases enabled by the ease of parsing.


So, how can the report and the spec express these broad notions? They need to
characterize the bigger picture, while ensuring that 2 or 3 use cases become
real (with 20 more in the pipeline over the next 5-10 years). They have to
talk at web scale, while delivering convenience and utility.


I suspect it has to tell an old-web story, a semantic web story, a sparql
story, a federated social network story, and a story that is browser-centric
and then one that is not. It cannot get too religious about any one of them,
since those cases simply address the web that is. The right spec captures the
web as it is, not as it's supposed to be.


My gut tells me to look at the wars in the PKI space (where a billion
dollars in R&D was spent, and lots of unused research output is to be found
just lying about, discarded). For my part, being not too inventive, I
simply go pick up some of the trash and reuse it (I'm good at this, knowing
where all the trash piles are). This is what I did with the split validation
model, which is a minor variant of the Validation Authority concept in PKI.
That concept said that the fourth party (VA), speaking _today_ for the
legitimacy of a cert/assertion by re-signing it (in some sense), is much more
important than the third-party issuer (CA) who minted it two years ago (under
conditions that have probably changed in the meantime). But a political bomb
lies within that concept, as perhaps the VA role/operator is not the same
authority as the CA role/operator. (CAs in reality actively conspired to
derail the VA model, for probably obvious reasons, in the usual world of
competitive cut and thrust among billion-dollar corporations.)


These are some of the notions I want to be able to express and read in the
spec - not that I know how. Though it MIGHT include that VA/CA split in one of
the two use cases that it highlights, what is more important to me is that it
somehow conveys that lots of models will emerge - AND THAT IS LEGITIMATE. We
are anticipating a broad church, and here are 3 starter religions. These
"tuned models" (which I used to know by the term "standard profiles") may
well come down to tunings of the query. This was my query concept, as reduced
to practice by Kingsley: it sought to offload 100% to the uriburner server by
design _intent_, wanting that trusted party to perform a pure existence test
and specifically not return keys, which embodied certain assurance tradeoffs
I wanted to play with.



From: Henry Story [mailto:henry.story@bblfish.net] 
Sent: Sunday, March 13, 2011 12:25 AM
To: WebID XG
Cc: foaf-protocols@lists.foaf-project.org; Peter Williams
Subject: Improvements for the ontology



On 13 Mar 2011, at 08:39, Peter Williams wrote on the foaf-protocols mailing list:

I spent the day building a new foaf+ssl type demonstrator. Unlike last year,
it now uses very simple .net4 code, and uses the Azure Access Control
service to generate WRAP tokens authorizing an http client to talk to a
trivial REST webservice - built from .NET's channel and service-hosting
classes. It's actually just a 50-line modification of Microsoft's teaching
code. Hosting it in the cloud is an exercise for tomorrow (once I swap
hosting containers); whether client certs work in that load-balanced,
firewalled cloud is quite another matter!


Very good.


In two command windows, for now, an https client and an https server exist,
with self-signed cert support both ways. Opera can connect too, as a
client-authn client. The service host exposes an extension point, into which
I inserted the client cert validation class (having checked the cert for a
webid, for being self-signed, and for being trusted locally by the Windows
cert stores). And now, rather than use any RDF libraries natively, it simply
calls uriburner as a service - asking that service to test for the existence
of the named pubkeys in the webid's document. It COULD authenticate to
uriburner (assuming uriburner adopted the WRAP scheme for www-authorization,
too!)
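The delegation Peter describes - the resource server never parsing RDF itself, but asking a trusted SPARQL service whether the WebID document publishes the exact key seen in the client certificate - can be sketched roughly as below. The endpoint path, function names, and key values are illustrative assumptions, not part of his implementation:

```python
# Sketch of the "existence test" offload: build a SPARQL ASK against the
# WebID document and hand it to a trusted endpoint over the SPARQL protocol.
# The endpoint URL and key values here are hypothetical.
from urllib.parse import urlencode

def existence_query(webid: str, modulus_hex: str, exponent: int) -> str:
    """Build a SPARQL ASK that succeeds only if the named key is published."""
    return f"""
PREFIX cert: <http://www.w3.org/ns/auth/cert#>
PREFIX rsa:  <http://www.w3.org/ns/auth/rsa#>
ASK FROM <{webid}>
WHERE {{
  [] cert:identity <{webid}> ;
     rsa:modulus [ cert:hex "{modulus_hex}" ] ;
     rsa:public_exponent [ cert:decimal "{exponent}" ] .
}}"""

def request_url(endpoint: str, query: str) -> str:
    """GET URL for a SPARQL protocol endpoint, asking for JSON results."""
    return endpoint + "?" + urlencode(
        {"query": query, "format": "application/sparql-results+json"})

url = request_url("https://uriburner.com/sparql",   # hypothetical endpoint
                  existence_query("http://foaf.me/serverpeter34#me",
                                  "a1b2c3d4e5f6", 65537))
```

The point of the design is visible in the query shape: the trusted party returns only a boolean, never the key material itself.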


Trusting uriburner is a good way to get going.


"# DEFINE get:soft \"replace\"\r\nPREFIX cert:
<http://www.w3.org/ns/auth/cert# <http://www.w3.org/ns/auth/cert> >
\r\nPREFIX rsa: <http://www.w3.org/ns/auth/rsa#
<http://www.w3.org/ns/auth/rsa> > \r\nselect  ?webid \r\nFROM
<http://foaf.me/serverpeter34#me>\r\nWHERE {\r\n[] cert:identity ?webid
;\r\nrsa:modulus ?m ;\r\nrsa:public_exponent ?e .\r\n?m cert:hex
wncpirggpAomOcD2duZn0=\"^^xsd:string .\r\n?e cert:decimal


Our queries are now simpler than this. They could be as simple as:


  PREFIX cert: <http://www.w3.org/ns/auth/cert#>
  PREFIX rsa: <http://www.w3.org/ns/auth/rsa#>

  SELECT ?mod ?exp
  WHERE {
    [] cert:identity ?webid ;
       rsa:modulus ?mod ;
       rsa:public_exponent ?exp .
  }



Modulus and exponent are numeric literals, which means they can be encoded
in any number of ways - and so of course in base64 too.
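To illustrate the point, here is one integer rendered in the three candidate lexical forms (cert:hex, cert:int/decimal, and the proposed base64). The modulus is a toy value, not a real key:

```python
# One RSA modulus, three encodings. All three are lexical forms of the
# same integer; the value below is a toy number, not a real key.
import base64

modulus = 0xA1B2C3D4E5F6

as_hex = format(modulus, "x")                # "a1b2c3d4e5f6"  (cert:hex)
as_dec = str(modulus)                        # decimal string  (cert:int)
as_b64 = base64.b64encode(                   # "obLD1OX2"      (base64)
    modulus.to_bytes((modulus.bit_length() + 7) // 8, "big")).decode()

# All three decode back to the same integer.
assert int(as_hex, 16) == modulus
assert int(as_dec) == modulus
assert int.from_bytes(base64.b64decode(as_b64), "big") == modulus
```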


 I can only urge (and I know I'm going to be ignored) that we change the
ontology to allow the mod and exp to be base64 encoded, optionally. The
query above is incorrect in that the values supposedly in formats cert:hex
and cert:dec are actually in cert:base64 (not that this exists). It's just
that 2 billion PCs with 5 years of legacy already support RSA pubkeys in that
encoding (I got the values from some xml-sig XML code, being produced from
some weird organization called W3C); and it's nuts to make folks jump through
hoops to use anything else. The base64 of mod/exp in the xmldsig key format
is all properly specified and tested, with a hundred vendors already on
board, having properly sorted the translation of the ASN.1 signed INTEGER to
the base64 encoding.


I am for adding a cert:base64 type in addition to cert:hex and cert:int.
The cert:hex is useful as it is what browsers and openssl tools show, which
makes it easy for users to compare the HTML with what their browser shows
them. Base64 would be for large services, as it would allow them to reduce
the size of a published key.
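Under that proposal, the same key could be published either way. A sketch in Turtle, using a hypothetical cert:base64 property (not in the ontology as of this thread) and toy key material:

```turtle
@prefix cert: <http://www.w3.org/ns/auth/cert#> .
@prefix rsa:  <http://www.w3.org/ns/auth/rsa#> .

# As published today, with cert:hex:
[] cert:identity <http://foaf.me/serverpeter34#me> ;
   rsa:modulus [ cert:hex "a1b2c3d4e5f6" ] ;
   rsa:public_exponent [ cert:decimal "65537" ] .

# Hypothetical compact form for large deployments:
[] cert:identity <http://foaf.me/serverpeter34#me> ;
   rsa:modulus [ cert:base64 "obLD1OX2" ] ;
   rsa:public_exponent [ cert:decimal "65537" ] .
```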



If you want mass adoption, quickly, keep things to a 100-line delta from the
stuff folks already have.


There are two other improvements I would like to suggest for the ontology.


1. A relation from the WebID to the key. A public key is very close to
   a literal: it has 2 OWL2 keys which completely define it. So, since it is
   a general policy in RDF to have relations to literals and not from them,
   this will make other things possible.


2. To move the rsa ontology into the cert ontology. When I started writing
   them I just did not know how complex these things might be, so I separated
   them. But it turns out that rsa is very simple, and dsa would be too.



Queries would then look like this:


  PREFIX : <http://www.w3.org/ns/auth/cert#>

  SELECT ?mod ?exp
  WHERE {
    ?webid :pubkey [ :rsaModulus ?mod ;
                     :rsaExponent ?exp ] .
  }




A simplification such as (1) has been on the books for a long time. I was
just waiting for us to have a more formal setting, such as this one, to be
able to go through the process.





Social Web Architect

Received on Sunday, 13 March 2011 19:22:19 UTC

This archive was generated by hypermail 2.4.0 : Friday, 17 January 2020 19:39:42 UTC