- From: Peter Williams <home_pw@msn.com>
- Date: Tue, 28 Jun 2011 14:39:01 -0700
- To: <kidehen@openlinksw.com>, "public-xg-webid@w3.org" <public-xg-webid@w3.org>
- Message-ID: <snt143-w64775A560E090A04F7E96892560@phx.gbl>
So I half believe in WebID as an IDP service - in which the IDP performs the steps of the spec. It then asserts, having reduced a foaf card to a set of claims, in some attribute naming model. This model is a variation of the GRID work done in Internet2 for 5+ years (convert a client cert into an assertion, to overcome interfering https proxies). I don't believe in the proxy certs model though - as technically fascinating as it is/was - because it totally failed in the Grid world, where it was properly funded and had lots of political support. I do believe in the cascaded IDP service model, in which one IDP asserts to a bridge that recasts claims from one naming regime to another, to better suit the target resource server.
Do use one of the standard assertion formats. Don't make a custom profile of it. A good test is that if you use openid or WS-Fed, it works with Microsoft ACS as the assertion-consuming party. If you choose SAML2 (now a commodity in Windows!), ensure it works with ADFS as the assertion-consuming engine. These products (ACS and ADFS) are "final stage" products, way post-research phase, entering the market at the commoditization point - defined as the one that maximizes interoperability. If you can interwork with them, you stand a good chance of interworking with the vast majority of other vendors' equivalent implementations. If you make your own assertion blob format, be prepared to duke it out with all the other last-mile integration kits for web frameworks (all doing the same thing... in general).
Date: Tue, 28 Jun 2011 19:25:34 +0100
From: kidehen@openlinksw.com
To: public-xg-webid@w3.org
Subject: Re: [foaf-protocols] WebID test suite
On 6/28/11 7:07 PM, Peter Williams wrote:
The way I see it is we keep arguing this position - and it's simply that ldap is the flash point (rather than anything important). Having argued the need to deal with the web as it is, not the web as it should be, we find the group goes back - by one means or another - to the assumptions of the linked data movement (at the next meeting of the linked data types).
My problem is I no longer believe this group will have any impact this year (or next) - because I see research thinking. It's fine as a research project; but there are 10 of those to follow.
I don't know about the group, but we are planning impact right now, let alone this year.
I have a reply to Henry in my outbox. I just need to verify the live service during my working vacation. Basically, there is a live service; I just need to give it my blessing post final tests.
Henry, stay tuned. Please note, we made a GRDDL implementation years ago as part of our middleware offering; as I continue to state, these are just approaches to middleware. Not being GRDDL doesn't negate anything in the middleware game with protocols, data access, and data representation.
Stay tuned for when I get back from my walk and dinner :-)
Kingsley
Date: Tue, 28 Jun 2011 10:18:44 +0100
From: kidehen@openlinksw.com
To: public-xg-webid@w3.org
Subject: Re: [foaf-protocols] WebID test suite
On 6/28/11 6:53 AM, Henry Story wrote:
On 28 Jun 2011, at 03:10, Peter Williams wrote:
I think you keep ignoring the fact that from time
eternal browsers have had ldap clients built in,
using LDAP URLs.
I don't ignore it. I even mentioned the ldap url as
being a possibility for a WebId.
Not just a possibility - we already support ldap: scheme URIs (as SAN-placed WebIDs) in our implementation of WebID. As I keep on saying: URIs are sacrosanct. An IdP is the one to decide which schemes it can handle as part of its implementation of the WebID protocol.
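(As an illustration of that scheme-agnosticism, here is a minimal sketch in Python; the resolver helpers and their behaviour are hypothetical stand-ins, not anything from an actual implementation.)

    from urllib.parse import urlparse

    def fetch_profile_over_http(uri):
        # Placeholder: a real IdP would dereference the URI with content
        # negotiation and parse the returned WebID profile.
        return {"source": uri}

    def fetch_profile_over_ldap(uri):
        # Placeholder: a real IdP would query the directory named by the URI.
        return {"source": uri}

    # An IdP registers whichever schemes its implementation can handle.
    RESOLVERS = {
        "http": fetch_profile_over_http,
        "https": fetch_profile_over_http,
        "ldap": fetch_profile_over_ldap,
    }

    def resolve_webid(san_uri):
        """The URI stays opaque and sacrosanct; only its scheme selects a resolver."""
        scheme = urlparse(san_uri).scheme
        if scheme not in RESOLVERS:
            raise ValueError("this IdP does not handle %r URIs" % scheme)
        return RESOLVERS[scheme](san_uri)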
The issue is not ldap. It's the fact that directories - whether as foaf cards, vcards, micro-formats, or any other projection of the directory record - struggle, because the security model was not a good social fit. I'm convinced websso has got to the heart of that fit problem. And thus, as you assert, ldap becomes an "attribute source", no different to sql or a foaf card.
Yes, people don't want to open their ldap directories
to anyone without protection. But they can only open
them globally if they have something like WebID, and if
they have a data format that allows for global
linkability.
Yes, and that's achievable and implemented by us already.
Ldap started off in the 1980s, before the web, and was extended without ever fixing these problems, which of course are difficult to fix. The Web was designed as a hyperdocument platform from the beginning.
Yes, so you can transform data to many representations once it's clear that the base schema is really conceptual rather than syntactic. Basically, logic delivers the conceptual schema.
Now, what is interesting is that we keep expecting foaf cards (which are just serialized directory records, using a non-LDIF format) to find a fit, somehow addressing what failed in the ldap world.
Foaf is based on RDF, which is designed for Linked Data
(hyperdata) scenarios.
Of course ldap can participate too, but it would need to give a clear mapping into the semweb, i.e. to give semantics so that users from one ldap system can communicate clearly - and without prior agreement on vocabulary - with another ldap system. But as I don't think this is done yet, I think we can skip ldap as a priority for the moment.
The spec just has to be agnostic re. URI schemes. The support
of any scheme re. WebID is an implementation matter for an IdP
that supports the WebID protocol. That's really it. URIs are
sacrosanct. Inherently agnostic.
If you find some big ldap vendors who really want to
join, then the W3C may be happy to help them semwebise
the ldap system, and perhaps ldap urls will combine
nicely and often with http and https urls. But my guess
is that you will end up with huge resistance there in
the ldap world: there will just be too many new things
to explain to people. Unless it is shown to work clearly
in the most natural platform - the web - they won't take
it on.
We'll be taking our implementation to them :-)
And after all who cares whether it is ldap or http
that is the transport protocol? Certainly not the
business people who would finance this.
See my earlier comment.
Anyway what has this got to do with the WebID Test
suite again? Please try to keep the posts on topic.
Well you'll see that ldap: based WebIDs work with our
implementation :-)
Kingsley
Henry
This worries me.
From: henry.story@bblfish.net
Date: Sun, 26 Jun 2011 18:43:24 +0200
CC: demoss.matt@gmail.com; public-xg-webid@w3.org
To: home_pw@msn.com
Subject: Re: [foaf-protocols] WebID test suite
On 26 Jun 2011, at 17:23, Peter Williams
wrote:
The X.509 standard worked worldwide - albeit mostly amongst universities. It was probably bigger than the Shib world is, even today. This seems to have been before Henry's time (he likes to tell the story that ldap/dap was never web scale, not realizing perhaps that the first directories "on the web" were http -> ldap -> dap gateways...).
The point is the protocol was not made available directly on the web, in such a way that it could be interoperable directly as ldap. For example TCP/IP works at web scale, so does SMTP which is broken, but ldap is used a bit like SQL databases as a back end. There are logical reasons in the case of LDAP and of SQL for this. But I think you keep ignoring them: the URL.
Today, of course, there are a few tens of millions of AD installations that we can expect to start connecting up quite shortly, now that SAML->AD gateways are going mainstream. What folks refused to do (federate and publish directories), folks seem more willing to do when SAML claims project said directories to a limited network of consuming sites.
Perhaps SAML has more of a chance; it uses a few web technologies: XML and namespaces, for one. They even started working on a RESTful variant, I heard. I am not a specialist in it.
X.500 also had both simple and strong authentication, and the usual user, consumer (SP) and IDP model. Both could use signed operations between the "IDP" agent (the master agent for the record, in a multi-mastering world) and the consuming agent - some service, today just like a SAML2 SP server, that wishes to obtain a signed confirmation that the user knows a password, compared remotely by the IDP in return for a signed confirmation response. The user presented the password + digested-password to the consumer (!) seeking access to some port, and duly the port guard would issue a compare operation against the IDP agent. Alternatively, the user presented a signed token to the consumer, which verified it in part by "comparing" the cert against the cert in the master record. Again, the IDP would respond to a compare request with a signed token confirming the result of comparing the values. Today, in Windows it's trivial to issue a signed SAML "request" to a web service on an https port, that is then compared similarly. Blob formats have changed - but the model has not.
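(For readers unfamiliar with the compare model: the consumer never reads the stored password or cert attribute; it only asks the directory to confirm a presented value. A rough sketch using the Python ldap3 library, with a made-up host, DN and file name:)

    from ldap3 import Server, Connection

    # Hypothetical directory, guard account and entry, for illustration only.
    server = Server("ldap.example.com")
    conn = Connection(server, user="cn=portguard,dc=example,dc=com",
                      password="secret", auto_bind=True)

    # The port guard does not fetch the stored cert; it asks the IDP agent
    # to compare the presented value against the master record.
    with open("presented_cert.der", "rb") as f:
        presented = f.read()
    matches = conn.compare("cn=alice,dc=example,dc=com",
                           "userCertificate;binary", presented)
    print("directory confirms the cert" if matches else "no match")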
Yesterday, I had some fun. In a MSFT sample project, one has one's client code create a "self-signed SAML file", supported by a self-signed cert. One posts this to an Azure service, which verifies the signature and returns a mac-signed json blob - which one then posts in the www-auth header to a rest service. The claims within have identity, authn and authz claims. Being done on the OAUTH endpoint, it's a minor variant of the process to induce the service to redirect to a website, seeking user confirmation etc. (in the usual OAUTH backwards-flow SSO flow). There, one can do webid... validation as a condition of releasing the authz confirmation.
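(Schematically - and with entirely made-up endpoint URLs, header scheme and field names, since the MSFT sample isn't quoted here - the flow described looks something like this in Python:)

    import requests

    # 1. Post the self-signed SAML assertion to the token-issuing endpoint.
    with open("self_signed_assertion.xml") as f:
        assertion = f.read()
    resp = requests.post("https://example.accesscontrol.example.net/token",
                         data={"grant_type": "assertion",
                               "assertion": assertion})
    token = resp.json()["access_token"]  # the mac-signed blob

    # 2. Present the returned token to the REST service in an auth header
    #    (the exact header scheme varies by service).
    r = requests.get("https://rest.example.com/resource",
                     headers={"Authorization": "WRAP access_token=%s" % token})
    print(r.status_code)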
If we could get less abstract, researchy, and less webby - and just fit in with the rest of the web - we'd have a lot more adoption.
Well, there are all these other communities to join where people are happy to do that.
Nobody is saying we can't be interoperable, btw; I don't know why anyone would think so. But the interesting thing about WebID - as the name hints in a not too shy manner - is the Webbiness. Now that does not stop you from storing your data in an sql database, ldap directory, or nosql datastore. We are not concerned about those here. We abstract them so as to be compatible with anything going on behind.
Henry
> Date: Fri, 24 Jun 2011 16:45:46 -0400
> From: demoss.matt@gmail.com
> To: henry.story@bblfish.net
> CC: kidehen@openlinksw.com; public-xg-webid@w3.org
> Subject: Re: [foaf-protocols] WebID test suite
>
> > Its spec is conceptually little or no different to using a directory object from ldap, looking for the existence of a cert value in the directory attribute..
>
> > that is why I distinguish - and we should distinguish more clearly in the spec - between a claimed WebID and a verified one. A WebID presented in the SAN fields of an X509 certificate is a claimed WebID. The Relying Party/IDP then fetches the canonical document for each WebID.
>
> I find the contrast with a directory object to be particularly
> interesting. It's usually the case that the CA is trusted to sign a DN
> that corresponds to a directory object in a directory we trust to have
> the correct attributes, but I would be interested to know more about
> other systems where (as with WebID) the trust relationship is a bit
> different. Do any of the SAML profiles do something you would consider
> comparable?
>
> On Fri, Jun 24, 2011 at 4:31 PM, Henry Story <henry.story@bblfish.net> wrote:
> >
> > On 24 Jun 2011, at 22:00, Kingsley Idehen wrote:
> >
> > On 6/24/11 7:08 PM, Peter Williams wrote:
> >
> > The de facto owl:sameAs part is really interesting (and it's the semweb part of webid that most interests me, since it's about the potential logic of enforcement....)
> >
> > Are we saying that, should n URIs be present in a cert and one of them validate to the satisfaction of the verifying party, then this combination of events is the statement: verifier says owl:sameAs x, where x is each member of the set of SAN URIs in the cert, whether or not all x were verified?
> >
> > No.
> >
> > When an IdP is presented with a Cert, it is going to have its own heuristic for picking one WebID. Now, when there are several to choose from, I would expect that any choice results in a path to a Public Key -> WebID match. Basically, inference such as owl:sameAs would occur within the realm of the IdP that verifies a WebID. Such inference cannot be based on the existence of multiple URIs serving as WebIDs in SAN (or anywhere else).
> >
> > Yes, that is why I distinguish - and we should distinguish more clearly in the spec - between a claimed WebID and a verified one. A WebID presented in the SAN fields of an X509 certificate is a claimed WebID.
> > The Relying Party/IDP then fetches the canonical document for each WebID. These documents define the meaning of the WebID, of that URI, via a definitive description tying the URI to knowledge of the private key of the public key published in the certificate.
> > If the meaning of two or more URIs is tied to knowledge of the same public key, then the relying agent has proven of each of these URIs that its referent is the agent at the end of the https connection. Since that is one agent, the two URIs refer to the same thing.
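(To make the claimed-vs-verified distinction concrete: a minimal sketch of that check in Python with rdflib, assuming the 2011-era cert ontology terms cert:key, cert:modulus and cert:exponent - the exact vocabulary varied across spec drafts.)

    from rdflib import Graph, Namespace, URIRef

    CERT = Namespace("http://www.w3.org/ns/auth/cert#")

    def verify_webid(claimed_uri, cert_modulus, cert_exponent):
        """Dereference a claimed WebID and check whether its canonical
        document ties that URI to the public key from the certificate."""
        g = Graph()
        g.parse(claimed_uri)  # content negotiation picks the serialization
        webid = URIRef(claimed_uri)
        for key in g.objects(webid, CERT.key):
            mod = g.value(key, CERT.modulus)  # xsd:hexBinary literal
            exp = g.value(key, CERT.exponent)
            if mod is not None and exp is not None and \
               int(mod, 16) == cert_modulus and int(exp) == cert_exponent:
                return True
        return False

    def verified_webids(claimed_uris, modulus, exponent):
        # Every claimed SAN URI that passes is verified; since each verified
        # URI denotes the one agent on the TLS connection, they co-refer.
        return [u for u in claimed_uris if verify_webid(u, modulus, exponent)]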
> >
> > That's quite a claim to make. A more restricted claim could be that
> >
> > Yes, but I don't believe the spec infers that.
> >
> > verifier says webid says owl:sameAs x, where x is each member of the set of SAN URIs in the cert, whether or not all x were verified.
> >
> > No, I don't think that's the implication from the spec, or what one would expect to happen.
> >
> > Kingsley
> >
________________________________
> > From: henry.story@bblfish.net
> > Date: Fri, 24 Jun 2011 19:12:59 +0200
> > CC: public-xg-webid@w3.org; foaf-protocols@lists.foaf-project.org
> > To: home_pw@msn.com
> > Subject: Re: [foaf-protocols] WebID test suite
> >
> > On 24 Jun 2011, at 18:45, Peter Williams wrote:
> >
> > One thing the spec does not state is what is correct behaviour when a consumer is presented with a cert with multiple SAN URIs.
> >
> > Well it does say something, even if perhaps not in the best way. It says in 3.1.4:
> > "The Verification Agent must attempt to verify the public key information associated with at least one of the claimed WebID URIs. The Verification Agent may attempt to verify more than one claimed WebID URI."
> > Then in 3.1.7:
> > "If the public key in the Identification Certificate matches one in the set given by the profile document graph given above, then the Verification Agent knows that the Identification Agent is indeed identified by the WebID URI."
> > I think the language that was going to be used for this was the language of "Claimed WebIDs" - the SANs in the certificate, which each get verified. The verified WebIDs are the ones the server can use to identify the user. They are de facto owl:sameAs each other.
> >
> > If the test suite is run at site A (that cannot connect to a particular part of the internet, because of proxy rules), presumably the test suite would provide a different result to another site which can perform an act of de-referencing.
> >
> > That is ok; the server would state declaratively which WebIDs were claimed and which were verified. It could state why it could not verify one of the WebIDs. Network problems are a fact of life - less likely than strikes in France, though those have not been happening that often recently, or congestion on the roads.
> >
> >
> > This is a general issue. The degenerate case occurs for 1 SAN URI, obviously - since site A may not be able to connect to its agent. Thus, the issue of one or multiple URIs is perhaps not the essential requirement at issue.
> >
> > A variation of the topic occurs when a given site (B, say) is using a caching proxy that returns a cached copy of a webid document (even though that document may have been removed from the web). This is the topic of trusted caches, upon which it seems that webid depends.
> >
> > That is what the meta testing agent will be able to tell. He will be able to put up WebID profiles, log in somewhere, then log in a few days later after having removed the profile, or changed it, and report on how the servers respond.
> >
> > We would look silly if the average site grants access to a resource when the identity document has been removed from the web,
> >
> > It all depends on what the cache control statements on the WebID Profile say. If they state they should last a year, then it is partly the fault of the WebID profile publisher. (Could web servers offer buttons to their users to update a cache?)
> > In any case it also depends on how serious the transaction is. In a serious transaction it might be worth doing a quick check right before the transaction, just in case.
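(Such a quick check could be as small as a conditional GET that bypasses intermediary caches - a sketch in Python with requests; the profile URL and etag are placeholders:)

    import requests

    def recheck_profile(profile_url, etag=None):
        """Revalidate a cached WebID profile right before a serious
        transaction; treat 404/410 as the identity having been withdrawn."""
        headers = {"Cache-Control": "no-cache"}
        if etag:
            headers["If-None-Match"] = etag
        r = requests.get(profile_url, headers=headers, timeout=5)
        if r.status_code == 304:
            return "cached-copy-still-valid"
        if r.status_code in (404, 410):
            return "profile-gone-revoke-cached-identity"
        r.raise_for_status()
        return "fresh-copy-fetched"

    # e.g. recheck_profile("https://example.org/card", etag='"abc123"')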
> >
> > yet caches continue to make consumers believe that the identity is valid. At the same time, given the comments from the US identity conference (that pinging the internet during a de-referencing act is probably unsustainable), caches seem to be required (so consuming sites don't generate observable network activity).
> >
> > WebID works with caches. I don't think we could do without them. Even X509 works with caches as is, since really an X509 signed cert is just a cache of the one offered by the CA.
> >
> > This all seems to be pointing at the issue that we have a trusted cache issue at the heart of the webid proposal, and of course we all know that the general web is supposed to be a (semi-trusted at best) cache.
> >
> > Caches need to be taken into account. But I don't see this as a major problem.
> >
> >> From: henry.story@bblfish.net
> >> Date: Fri, 24 Jun 2011 13:37:26 +0200
> >> CC: foaf-protocols@lists.foaf-project.org
> >> To: public-xg-webid@w3.org
> >> Subject: WebID test suite
> >>
> >> Hi,
> >>
> >> In the spirit of test-driven development, and in order to increase the rate at which we can evolve WebID, we need to develop test suites and reports based on those test suites.
> >>
> >> I put up a wiki page describing where we are now, and where we want to go.
> >>
> >> http://www.w3.org/2005/Incubator/webid/wiki/Test_Suite#
> >>
> >> Please don't hesitate to improve it, and place your own library test endpoints up there - even if they are only human readable.
> >>
> >> The next thing is to look at the EARL ontology I wrote and see if your library can also generate a test report that follows the lead of the one I put up on bblfish.net. I expect a lot of detailed criticism, because I did just hack this together. As others implement their test reports, and as bergi builds his meta tests, we will quickly notice our disagreements, and so be able to discuss them, and put the results into the spec.
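(For the curious, emitting one such EARL assertion is a few lines with rdflib; the subject and test IRIs below are placeholders, not the ones from the actual report:)

    from rdflib import Graph, Namespace, URIRef, BNode
    from rdflib.namespace import RDF

    EARL = Namespace("http://www.w3.org/ns/earl#")

    def earl_assertion(subject_iri, test_iri, passed):
        """Build a single earl:Assertion linking a tested subject, a test
        case, and its outcome, serialized as Turtle."""
        g = Graph()
        g.bind("earl", EARL)
        a = BNode()
        g.add((a, RDF.type, EARL.Assertion))
        g.add((a, EARL.subject, URIRef(subject_iri)))
        g.add((a, EARL.test, URIRef(test_iri)))
        result = BNode()
        g.add((result, RDF.type, EARL.TestResult))
        g.add((result, EARL.outcome, EARL.passed if passed else EARL.failed))
        g.add((a, EARL.result, result))
        return g.serialize(format="turtle")

    print(earl_assertion("https://example.org/webid-impl",
                         "https://example.org/tests#certOk", True))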
> >>
> >> Henry
> >>
> >> Social Web Architect
> >> http://bblfish.net/
> >>
> >>
> >
> > _______________________________________________
> > foaf-protocols mailing list
> > foaf-protocols@lists.foaf-project.org
> > http://lists.foaf-project.org/mailman/listinfo/foaf-protocols
> >
> > Social Web Architect
> > http://bblfish.net/
> >
> >
> > --
> >
> > Regards,
> >
> > Kingsley Idehen
> > President & CEO
> > OpenLink Software
> > Web: http://www.openlinksw.com
> > Weblog: http://www.openlinksw.com/blog/~kidehen
> > Twitter/Identi.ca: kidehen
> >
> >
> > Social Web Architect
> > http://bblfish.net/
> >
>
>
Social Web Architect
http://bblfish.net/
--
Regards,
Kingsley Idehen
President & CEO
OpenLink Software
Web: http://www.openlinksw.com
Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca: kidehen
Received on Tuesday, 28 June 2011 21:39:31 UTC