W3C home > Mailing lists > Public > public-xg-webid@w3.org > January 2012

Re: WebID TestSuite Comments

From: Henry Story <henry.story@bblfish.net>
Date: Thu, 5 Jan 2012 12:29:50 +0100
Cc: public-xg-webid@w3.org
Message-Id: <759A617F-CFE0-4540-A69F-EE94C6C86892@bblfish.net>
To: Jürgen Jakobitsch <j.jakobitsch@semantic-web.at>

On 5 Jan 2012, at 10:59, Jürgen Jakobitsch wrote:

> hi,
> the webIDTestSuite [1] came up again (see mail with subject WebIDRealm supports RDFa) and i think
> it's well worth its own thread.
> following are some comments and suggestions.

Thanks, that is already very helpful.

> 1. the page is kind of out of date and should be updated (see rsa#modulus example)

yes. That was done in November before we changed the ontology.

  I fixed that on the wiki.

> 2. i personally think the output of the test as graph is a little complicated, first to produce
>   and second to parse and interpret.

yes. I am not sure whether it is EARL itself that is too complicated; it could also be that I just copied and
pasted the output of Clerezza into the wiki without making it nice and compact. The Turtle there looks a bit
like NTriples.

  I fixed that on the wiki.

  The examples are still not the best ones. We should work out what the 5-6 EARL tests that need to be implemented
are; then I can write them out as an example.
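To make the discussion concrete, this is roughly the kind of compact EARL assertion (serialised as Turtle) such a test page could emit per test. A small Python sketch; the subject WebID below is invented for illustration, and the test URI follows the RelyingParty page linked at the end of this mail:

```python
# Sketch: emit one compact EARL assertion in Turtle for a single test result.
# The subject WebID is a made-up example; the test URI points at the
# RelyingParty test definitions page referenced elsewhere in this thread.

TEMPLATE = """\
@prefix earl:  <http://www.w3.org/ns/earl#> .
@prefix dc:    <http://purl.org/dc/terms/> .
@prefix tests: <http://www.w3.org/2005/Incubator/webid/earl/RelyingParty#> .

[] a earl:Assertion ;
   earl:test tests:{test} ;
   earl:subject <{subject}> ;
   earl:result [ a earl:TestResult ;
                 earl:outcome earl:{outcome} ;
                 dc:description "{description}" ] .
"""

def earl_assertion(test, subject, outcome, description):
    """Fill the Turtle template for one test outcome."""
    return TEMPLATE.format(test=test, subject=subject,
                           outcome=outcome, description=description)

print(earl_assertion("certificateProvidedSAN",
                     "https://bob.example/profile#me",
                     "passed",
                     "certificate contains a subjectAltName URI"))
```

The dc:description on the earl:TestResult is where a human-readable cause can live, which matters for the failure cases discussed further down.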

> 3. as i understand the workflow of the test it is suboptimal (please correct me if i'm wrong) :
>   when interpreting the testresult, one has to infer from earl#outcome of a result that something
>   went wrong - what exactly went wrong is not known.

There is the dc:description of the earl:Result, but it was hidden in a forest of URLs.

> 4. i'm not sure if it is useful to return passed tests at all ("no message is a good message").
>   i would only be interested in tests that fail (in case of a failure, i want to know what exactly went wrong)

I partly agree. There are certain tests where one wants to know whether they passed or failed, say for each WebID claim, but one does not really want to know how they succeeded when they did. One may, though, be very interested in the cause in case of failure.
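That filtering could look something like this; a minimal sketch, with the result tuples (test name, outcome, cause) invented for illustration:

```python
# Sketch: keep an overall pass/fail answer per WebID claim, but report
# details only for the tests that failed. The result tuples are invented.

results = [
    ("certificateProvidedSAN", "passed", None),
    ("profileGet",             "failed", "could not fetch profile: 404"),
    ("profileWellFormed",      "untested", None),
]

def summarise(results):
    """Return overall pass/fail plus details for failures only."""
    failures = [(name, cause) for name, outcome, cause in results
                if outcome == "failed"]
    return {"passed": not failures, "failures": failures}

print(summarise(results))
```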

> 5. i'm not sure if the test really tests what needs to be tested. see [2] for example.
>   if a validator is about to be tested it's not the validator's fault if the webID is not available.
>   it is a prerequisite that the webID is available. currently the => endpoint's test <= has a "not passed"
>   in the result if the webID is not available.

The tests are not necessarily about assigning blame to the validator. They are about understanding where the problem is.
If someone can't log into your site, it would be very helpful for them to know why. Not being able to fetch a profile is important for a user to know; it may lead him, say, to restart his server...

> 6. many of the tests don't actually test the validator but other things. see [3] for example.
>   this test tests a certificate.

There are some tests that are not necessary. Bergi highlights the ones that are no longer needed with stripes in the diagram.
But those are still open to discussion.

> 7. so the real test for a validation endpoint looks kind of different to me :
>   "in the case of a valid certificate and a valid profile (where one of the webIDClaims in the
>   certificate points to said valid profile) a validation-endpoint must authenticate"
>   (not authorize to do something).
>   7.1. a valid certificate must be tested on its own. maybe this page [4] could help.

   yes, the pkitesting service could give more in-depth information about the certificate chain.
   What I think we do want to know is whether the server considers the certificate valid
   (dates ok, SANs available, RSA key).
   But perhaps each of these need not be tested individually, as we do now; the output could just be a note in English explaining what went wrong. Perhaps if that were the initial requirement, with people able to add more later, it would be faster for people to get going.
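The three certificate checks mentioned above could be sketched like this; the certificate is modelled as a plain dict, a stand-in for the parsed X.509 structure a real validator would work with:

```python
from datetime import datetime, timezone

# Sketch of the three checks named above (validity dates, SAN URIs, RSA key),
# each failure producing an English note rather than a separate test result.
# The cert dict is a stand-in for a parsed X.509 certificate.

def check_certificate(cert, now=None):
    """Return a list of English notes explaining what went wrong (empty = ok)."""
    now = now or datetime.now(timezone.utc)
    problems = []
    if not (cert["not_before"] <= now <= cert["not_after"]):
        problems.append("certificate is outside its validity period")
    if not cert.get("san_uris"):
        problems.append("no subjectAltName URI entries (no WebID claims)")
    if cert.get("key_type") != "RSA":
        problems.append("public key is not an RSA key")
    return problems

cert = {
    "not_before": datetime(2012, 1, 1, tzinfo=timezone.utc),
    "not_after":  datetime(2013, 1, 1, tzinfo=timezone.utc),
    "san_uris":   ["https://bob.example/profile#me"],
    "key_type":   "RSA",
}
print(check_certificate(cert, now=datetime(2012, 6, 1, tzinfo=timezone.utc)))
```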

>   7.2. in case of a valid certificate (if there's something wrong there's no need to proceed)
>        the profile must be tested (that actually also not the work of a validation endpoint),
>        a reference test page, that determines if a profile is ok and valid would be ideal.

   yes, I agree. Profile testing should be on a separate page.
   What I think is useful is to know the cause of why something went wrong:
    - the page could not be fetched
    - the RDF could not be parsed
    - the query did not succeed

   I asked on the EARL list whether there was a way to be more specific about causality, but there has been no reply yet.

>   7.3. in case the profile is also considered ok and spec-compliant one can try to use
>        said certificate and the profile to login at a certain validation-endpoint.
>        we then only need to have the yes/no answer from a validation-endpoint to be able
>        to see if the validation-endpoint is spec-compliant.


>   7.4. how to test all these things?

Yes, Bergi wrote a test suite, which he put into our hg repository, to test things as you point out below.

But I think what I can see here is that the wiki page confuses two roles:

A. The WebID endpoint test page,
   which produces the EARL report

B. The Verification Agent,
   which creates fake profiles and fake certificates to test the WebID endpoint

Every implementor of WebID must produce a WebID test page producing EARL reports (A.); then the Verification Agent(s) in B. can test those endpoints and produce another report.

So your endpoint would produce EARL output as shown on the wiki, so would mine, and so would Kingsley's.

Perhaps the thing to do is to go through the example again and simplify it, as well as update it to the later ontology.

 I have worked on tying the test suite directly into the authentication in read-write-web; I can try to synchronise this again.

 Does that make sense?

>        7.4.1. have a page with certificate-validation. again maybe this helps somehow [4] or points to other resources about cert-testing
>        7.4.2. have a page with profile-validation
>        7.4.3. create n-certificates (some valid, some not), with related n-profiles (some valid, some not)
>        7.4.4. create a list of expectations for these certificates (the not-valid are expected to be rejected)
>        7.4.5. foreach certificate
>                  validate certificate
>                     if valid
>                        validate profile
>                           if profile not available
>                              log and next
>                           if valid
>                              use certificate and profile against validation endpoint => answer must "accepted"
>                           if not valid
>                              use certificate and profile against validation endpoint => answer must "rejected"
>                     if not valid
>                              use certificate and profile against validation endpoint => answer must "rejected"
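That loop could be sketched roughly as follows; the three validate/login functions are stand-ins for the real fixtures from 7.4.1-7.4.3, and the case tuples carry the expectations from 7.4.4:

```python
# Sketch of the loop in 7.4.5: for each (certificate, profile, expected)
# case, try it against the validation endpoint and collect any answers
# that differ from the expectation. All function names are stand-ins.

def run_suite(cases, validate_certificate, validate_profile, endpoint_login):
    """cases: (cert, profile, expected) with expected 'accepted'/'rejected'.
    Returns the list of cases whose endpoint answer did not match."""
    failures = []
    for cert, profile, expected in cases:
        if validate_certificate(cert):
            if validate_profile(profile) == "unavailable":
                continue                      # 7.4.5: "log and next"
        answer = endpoint_login(cert, profile)
        if answer != expected:
            failures.append((cert, profile, expected, answer))
    return failures

# Toy run: a spec-compliant endpoint accepts the valid pair and
# rejects the pair with the invalid certificate.
cases = [
    ("good-cert", "good-profile", "accepted"),
    ("bad-cert",  "good-profile", "rejected"),
]
failures = run_suite(
    cases,
    validate_certificate=lambda c: c == "good-cert",
    validate_profile=lambda p: "valid",
    endpoint_login=lambda c, p: "accepted" if c == "good-cert" else "rejected",
)
print(failures)
```

An empty failure list then means the endpoint met every expectation; each entry in a non-empty list pinpoints which certificate/profile pair behaved unexpectedly.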
>   7.5. finding out why an expectation for a certain certificate wasn't fulfilled does not necessarily have to be a community effort.
>        if i run the test and see that i let a certificate pass that was expected to be rejected, it is the implementor's duty
>        to find out why.
> in the hope that at least something here is considered useful..
> wkr http://www.turnguard.com/turnguard
> [1] http://www.w3.org/2005/Incubator/webid/wiki/Test_Suite
> [2] http://www.w3.org/2005/Incubator/webid/earl/RelyingParty#profileGet
> [3] http://www.w3.org/2005/Incubator/webid/earl/RelyingParty#certificateProvidedSAN
> [4] http://csrc.nist.gov/groups/ST/crypto_apps_infra/pki/pkitesting.html
> --
> | Jürgen Jakobitsch,
> | Software Developer
> | Semantic Web Company GmbH
> | Mariahilfer Straße 70 / Neubaugasse 1, Top 8
> | A - 1070 Wien, Austria
> | Mob +43 676 62 12 710 | Fax +43.1.402 12 35 - 22
> | http://www.semantic-web.at/
> | web   : http://www.turnguard.com
> | foaf  : http://www.turnguard.com/turnguard
> | skype : jakobitsch-punkt

Social Web Architect
Received on Thursday, 5 January 2012 11:30:24 UTC

This archive was generated by hypermail 2.3.1 : Tuesday, 6 January 2015 21:06:29 UTC