W3C home > Mailing lists > Public > public-xg-webid@w3.org > January 2012

Re: WebID TestSuite Comments

From: bergi <bergi@axolotlfarm.org>
Date: Fri, 13 Jan 2012 21:53:18 +0100
Message-ID: <4F1099BE.4090209@axolotlfarm.org>
To: Henry Story <henry.story@bblfish.net>
CC: Jürgen Jakobitsch <j.jakobitsch@semantic-web.at>, public-xg-webid@w3.org
On 05.01.2012 12:29, Henry Story wrote:
> On 5 Jan 2012, at 10:59, Jürgen Jakobitsch wrote:
>> hi,
>> the webIDTestSuite [1] came up again (see the mail with subject
>> "WebIDRealm supports RDFa") and i think it's well worth its own
>> thread.
>> following are some comments and suggestions.
> Thanks, that is already very helpful.
>> 1. the page is kind of out of date and should be updated (see
>> rsa#modulus example)
> yes. That was done in November before we changed the ontology.
> I fixed that on the wiki.
>> 2. i personally think the output of the test as graph is a little
>> complicated, first to produce and second to parse and interpret.
> yes. I am not sure if it is EARL that is too complicated; it could
> also be that I just copied and pasted the output of Clerezza into
> the wiki without making it nice and compact. The Turtle looks a bit
> like NTriples there.
> I fixed that on the wiki.
> The examples are still not the best ones. We should work out what the
> 5-6 earl tests that need to be implemented are, then I can write them
> out as an example.
>> 3. as i understand the workflow of the test it is suboptimal
>> (please correct me if i'm wrong): when interpreting the test
>> result, one has to infer from the earl#outcome of a result that
>> something went wrong - what exactly went wrong is not known.
> There is the dc:description of the earl:Result, but it was hidden in
> a URL forest.
>> 4. i'm not sure if it is useful to return passed tests at all ("no
>> message is a good message"). i would only be interested in tests
>> that fail (in case of a failure, i want to know what exactly went
>> wrong)
> I partly agree. I think there are certain tests where one wants to
> know whether they passed, say for each WebID claim, but one does
> not really want to know how they succeeded when they did. Though
> one may be very interested in the cause in case of failure.

2, 3 and 4 could be solved with Henry's proposal to simplify the EARL
output, which he already mentioned in his comments on 7.2:

The output is also the only reason why I haven't updated the code for a
while. If we can agree on a simpler version of the EARL output, you can
be sure there will soon be a new version of the test suite.
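To make the discussion concrete, here is a rough sketch of what a compact per-assertion EARL report could look like, rendered as Turtle from Python. The test URI and the description text are placeholders I made up, not the suite's real identifiers:

```python
# Sketch: one compact earl:Assertion per test, with the failure cause
# carried in dc:description. The URI and message are illustrative only.
def earl_assertion(test_uri, outcome, description):
    """Render a single earl:Assertion with its result as compact Turtle."""
    return (
        "[] a earl:Assertion ;\n"
        f"   earl:test <{test_uri}> ;\n"
        "   earl:result [\n"
        "       a earl:TestResult ;\n"
        f"       earl:outcome earl:{outcome} ;\n"
        f'       dc:description "{description}"\n'
        "   ] .\n"
    )

print(earl_assertion(
    "https://example.org/webid-tests#profileGet",   # hypothetical test URI
    "failed",
    "Could not fetch the WebID profile: HTTP 404"))
```

One blank-node assertion per test keeps the report short and easy to eyeball, while a consumer can still query it with SPARQL.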

>> 5. i'm not sure if the test really tests what needs to be tested.
>> see [2] for example. if a validator is about to be tested it's not
>> the validator's fault if the webID is not available. it is a
>> prerequisite that the webID is available. currently the =>
>> endpoint's test <= has a "not passed" in the result if the webID is
>> not available.
> The tests are not about assigning blame to the validator necessarily.
> They are about understanding where the problem is. If someone can't
> log into your site, it would be very helpful if they could know why
> this was not possible. Not being able to fetch a Profile is important
> for a user to know. This may lead him, say, to reboot his server...

The test page is for two use cases: automatic tests, and users who want
to check whether their WebID profile and certificate are OK. And if
they are not OK, they want to know what's wrong.

The test suite already contains a test case, MissingFOAF. The idea was
that a Verification Agent could have a bug, ignore the missing profile,
and accept the WebID contained in the SAN of the certificate.

>> 6. many of the tests don't actually test the validator but other
>> things. see [3] for example. this test tests a certificate.
> There are some things that are not necessary. Bergi marks those
> that are no longer needed with stripes in the diagram on
> http://www.w3.org/2005/Incubator/webid/wiki/Test_Suite But those are
> still open to discussion.
>> 7. so the real test for a validation endpoint looks kind of
>> different to me : "in the case of a valid certificate and a valid
>> profile (where one of the webIDClaims in the certificate points to
>> said valid profile) a validation-endpoint must authenticate" (not
>> authorize to do something). 7.1. a valid certificate must be tested
>> on its own. maybe this page [4] could help.
> yes, the pkitesting service could give more in-depth information
> about the certificate chain. What I think we do want to know is
> whether the server thinks the certificate is valid (date ok, SANs
> available, RSA key). But perhaps each of these does not need to be
> tested individually as we are doing now; the output could be just an
> English note explaining what went wrong. Perhaps if that were just
> the initial requirement, with people able to add more, it would be
> faster for people to get going.
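A minimal sketch of those three checks (validity dates, SAN present, RSA key), returning English notes as suggested. The inputs are assumed to already come from an X.509 parser, which I leave out here; the function name is hypothetical:

```python
# Hedged sketch: turn the three per-certificate checks into plain-English
# notes. not_before/not_after are timezone-aware datetimes, san_uris the
# URIs from subjectAltName, key_type the public key algorithm name.
from datetime import datetime, timezone

def certificate_problems(not_before, not_after, san_uris, key_type):
    """Return one human-readable note per failing check (empty = cert ok)."""
    problems = []
    now = datetime.now(timezone.utc)
    if not (not_before <= now <= not_after):
        problems.append("certificate is outside its validity period")
    if not san_uris:
        problems.append("no subjectAltName URI (WebID) in the certificate")
    if key_type != "RSA":
        problems.append("public key is not an RSA key")
    return problems
```

The list of notes could then go straight into a dc:description of the test result instead of three separate EARL assertions.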
>> 7.2. in case of a valid certificate (if there's something wrong
>> there's no need to proceed) the profile must be tested (that is
>> actually also not the work of a validation endpoint). a reference
>> test page that determines if a profile is ok and valid would be
>> ideal.
> yes, I agree. Profile testing should be on a separate page.

I would not separate it. First, it is more difficult for the test
suite. Second, a user may want to check his whole WebID (WebID profile
and certificate) and doesn't care too much about the two parts of it.
But we could at least offer these options. For example, if you scroll
down to the bottom of my test page there is a form to inject a
certificate. It could also contain a text field for the WebID.


> What I think is useful is to know the cause of why something went
> wrong:
> - the page could not be fetched
> - there was a parsing mistake in the RDF
> - the query did not succeed
> I asked on the EARL list if there was a way to be more specific
> about causality:
> http://lists.w3.org/Archives/Public/public-earl10-comments/2011Dec/0000.html
> but there has been no reply yet.
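Those three causes (fetch, parse, query) could be modelled roughly like this. All names, including the FailureCause values, are assumptions for illustration, not the test suite's actual API:

```python
# Sketch: verify one WebID claim and report WHICH step failed, so the
# EARL result can carry a cause instead of a bare earl:failed outcome.
from enum import Enum

class FailureCause(Enum):
    FETCH = "the WebID profile could not be fetched"
    PARSE = "the profile RDF could not be parsed"
    QUERY = "the key query did not match the certificate key"

def check_claim(fetch, parse, query_keys, webid, cert_key):
    """Return the FailureCause for one WebID claim, or None on success."""
    doc = fetch(webid)           # e.g. an HTTP GET on the profile URI
    if doc is None:
        return FailureCause.FETCH
    graph = parse(doc)           # e.g. a Turtle/RDFa parser
    if graph is None:
        return FailureCause.PARSE
    if cert_key not in query_keys(graph, webid):
        return FailureCause.QUERY
    return None                  # claim verified
```

The cause's description string is exactly what a user (or another test agent) would want to see in the report.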
>> 7.3. in case the profile is also considered ok and spec-compliant
>> one can try to use said certificate and the profile to login at a
>> certain validation-endpoint. we then only need to have the yes/no
>> answer from a validation-endpoint to be able to see if the
>> validation-endpoint is spec-compliant.
> yes.

If we have the cause of a failure, it is more likely that the
Verification Agent really failed because of the error we wanted to
produce, and not because of something else.

>> 7.4. how to test all these things?
> Yes, Bergi wrote a test suite, which he put into our hg repository
> to test things as you point out below.
> https://dvcs.w3.org/hg/WebID/file/8b4299a27d41/tests/code
> But I think what I can see here is that there is a confusion on the
> wiki page between two roles:
> A. The WebID endpoint test page, which produces the earl report.
> B. The Verification Agent, which creates fake profiles and fake
> certificates to test the WebID endpoint.
> Every implementor of WebID must produce a WebID test page producing
> earl reports (A.); then the Verification Agent(s) in B. can test
> those endpoints and produce another report.
> So your endpoint would produce EARL output as shown on the wiki, so
> would mine, so would Kingsley's, etc...
> Perhaps the thing to do is to go through the example
> http://www.w3.org/2005/Incubator/webid/earl/RelyingPartyExample.n3
> again, simplify it, and update it to the later ontology.
> I have worked on tying the test suite directly into the
> authentication in read-write-web; I can try to synchronise this
> again.
> Does that make sense?
> Henry
>> 7.4.1. have a page with certificate-validation. again, maybe this
>> [4] helps somehow or points to other resources about cert-testing
>> 7.4.2. have a page with profile-validation
>> 7.4.3. create n certificates (some valid, some not), with related
>> n profiles (some valid, some not)
>> 7.4.4. create a list of expectations for these certificates (the
>> not-valid ones are expected to be rejected)
>> 7.4.5. for each certificate: validate the certificate; if valid,
>> validate the profile; if the profile is not available, log it and
>> continue with the next one; if everything is valid, use the
>> certificate and profile against the validation endpoint => the
>> answer must be "accepted"; if not valid, use the certificate and
>> profile against the validation endpoint => the answer must be
>> "rejected"
>> 7.5. finding out why an expectation for a certain certificate
>> wasn't fulfilled does not necessarily have to be a community
>> effort. if i run the test and see that i let pass a certificate
>> that is expected to be rejected, it is the implementor's duty to
>> find out why.
>> in the hope that at least something is considered useful..

It is! And thanks for awakening this sleeping topic! Besides helping
developers to program spec-compliant Verification Agents, I have
learned in the last telco that a test suite is also one of the
requirements for a full WG.
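For illustration, the per-certificate loop from 7.4.5 could be sketched like this. All helpers passed in are hypothetical; the validation endpoint is modelled as a callable answering "accepted" or "rejected":

```python
# Sketch of the 7.4.5 loop: run every fixture certificate/profile pair
# against a validation endpoint and compare the answer with the
# expectation derived from the fixture's own validity.
def run_fixtures(fixtures, validate_certificate, profile_available,
                 validate_profile, ask_endpoint, log):
    for cert, profile in fixtures:
        valid = validate_certificate(cert)
        if valid:
            if not profile_available(profile):
                log(f"profile for {cert} not available, skipping")
                continue
            valid = validate_profile(profile)
        expected = "accepted" if valid else "rejected"
        answer = ask_endpoint(cert, profile)
        status = "PASS" if answer == expected else "FAIL"
        log(f"{cert}: expected {expected}, got {answer} -> {status}")
```

Per 7.5, a FAIL line is only the starting point; digging into why the endpoint disagreed remains the implementor's job.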

>> wkr http://www.turnguard.com/turnguard
>> [1] http://www.w3.org/2005/Incubator/webid/wiki/Test_Suite
>> [2] http://www.w3.org/2005/Incubator/webid/earl/RelyingParty#profileGet
>> [4] http://csrc.nist.gov/groups/ST/crypto_apps_infra/pki/pkitesting.html
>> | Jürgen Jakobitsch
>> | Software Developer
>> | Semantic Web Company GmbH
>> | Mariahilfer Straße 70 / Neubaugasse 1, Top 8
>> | A - 1070 Wien, Austria
>> | Mob +43 676 62 12 710
>> | Fax +43.1.402 12 35 - 22
>> COMPANY INFORMATION | http://www.semantic-web.at/
>> PERSONAL INFORMATION
>> | web   : http://www.turnguard.com
>> | foaf  : http://www.turnguard.com/turnguard
>> | skype : jakobitsch-punkt
> Social Web Architect http://bblfish.net/
Received on Friday, 13 January 2012 20:58:42 UTC
