test suite and "loading root files" - countering lack of independence in trials

The thread on Apple products and SSL pointed out something we discussed in
the last call: that test results are not independent of the "loaded root
file" on the resource server. Put more plainly, the tests depend on the
configured roots - where folks now understand that different virtual SSL
endpoints can have different root lists configured on servers. These root
lists variously affect both the browser/server SSL behavior and the
server/FOAF-repository channel.

 

It's not only that loading a root file per se onto a virtual directory may
or may not make a particular Apple product version behave like Mozilla; it's
that, obviously, the contents of that file also impact the test - in various
per-config and per-product ways.
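
To make the dependence concrete, here is a minimal Python sketch (not code
from any existing suite; the host name and bundle paths are invented) showing
that verification of one and the same peer succeeds or fails purely as a
function of which root file was loaded:

    import socket
    import ssl

    def can_verify(host, root_file):
        """True if `host` presents a chain that validates against `root_file`."""
        ctx = ssl.create_default_context(cafile=root_file)
        try:
            with socket.create_connection((host, 443), timeout=10) as sock:
                with ctx.wrap_socket(sock, server_hostname=host):
                    return True
        except ssl.SSLCertVerificationError:
            return False

    # Same endpoint, two different "loaded root files", potentially two answers:
    can_verify("foaf.example.org", "/etc/ssl/vendor-roots.pem")    # distro/vendor defaults
    can_verify("foaf.example.org", "/srv/webid-tests/roots.pem")   # roots loaded for this virtual endpoint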

 

For example, if one adds CAcert to the root file, bergi's tests will no
longer fail due to the inability of resource servers to merely connect (over
SSL) to the subscriber's FOAF card. If one accepts the judgment of whoever
formulated the trust file originally (someone making a Linux distribution,
hosting the test suite), it will fail (since CAcert is rarely included in
vendors' root files).
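
The failing step looks roughly like the following sketch - purely
illustrative, with a made-up card URL and bundle path - where the resource
server dereferences the subscriber's FOAF card over HTTPS, and the outcome
hinges solely on whether the CAcert root happens to be in the bundle used:

    import ssl
    import urllib.request

    FOAF_CARD = "https://bergnet.example/card#me"   # hypothetical subscriber card
    ROOT_FILE = "/srv/webid-tests/roots.pem"        # hypothetical trust file

    ctx = ssl.create_default_context(cafile=ROOT_FILE)
    try:
        with urllib.request.urlopen(FOAF_CARD, context=ctx) as resp:
            card = resp.read()                      # the test can proceed
    except ssl.SSLCertVerificationError:
        card = None                                 # the test "fails" before it really starts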

 

Now Henry teaches us that the federated social network should be a
crowdsourced type of environment - to be webby: what the vast majority do in
trust is what should be the status quo, as reflected in vendors' default
config files, say. Since the vast majority of browser vendors are prejudiced
against CAcert, as are many of the Linux distribution vendors, CAcert is
evidently "tainted."

 

Actually, CAcert is not tainted; it is discriminated against by vendors who
believe that million-dollar audits should be the deciding factor.

 

If one looks at the behavior in the vendor community (perhaps some 50
people all in all, influenced by perhaps 500 funding sources), one finds a
strong bias - a follow-the-leader type of model ("lemmings over the cliff").
Thus, the 50 cases are not very independent. Evidently, the 6 billion other
opinions don't count, and of the 50 that do, only 5 are arguably independent
(with Microsoft setting the standard for objectivity). Controlling the
opinion of Mozilla directly impacts the test results seen by 6 billion. Even
Microsoft can be swayed, having recently eliminated lots of CAs (to reduce
"consumer confusion").

 

Ignoring what folks do operationally based on vendor biases, what do we do
in the test suite? How do we counter the lack of independence? Is the CAB
Forum fair and objective, perhaps? Should we be doing only that which the US
govt advocates? (Bye-bye Cuban CAs and Palestinian CAs, if so.)

 

We could state our own trust file, of course, to be used so that the
assumptions of the test are common - removing the platform/Linux-distribution
dependencies, at least?
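
One hedged sketch of what that might look like in practice (the file names,
and whether CAcert ends up on the list, are exactly the open questions, not
decisions being made here):

    from pathlib import Path
    import ssl

    # PEM files the group has agreed to trust for the purposes of the test suite.
    AGREED_ROOTS = [
        Path("roots/startcom.pem"),
        Path("roots/cacert.pem"),    # include or omit - this is the policy question below
    ]

    # Concatenate them into one project-defined trust file ...
    bundle = Path("webid-test-roots.pem")
    bundle.write_text("".join(p.read_text() for p in AGREED_ROOTS))

    # ... and have every client and server in the suite load this bundle instead
    # of whatever the platform or Linux distribution happens to ship.
    ctx = ssl.create_default_context(cafile=str(bundle))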

 

Would *we* include CAcert, if we made our own root list - so that a test on
bergi's endpoint gets the "correct" answer? How would we decide to do that
(it's not exactly a chartered role)?

 

A few models come to mind:

- the pope-choosing model (conclave)
- the old Olympic-city-choosing model (old-boy-network bribery)
- the FIFA model (maximize sporting innovation and new avenues)
- the W3C model (paying organizations get preference votes)
- the old ISO model (the US State Dept gives instructions, withdrawing perks
  from individuals who deviate)
- the traditional security model (an anonymous "expert group" decides, using
  secret protocols)

 

Let's never forget that majorities tend to oppress minorities. CAcert is a
lovely example of a minority CA operating on social community principles. It
OUGHT to be something federated social web folk would like. But it's
actually discriminated against by the Web vendor community - operating on
corporate principles.

 

Hopefully the Berlin paper on WebID, focused on the social web, can address
the issue - finding a new way to approach the problem, featuring something
that the semantic web can do that other, vendor-centric approaches to trust
do not (or cannot, given the money trail).
