
Re: JAR's exploration of TimBL's notion of information resource

From: Jonathan Rees <jar@creativecommons.org>
Date: Wed, 27 May 2009 11:53:51 -0400
Message-ID: <760bcb2a0905270853x1b0cae5fxfec63f3668fc8354@mail.gmail.com>
To: Alan Ruttenberg <alanruttenberg@gmail.com>
Cc: AWWSW TF <public-awwsw@w3.org>

By the way, there are other ways one might be able to do
ontology-based quality control.

One might be to define a class "good generic resource" of
audit-passing things (a proper subclass of generic resource) and run
tests that a site's URIs name such things. For example, a (manual) audit
could detect that the French and Spanish versions of a static page say
different things, and flag the situation, under the assumption that
all simultaneous representations of a good IR ought to say the same
thing. (There is no such restriction on generic resources.) There
ought to be similar situations where an automatic audit could
determine inconsistencies, using a reasoner.
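To make the idea concrete, here is a hypothetical sketch (in Python, not
anything the group has specified) of how an automatic audit might flag
candidate inconsistencies between simultaneous representations of one
generic resource. The function names and the similarity heuristic
(paragraph count plus a rough length ratio) are illustrative assumptions;
a real audit would use a translation-aware comparison or a human reviewer.

```python
# Hypothetical audit sketch: flag pairs of language variants of one generic
# resource whose simultaneous representations look like they "say different
# things". The heuristic here (matching paragraph counts and a crude length
# ratio) is purely illustrative, not a real notion of saying the same thing.

def looks_consistent(variant_a: str, variant_b: str,
                     length_tolerance: float = 0.5) -> bool:
    """Crude check that two representations plausibly carry the same content."""
    paras_a = [p for p in variant_a.split("\n\n") if p.strip()]
    paras_b = [p for p in variant_b.split("\n\n") if p.strip()]
    if len(paras_a) != len(paras_b):
        return False
    longer = max(len(variant_a), len(variant_b)) or 1
    shorter = min(len(variant_a), len(variant_b))
    return shorter / longer >= (1.0 - length_tolerance)

def audit_variants(variants: dict[str, str]) -> list[tuple[str, str]]:
    """Return pairs of language tags whose representations look inconsistent."""
    flagged = []
    tags = sorted(variants)
    for i, tag_a in enumerate(tags):
        for tag_b in tags[i + 1:]:
            if not looks_consistent(variants[tag_a], variants[tag_b]):
                flagged.append((tag_a, tag_b))
    return flagged
```

An auditor would feed this the retrieved French and Spanish variants and
treat any flagged pair as a candidate for human inspection, not as proof
of a violation.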

Another thing to do is a sort of ISO 9000 process, checking to see
whether a site does what it says it does. That is, look at any RDF the
site publishes about itself (metadata), and see whether it lives up to
its own promises. For example, if the site says that X is a fixed
resource, and it isn't, then that's a lie (mistake) and could be
flagged. Checking for this statically might be hard, but a dynamic
check or preventive harness might be an alternative.
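A dynamic check of that kind could be very simple. The sketch below is a
hypothetical illustration, assuming byte-level equality as the criterion
for "hasn't changed"; the class name and that criterion are my assumptions,
not part of any published audit procedure.

```python
import hashlib

# Hypothetical dynamic check of the "fixed resource" promise: a site that
# declares a resource fixed is claiming its representation never changes,
# so observing two different representations over time contradicts the
# site's own metadata. Byte-level hashing as the change criterion is an
# assumption made for this sketch.

class FixedResourceChecker:
    def __init__(self) -> None:
        self._seen: dict[str, str] = {}  # URI -> first observed digest

    def observe(self, uri: str, representation: bytes) -> bool:
        """Record an observed representation. Returns True if the 'fixed'
        claim still holds for this URI, False once it has been violated."""
        digest = hashlib.sha256(representation).hexdigest()
        first = self._seen.setdefault(uri, digest)
        return first == digest
```

Each retrieval result for a declaredly fixed resource would be passed
through observe(); a False result is exactly the "lie (mistake)" that the
audit would flag.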

I think the reason "generic resource" ends up being so inclusive is
that it can't exclude any ordinary web pages - including those that
don't live up to any webarch-like quality standard. (It can exclude
things like http://purl.org/dc/elements/1.1/creator and
http://www.w3.org/2001/XMLSchema that are clearly being used to name
other things.) This is the legacy of RDF's genesis as a metadata
language for web pages.

On Tue, May 26, 2009 at 1:18 PM, Alan Ruttenberg
<alanruttenberg@gmail.com> wrote:
> For the moment I will simply point out that one of the motivations for
> this group was that we wanted to be precise about how one would do an
> audit to see whether a server was conforming to web architecture.
> It seems to me your answer is "you look at the representations and
> decide". That isn't a particularly satisfying answer, but if that's
> it, then we might as well shut the thing down and declare success.
>
> -Alan
>
> On Tue, May 26, 2009 at 1:10 PM, Jonathan Rees <jar@creativecommons.org> wrote:
>>> Right, and I have no way to assess (so far) what is or isn't evidence
>>> other than the response code being 200.
>>
>> You look at it. If G is Moby Dick, and the entity isn't a wa-representation
>> of Moby Dick, then the URI isn't meant to name Moby Dick.
>
Received on Wednesday, 27 May 2009 15:54:29 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Wednesday, 27 May 2009 15:54:30 GMT