Re: [BioRDF] URI Resolution

On Feb 5, 2007, at 1:49 PM, Xiaoshu Wang wrote:
>> Indeed. What I spelled out could be implemented in javascript and  
>> the javascript put in the rdf directly. Then it would be simply a  
>> matter of saying javascript "eval". Think ahead!
> So, an email client needs to run Javascript or something like that.
Is that a problem? There are free implementations of javascript. Most
modern mail clients have one anyway, in order to deal with HTML
mail. Many phones have one now, or will soon. Any barrier to this
will be overcome by technology advances in the very short term, it
seems to me.
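To make the idea concrete, here is a minimal sketch of a client eval'ing a resolution procedure shipped inside the RDF. Everything here is hypothetical and mine, not part of any proposal in this thread: the ex:resolverScript predicate, the in-memory triple representation, and the convention that the script returns the location to fetch.

```javascript
// Hedged sketch: resolve a URI by eval'ing a script literal that travels
// with the data. The predicate name (ex:resolverScript) and the triple
// shape are assumptions; the point is only that the resolution procedure
// is shipped in the RDF and the client just needs a javascript engine.
function resolve(uri, triples) {
  // Look for a script literal attached to this resource, if any.
  const t = triples.find(
    (t) => t.subject === uri && t.predicate === "ex:resolverScript"
  );
  if (!t) return uri; // no procedure supplied: fall back to the URI itself
  // Eval the shipped procedure; it maps the URI to a fetchable location.
  const proc = eval("(" + t.object + ")");
  return proc(uri);
}

const triples = [
  {
    subject: "http://foo.com/#foo_1",
    predicate: "ex:resolverScript",
    object: '(u) => u.replace("http://foo.com/", "http://blarg.com/")',
  },
];

console.log(resolve("http://foo.com/#foo_1", triples));
// → "http://blarg.com/#foo_1"
```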

>   Isn't this proposing some uniform treatment of URIs? Because
> otherwise, where is the interoperability?
Not sure what you mean. I think the answer is: "of course, that's
what this conversation has been about".

> Sure, RDF is good at representing knowledge.  But should we abuse
> RDF to do things that it should not do? I remember a joke somewhere
> on the web about using XML to encode IP addresses. Sure, it could
> be done, right? But there is a reason that we don't.
What is that reason? Is the reason still valid?

>> Not logic, procedure. But javascript will do. So I am not worried.
>> I am already advocating that OWL include some sort of safe (in the
>> computational complexity sense) computed property values. So let's
>> anticipate something like a property definition that is a bit of
>> javascript code able to do SPARQL queries on the triple store in
>> which it is embedded. If we have this we have all we need - we no
>> longer have to ask for the method and then interpret it ourselves -
>> we just ask for the property value and the (extended) DL reasoner
>> runs the javascript and returns the result.
> DNS is a procedure that resolves names into IP addresses. Now, in
> your case, every machine on the internet should be backed by an RDF
> engine.
Every machine *that will traffic in RDF* should be backed by an RDF
engine. But then, that stands to reason.
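As a sketch of the computed-property idea above: the ontology attaches a bit of javascript instead of a stored value, and the (extended) reasoner runs it against the surrounding triple store. The store API, the tiny match() stand-in for a SPARQL engine, and the ex:partOf property are all my hypothetical assumptions, not anything specified in this thread.

```javascript
// Hedged sketch of a "computed property value". The store here is a toy;
// match() stands in for a real SPARQL query over the embedding triple store.
const store = {
  triples: [
    { s: "ex:foo", p: "ex:partOf", o: "ex:bar" },
    { s: "ex:bar", p: "ex:partOf", o: "ex:baz" },
  ],
  match(s, p) {
    return this.triples.filter((t) => t.s === s && t.p === p).map((t) => t.o);
  },
};

// The "property definition" shipped as code: transitive closure of
// ex:partOf. The seen-set guarantees termination, in the spirit of the
// "safe in the computational complexity sense" requirement.
function computedAncestors(subject, store) {
  const seen = new Set();
  let frontier = [subject];
  while (frontier.length) {
    const next = frontier
      .flatMap((s) => store.match(s, "ex:partOf"))
      .filter((o) => !seen.has(o));
    next.forEach((o) => seen.add(o));
    frontier = next;
  }
  return [...seen];
}

console.log(computedAncestors("ex:foo", store)); // → ["ex:bar", "ex:baz"]
```

The client never sees the procedure: it asks for the property value and gets the computed result back as if it had been stored all along.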

>> Yes. Good point - you are starting to get it. I forgot to include
>> that triple. So 3 triples are sent instead of the 2 I initially
>> wrote.
> So, the entire RDF document will be tripled?
No. Only messages which were formerly a single URI sent between
agents are tripled. In larger documents you push the resolution
information into a class and let it be inherited. But it is true that
I expect each URI in a document to have an rdf:type associated with
it. Typically they do.

> I thought David and Mathias were saying that shouldn't happen if
> you describe the semantics of the string that forms the URI.  But
> please note the difference: one is an Ontology describing the
> meanings of strings, which in turn describe the meanings of some
> resources.

I don't know what you mean by this.

>> The attached "chunk" includes the ontology in the slides, as well  
>> as the descriptions of the resources. So you have an existence  
>> proof. I could also split this into two "chunks" one which was the  
>> uri ontology, and the second the specific resources being  
>> described which owl:imports the other. Not really sure what's  
>> confusing here - please explain what the problem is.
> Let's assume I have the following statements, originally located at
> "http://foo.com/":
>
> http://foo.com/#foo_1 rdfs:subClassOf http://foo.com/#foo_2.
> http://foo.com/#foo_2 rdfs:subClassOf http://foo.com/#foo_3.
> ...
> http://foo.com/#foo_n-1 rdfs:subClassOf http://foo.com/#foo_n.
> http://foo.com/#foo_n rdfs:subClassOf http://bar.com/#bar_1.
>
> Now, let's assume http://foo.com/ is moved someplace else. Don't
> you have to explicitly describe all the resources one by one? How
> do you describe them in a chunk?

You would add

Class(http://bar.com/#bar_1 Partial Restriction(getMethod hasValue
(transformingURIRetrievalForFoo)))
Individual(transformingURIRetrievalForFoo
     type(transformingURIRetrieval)
     value(matchingPattern "http://foo.com/(.*)")
     value(replacementPattern "http://blarg.com/$1"))
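Operationally, applying that individual amounts to a pattern rewrite. Here is a minimal javascript sketch; the relocate() function and the object shape are my own illustration, with only the matchingPattern / replacementPattern names and literals taken from the OWL fragment above, and $1-style back-reference semantics assumed.

```javascript
// Hedged sketch: rewrite a failing URI to its new location using the
// matchingPattern / replacementPattern of a transformingURIRetrieval.
function relocate(uri, retrieval) {
  const re = new RegExp(retrieval.matchingPattern);
  if (!re.test(uri)) return null; // this retrieval method doesn't apply
  return uri.replace(re, retrieval.replacementPattern);
}

const transformingURIRetrievalForFoo = {
  matchingPattern: "http://foo.com/(.*)",
  replacementPattern: "http://blarg.com/$1",
};

console.log(relocate("http://foo.com/#foo_1", transformingURIRetrievalForFoo));
// → "http://blarg.com/#foo_1"
```

Because the restriction is stated once on the class and inherited, one such individual relocates every URI matching the pattern, which is the "describe them in a chunk" answer.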

How do you get this information? Several possibilities - one is that  
the original publisher of the resource had the foresight to include  
an additional getMethod that, like LSID, is able to go to another  
resource to look up a new location if the current location fails. Or  
perhaps a group of parties interested in http://foo.com has  
collectively archived foo.com so that when it goes down they have  
some recourse, and you are part of this group. Or perhaps the company  
that employs you has made a cache of the site because of the  
importance of the resource to their business. Nothing is for free,  
but the point is that there is now some mechanism to deploy the  
relocation information if such information is available.

I expect that should we start building this ontology there will be no
shortage of mechanisms proposed to try to ensure the longevity of a
resource. Some of these will rely on redundant technology, others
will depend on social organizations, and others... we'll see what
happens.

-Alan

Received on Monday, 5 February 2007 19:25:28 UTC