Re: ISSUE-95 Discussions

On 22/01/2016 4:48 AM, Arthur Ryman wrote:
>> Some suggestions on the vocabulary's format:
>>
>> - Following other vocabularies such as SKOS, the graph URI and owl:Ontology
>> should be http://www.w3.org/ns/shacl, without #. We don't need to use
>> owl:Ontology for this pure RDFS model - it could just be an rdfs:Resource.
> I am following the W3C conventions. Have a look at, e.g.
> https://www.w3.org/2000/01/rdf-schema#
> Here is a snippet:
>
> <http://www.w3.org/2000/01/rdf-schema#> a owl:Ontology ;
> dc:title "The RDF Schema vocabulary (RDFS)" .
>
> rdfs:Resource a rdfs:Class ;
> rdfs:isDefinedBy <http://www.w3.org/2000/01/rdf-schema#> ;
> rdfs:label "Resource" ;
> rdfs:comment "The class resource, everything." .
>
> rdfs:Class a rdfs:Class ;
> rdfs:isDefinedBy <http://www.w3.org/2000/01/rdf-schema#> ;
> rdfs:label "Class" ;
> rdfs:comment "The class of classes." ;
> rdfs:subClassOf rdfs:Resource .
>
> Note the use of owl:Ontology and rdfs:isDefinedBy. The ontology URI is
> the same as the namespace. The terms defined by the ontology are
> linked to the ontology using rdfs:isDefinedBy.

I have checked with Richard, who I assume knows a bit about common
Linked Data patterns. His experience is similar to mine: he believes
that, while there is no official rule, the practice of using # in the
owl:Ontology URI is uncommon nowadays. The reason is that the
owl:Ontology resource represents the graph, which should be equivalent
to the URL from which the vocabulary is imported. When you send someone
a link, you don't write http://example.org/page.html# (even though both
forms resolve to the same page); similarly, the canonical
representation of a graph should be the URI without a trailing #.

I'd be happy to turn this into a formal ISSUE if you continue to think # 
is the right way forward.
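
For concreteness, the header I have in mind would look roughly like
this (I keep owl:Ontology here just for the comparison, and the label
string is only a placeholder; term URIs of course still use the #
namespace):

  @prefix owl:  <http://www.w3.org/2002/07/owl#> .
  @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
  @prefix sh:   <http://www.w3.org/ns/shacl#> .

  # ontology/graph URI without the trailing #
  <http://www.w3.org/ns/shacl>
      a owl:Ontology ;
      rdfs:label "SHACL Vocabulary" .

  # term URIs keep the # namespace as before
  sh:Shape
      a rdfs:Class ;
      rdfs:label "Shape" .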

>
>
>> - I think we should minimize dependencies on other namespaces, i.e.
>> references to oslc should be removed. Likewise, dcterms can often be
>> replaced with rdfs:label and rdfs:comment.
> Disagree. We should make use of other namespaces when they define the
> terms we need. Again, refer to the RDF vocabulary which uses DC. The
> OSLC namespace is only there for defining namespace prefixes which
> appear in the HTML generated from the Turtle.

Ok, we can remove the OSLC namespace once the document is ready for 
final publication.

>
>> - When we use rdf:XMLLiteral, then the values should be valid XML (which
>> isn't the case in your file), so we should either switch to rdf:HTML or use
>> plain text (xsd:string without hyperlinks).
> They are valid XML fragments AFAIK. The RDF spec does not require them
> to have root elements. Where do you think they are not valid XML? I am
> surprised that Jena+XSLT would work with invalid XML. The use of
> XMLLiteral versus HTML is historical since HTML was only added
> recently. Also, XML is easy to process. The conversion to an HTML
> document is currently done by converting the Turtle to RDF/XML using
> Jena, and then applying an XSLT to the XML to generate HTML. These
> files are also in github. A more modern approach would be to extend
> ReSpec and do this all in Javascript. Let's leave the XMLLiteral in
> until we can replace the XSLT with Javascript.

My version of Jena complained about these literals, so I asked Andy.
His response:

    The line ending is \r\n and the c14n form is \n
    The lexical space of valid XML Literals is canonicalized strings.  It's
    a warning - ignorable.
    And in this case it is a saviour for you - to me on Linux I have a
    different XML literal to you.

So the problem is the line breaks; rdf:HTML doesn't seem to have this
problem. While the current state of a code generator obviously should
not dictate the triples in our W3C proposal, I am OK with temporary
hacks. I am surprised, though: why would the HTML generator only work
with rdf:XMLLiteral and not with rdf:HTML?
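
For reference, the switch itself would only change the datatype of the
affected literals, roughly like this (the comment text is just a
placeholder, and the \r\n stands for the Windows line endings Andy
mentioned):

  @prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
  @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
  @prefix sh:   <http://www.w3.org/ns/shacl#> .

  # current form - Jena warns because "\r\n" is not the canonical "\n"
  sh:Shape rdfs:comment "A <em>shape</em> groups constraints.\r\n"^^rdf:XMLLiteral .

  # possible form - same markup, seemingly without the c14n warning
  sh:Shape rdfs:comment "A <em>shape</em> groups constraints.\r\n"^^rdf:HTML .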

>
>> - rdfs:isDefinedBy is just noise and extra maintenance burden for now; I
>> would drop them.
> No, rdfs:isDefinedBy is the way to link an RDF term with its ontology.
> My XSLT relies on that. It also lets vocab information live in a
> triple store with other vocabs. You can then get all the terms for a
> given vocab using a SPARQL query.

Again, I don't like carrying around extra triples just for the sake of
a particular XSLT implementation. These triples are trivial to
auto-generate at any point in time. Having said that, for the sake of
making progress I will try to edit them in (although I expect this to
be error-prone). It would be better to leave them out for now and add
them back in on the day before publication.
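
To be clear about what editing them in would amount to: it is one extra
triple per term, along these lines (the term names are just examples,
and I am using the ontology URI without the trailing #, per my earlier
point):

  @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
  @prefix sh:   <http://www.w3.org/ns/shacl#> .

  # one extra triple per term, pointing back at the vocabulary
  sh:Shape    rdfs:isDefinedBy <http://www.w3.org/ns/shacl> .
  sh:property rdfs:isDefinedBy <http://www.w3.org/ns/shacl> .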

Holger

Received on Sunday, 24 January 2016 23:46:06 UTC