- From: Tim Berners-Lee <timbl@w3.org>
- Date: Mon, 1 Oct 2007 13:55:31 -0400
- To: "Luciano, Joanne S." <jluciano@mitre.org>
- Cc: Michael Schneider <schneid@fzi.de>, Emanuele D'Arrigo <manu3d@gmail.com>, Semantic Web Interest Group <semantic-web@w3.org>, "public-owl-dev-request@w3.org" <public-owl-dev@w3.org>
Python: With cwm you can do the same thing: store data and ontology
in the same store.
You can also put it into a mode in which it will suck in the
ontologies automatically
Javascript: The tabulator RDF library pulls ontologies into the store
and does limited inference automatically.
Tim BL
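To make the idea above concrete: here is a minimal pure-Python sketch (hypothetical code, not cwm's actual API; all names are invented) of a single store holding both ontology and instance triples, with a forward-chaining loop in the spirit of cwm's --think that keeps applying rules until no new conclusions appear.

```python
# Ontology ("schema") triples and instance data live in one triple store;
# a fixpoint loop derives new facts, as cwm's --think does.

SUBCLASS = "rdfs:subClassOf"
TYPE = "rdf:type"

store = {
    # ontology triples
    ("ex:Dog", SUBCLASS, "ex:Mammal"),
    ("ex:Mammal", SUBCLASS, "ex:Animal"),
    # instance data, in the very same store
    ("ex:fido", TYPE, "ex:Dog"),
}

def think(triples):
    """Apply RDFS-style rules until no more conclusions appear."""
    triples = set(triples)
    while True:
        new = set()
        for s, p, o in triples:
            if p == TYPE:
                # rdfs9: ?x a ?c . ?c rdfs:subClassOf ?d  =>  ?x a ?d
                for s2, p2, o2 in triples:
                    if p2 == SUBCLASS and s2 == o:
                        new.add((s, TYPE, o2))
            elif p == SUBCLASS:
                # rdfs11: rdfs:subClassOf is transitive
                for s2, p2, o2 in triples:
                    if p2 == SUBCLASS and s2 == o:
                        new.add((s, SUBCLASS, o2))
        if new <= triples:      # fixpoint reached
            return triples
        triples |= new

closure = think(store)
print(("ex:fido", TYPE, "ex:Animal") in closure)  # True
```

With cwm itself the equivalent is simply loading both files and running --think, as the help text below shows.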
$ cwm --help
Command line RDF/N3 tool
<command> <options> <steps> [--with <more args> ]
options:
--pipe Don't store, just pipe out *
steps, in order left to right:
--rdf           Input & Output ** in RDF/XML instead of n3 from now on
--n3            Input & Output in N3 from now on. (Default)
--rdf=flags     Input & Output ** in RDF and set given RDF flags
--n3=flags      Input & Output in N3 and set N3 flags
--ntriples      Input & Output in NTriples (equiv --n3=usbpartane -bySubject -quiet)
--language=x    Input & Output in "x" (rdf, n3, etc); --rdf same as: --language=rdf
--languageOptions=y   --n3=sp same as: --language=n3 --languageOptions=sp
--ugly          Store input and regurgitate, data only, fastest *
--bySubject     Store input and regurgitate in subject order *
--no            No output *
                (default is to store and pretty print with anonymous nodes) *
--base=<uri>    Set the base URI. Input or output is done as though this were the document URI.
--closure=flags Control automatic lookup of identifiers (see below)
<uri>           Load document. URI may be relative to current directory.
--apply=foo     Read rules from foo, apply to store, adding conclusions to store
--patch=foo     Read patches from foo, applying insertions and deletions to store
--filter=foo    Read rules from foo, apply to store, REPLACING store with conclusions
--query=foo     Read an N3QL query from foo, apply it to the store, and replace the store with its conclusions
--sparql=foo    Read a SPARQL query from foo, apply it to the store, and replace the store with its conclusions
--rules         Apply rules in store to store, adding conclusions to store
--think         As --rules but continue until no more rule matches (or forever!)
--engine=otter  Use otter (in your $PATH) instead of llyn for linking, etc.
--why           Replace the store with an explanation of its contents
--why=u         Proof tries to be shorter
--mode=flags    Set modus operandi for inference (see below)
--reify         Replace the statements in the store with statements describing them.
--dereify       Undo the effects of --reify
--flatten       Reify only nested subexpressions (not top level) so that no {} remain.
--unflatten     Undo the effects of --flatten
--think=foo     As --apply=foo but continue until no more rule matches (or forever!)
--purge         Remove from store any triple involving anything in class log:Chaff
--data          Remove all except plain RDF triples (formulae, forAll, etc.)
--strings       Dump :s to stdout ordered by :k wherever { :k log:outputString :s }
--crypto        Enable processing of crypto builtin functions. Requires python crypto.
--help          Print this message
--revision      Print CVS revision numbers of major modules
--chatty=50     Verbose debugging output of questionable use, range 0-99
--sparqlServer  Instead of outputting, start a SPARQL server on port 8000 of the store
finally:
--with          Pass any further arguments to the N3 store as os:argv values
*  mutually exclusive
** doesn't work for complex cases :-/
Examples:
cwm --rdf foo.rdf --n3 --pipe    Convert from rdf/xml to rdf/n3
cwm foo.n3 bar.n3 --think        Combine data and find all deductions
cwm foo.n3 --flat --n3=spart
Mode flags affect inference extending to the web:
r   Needed to enable any remote stuff.
a   When reading schema, also load rules pointed to by schema (requires r, s)
E   Errors loading schemas of definitive documents are ignored
m   Schemas and definitive documents loaded are merged into the meta knowledge
    (otherwise they are consulted independently)
s   Read the schema for any predicate in a query.
u   Generate unique ids using a run-specific
Closure flags are set to cause the working formula to be automatically
expanded to the closure under the operation of looking up:
s   the subject of a statement added
p   the predicate of a statement added
o   the object of a statement added
t   the object of an rdf:type statement added
i   any owl:imports documents
r   any doc:rules documents
E   errors are ignored --- this is independent of --mode=E
n   Normalize IRIs to URIs
e   Smush together any nodes which are = (owl:sameAs)
See http://www.w3.org/2000/10/swap/doc/cwm for more documentation.
Setting the environment variable CWM_RDFLIB to 1 makes Cwm use rdflib
to parse rdf/xml files. Note that this requires rdflib.
Flags for N3 output are as follows:-
a   Anonymous nodes should be output using the _: convention (p flag or not).
d   Don't use default namespace (empty prefix)
e   Escape literals --- use \u notation
i   Use identifiers from store - don't regen on output
l   List syntax suppression. Don't use (..)
n   No numeric syntax - use strings typed with ^^ syntax
p   Prefix suppression - don't use them, always URIs in <> instead of qnames.
q   Quiet - don't output comments about version and base URI used.
r   Relative URI suppression. Always use absolute URIs.
s   Subject must be explicit for every statement. Don't use ";" shorthand.
t   "this" and "()" special syntax should be suppressed.
u   Use \u for unicode escaping in URIs instead of utf-8 %XX
v   Use "this log:forAll" for @forAll, and "this log:forSome" for @forSome.
/   If namespace has no # in it, assume it ends at the last slash if outputting.
Flags for N3 input:
B   Turn any blank node into an existentially quantified, explicitly named node.
Flags to control RDF/XML output (after --rdf=) are as follows:
b - Don't use nodeIDs for Bnodes
c - Don't use elements as class names
d - Default namespace suppressed.
l - Don't use RDF collection syntax for lists
r - Relative URI suppression. Always use absolute URIs.
z - Allow relative URIs for namespaces
Flags to control RDF/XML INPUT (after --rdf=) follow:
S - Strict spec. Unknown parse type treated as Literal instead of error.
T - Take foreign XML as transparent and parse any RDF in it
    (default is to ignore unless rdf:RDF at top level)
L - If non-rdf attributes have no namespace prefix, assume in local <#> namespace
D - Assume default namespace declared as local document; i.e. assume xmlns=""
Note: The parser (sax2rdf) does not support reification, bagIds,
or parseType=Literal. It does support the rest of RDF inc. datatypes,
xml:lang, and nodeIds.
$
On 2007-10-01, at 12:54, Luciano, Joanne S. wrote:
>
> Can anyone suggest a non-Jena / non-Java alternative?
>
> And for RDF (without OWL) also?
>
> Thanks,
> Joanne
>
>> -----Original Message-----
>> From: public-owl-dev-request@w3.org
>> [mailto:public-owl-dev-request@w3.org] On Behalf Of Michael Schneider
>> Sent: Wednesday, September 26, 2007 1:37 PM
>> To: Emanuele D'Arrigo
>> Cc: Semantic Web Interest Group; public-owl-dev-request@w3.org
>> Subject: RE: Is the ontology structure stored seamlessly with its
>> data?
>>
>>
>> Hi, Emanuele!
>>
>> Emanuele D'Arrigo wrote at September 26, 2007:
>>
>>> Another thing that is not quite clear in my mind right now is this:
>>> are the sets of triplets describing the class and property hierarchies
>>> of an ontology normally stored seamlessly alongside the data that
>>> is classified and characterized by those classes and properties?
>>
>> With OWL, for which an RDF mapping exists, this is technically
>> possible without a problem. And when you, for instance, use JENA [1],
>> a well-known RDF framework for Java, you generally /work/ with
>> ontology-based knowledge bases in such a way (at least in principle).
>>
>> With JENA, you typically build a view to your knowledge base in the
>> following way:
>>
>> 1) Create a so called "Model", which is empty at the beginning
>>
>> 2) Read into this Model the RDF statements representing the
>> axioms of your
>> OWL ontology
>>
>> 3) Read into this Model the RDF statements of your knowledge base
>>
>> A "Model" in Jena represents an RDF graph, i.e. a set of RDF triples.
>> Now, as long as you use a pure "Model", this only gives you a view to
>> the combined set of RDF triples, which come from both your OWL
>> ontology and your knowledge base. But if you instead use an "OntModel"
>> (which stands for "Ontology Model"), you get an extended view to your
>> RDF graph: Suddenly, you have additional API functionality to access
>> all your OWL classes and properties, and the (explicit)
>> sub-relationships between them (and many other ontology-specific
>> features). The magic behind this is that the OntModel internally
>> separates out all those triple subsets within the combined RDF graph,
>> which are RDF mappings for OWL axioms.
>>
>> So this is the situation (or at least a possible and perfectly
>> working situation), when you /work/ with knowledge data. This does
>> not, however, mean that you should also /store/ ontological and
>> assertional data together in the same RDF graph. I think, in most
>> cases it will be a better strategy to have them stored separately.
>> Then, you can easily reuse the ontology for different knowledge
>> bases, and combine them /on the fly/, whenever you want to work with
>> them.
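Jena itself is Java, but the three-step workflow described above can be sketched language-neutrally. The following pure-Python sketch (all class and method names are invented for illustration; this is not Jena's API) shows an empty model being created, the ontology's statements read in, then the knowledge base, and the extra ontology-aware view that an "OntModel" adds on top of the same set of triples.

```python
class Model:
    """A bare RDF graph: just a set of (subject, predicate, object) triples."""
    def __init__(self):
        self.triples = set()

    def read(self, triples):
        """Add statements to the graph."""
        self.triples |= set(triples)

class OntModel(Model):
    """An ontology-aware view over the same graph: it picks out the
    triple subsets that encode OWL/RDFS axioms."""
    def classes(self):
        return {s for s, p, o in self.triples
                if p == "rdf:type" and o == "owl:Class"}

    def subclasses_of(self, cls):
        return {s for s, p, o in self.triples
                if p == "rdfs:subClassOf" and o == cls}

ontology = [
    ("ex:Dog", "rdf:type", "owl:Class"),
    ("ex:Animal", "rdf:type", "owl:Class"),
    ("ex:Dog", "rdfs:subClassOf", "ex:Animal"),
]
data = [("ex:fido", "rdf:type", "ex:Dog")]

m = OntModel()        # 1) create an empty model
m.read(ontology)      # 2) read in the RDF statements of the ontology
m.read(data)          # 3) read in the RDF statements of the knowledge base

print(sorted(m.classes()))           # ontology-specific API over mixed triples
print(m.subclasses_of("ex:Animal"))
```

The point of the sketch is the one Michael makes: the ontology and the data are just triples in one graph, and the "OntModel" layer is merely a richer view that knows which triples encode axioms.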
>>
>> Cheers,
>> Michael
>>
>> [1] http://jena.sourceforge.net/ (JENA project page)
>>
>> --
>> Dipl.-Inform. Michael Schneider
>> FZI Forschungszentrum Informatik Karlsruhe
>> Abtl. Information Process Engineering (IPE)
>> Tel : +49-721-9654-726
>> Fax : +49-721-9654-727
>> Email: Michael.Schneider@fzi.de
>> Web : http://www.fzi.de/ipe/eng/mitarbeiter.php?id=555
>>
>> FZI Forschungszentrum Informatik an der Universität Karlsruhe
>> Haid-und-Neu-Str. 10-14, D-76131 Karlsruhe
>> Tel.: +49-721-9654-0, Fax: +49-721-9654-959
>> Stiftung des bürgerlichen Rechts
>> Az: 14-0563.1 Regierungspräsidium Karlsruhe
>> Vorstand: Rüdiger Dillmann, Michael Flor, Jivka Ovtcharova, Rudi Studer
>> Vorsitzender des Kuratoriums: Ministerialdirigent Günther Leßnerkraus
>>
>>
Received on Monday, 1 October 2007 17:55:52 UTC