Re: "Microsoft Access" for RDF?

Hello Paul,

I wouldn’t go as far as saying that OWL is a complete failure, but I agree with most of these thoughts, especially:

> As a way of documenting types and properties it is tolerable.  If I write down something in production rules I can generally explain to an "average joe" what they mean.
+1

> you can infer the things you care about and not have to generate the large number of trivial or otherwise uninteresting conclusions you get from OWL.
What needs to be added here is that, in practice, OWL reasoning often yields unexpected and unwanted facts that can effectively break your database applications, especially if you are one of those average Joes.
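To make this concrete, here is a toy illustration in plain Python (not a real reasoner, and all names are made up): a single rdfs:domain axiom is enough to silently re-type a resource in a way an application may never expect.

```python
RDFS_DOMAIN = "rdfs:domain"
RDF_TYPE = "rdf:type"

triples = {
    # Schema: only persons have birth dates.
    ("ex:birthDate", RDFS_DOMAIN, "ex:Person"),
    # Data: someone sloppily attached a birth date to a company.
    ("ex:AcmeCorp", "ex:birthDate", "1999-01-01"),
}

def rdfs_domain_closure(graph):
    """One forward-chaining pass for the rdfs:domain rule (rdfs2):
    if p has domain C and (s, p, o) holds, infer (s, rdf:type, C)."""
    inferred = set(graph)
    domains = {s: o for (s, p, o) in graph if p == RDFS_DOMAIN}
    for (s, p, o) in graph:
        if p in domains:
            inferred.add((s, RDF_TYPE, domains[p]))
    return inferred

closure = rdfs_domain_closure(triples)
# The company is now "inferred" to be a person -- formally correct,
# but exactly the kind of unwanted fact that breaks applications.
print(("ex:AcmeCorp", RDF_TYPE, "ex:Person") in closure)  # True
```

With production rules you would simply not write a rule that produces this conclusion; with OWL/RDFS semantics, it comes for free whether you want it or not.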

> As a data integration language OWL points in an interesting direction
Again, +1. More than just interesting, it can be an extremely valuable asset. The other issues you mention are indeed not addressed by OWL itself, but they could be solved in combination with other tools.
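For instance, the canonicalization and trash-date problems you mention are easy to handle in a pre-processing step outside OWL. A minimal sketch (hypothetical helper names, stdlib only; the cutoff dates are arbitrary placeholders):

```python
from datetime import date

def canonical_email(value):
    """Normalize the two common representations of a mail address:
    the URI form <mailto:joe@example.com> and the plain literal."""
    v = value.strip().strip("<>").strip()
    if v.lower().startswith("mailto:"):
        v = v[len("mailto:"):]
    return v.lower()

def plausible_date(value, lo=date(1850, 1, 1), hi=date(2100, 1, 1)):
    """Reject typical 'trash dates' (placeholder epochs, far-future
    sentinels) instead of letting them pollute the target database."""
    try:
        d = date.fromisoformat(value)
    except ValueError:
        return None
    return d if lo <= d <= hi else None

print(canonical_email("<mailto:Joe@Example.com>"))  # joe@example.com
print(plausible_date("9999-12-31"))                 # None
```

Rules like these also do the "cooking down" you describe: facts that fail validation simply never enter the smallest correct database.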

— 

Going back to your original problem (first e-mail and the Einstein example), you could in principle also use our Information Workbench [1]. It can connect to any SPARQL endpoint (although a Sesame repository is the preferred method). The built-in interface for each URI looks like this one [2]. Note that the interface is editable, including adding/removing/changing triples, though only for authenticated users (so this doesn’t show in the example). In addition, you could use templates to add customized views/forms for certain types.
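For completeness, any such SPARQL endpoint (including the underlying Sesame repository) can also be queried over plain HTTP per the SPARQL Protocol. A minimal stdlib sketch; the endpoint URL is a placeholder you would replace with your own repository address:

```python
from urllib.parse import urlencode
from urllib.request import Request, urlopen

# Placeholder address -- substitute your own Sesame repository
# or any other SPARQL endpoint.
ENDPOINT = "http://localhost:8080/openrdf-sesame/repositories/test"

query = """
SELECT ?p ?o WHERE { <http://example.org/AlbertEinstein> ?p ?o } LIMIT 10
"""

req = Request(
    ENDPOINT + "?" + urlencode({"query": query}),
    headers={"Accept": "application/sparql-results+json"},
)
# resp = urlopen(req)          # uncomment to actually run the query
# print(resp.read().decode())
print(req.full_url.split("?")[0])
```

This is how generic front-ends (ours included) talk to endpoints under the hood, so anything you build on top stays portable across triple stores.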

Christoph

[1] http://www.fluidops.com/en/company/training/open_source
[2] http://conference-explorer.fluidops.net/resource/eswc:2014?view=table
Christoph Pinkel
 
Research & Development Engineer

christoph.pinkel@fluidops.com
T +49 6227 3580 87 – 50
 
fluid Operations AG | Altrottstrasse 31 | 69190 Walldorf | Germany | www.fluidops.com
 
fluidOps – Semantifying Business
 
Executive Board Dr. Andreas Eberhart, Dr. Stefan Kraus, Dr. Ulrich Walther | Supervisory Board Wolf Herzberger *, Prof. Dr. Andreas Reuter, Udo Tschira | Register Court Mannheim, HRB 709796 | VAT-No. DE258759786 
* Chairman
 
This e-mail may contain confidential and/or privileged information. If you are not the intended recipient (or have received this e-mail in error) please notify the sender immediately and destroy this e-mail. Any unauthorized copying, disclosure or distribution of the material in this e-mail is strictly forbidden.

> On 20 Feb 2015, at 16:09, Paul Houle <ontology2@gmail.com> wrote:
> 
> So some thoughts here.
> 
> OWL,  so far as inference is concerned,  is a failure and it is time to move on.  It is like RDF/XML.
> 
> As a way of documenting types and properties it is tolerable.  If I write down something in production rules I can generally explain to an "average joe" what they mean.  If I try to use OWL it is easy for a few things,  hard for a few things,  then there are a few things Kendall Clark can do,  and then there is a lot you just can't do.
> 
> On paper OWL has good scaling properties but in practice production rules win because you can infer the things you care about and not have to generate the large number of trivial or otherwise uninteresting conclusions you get from OWL.
> 
> As a data integration language OWL points in an interesting direction but it is insufficient in a number of ways.  For instance,  it can't convert data types (canonicalize <mailto:joe@example.com> and "joe@example.com"),  deal with trash dates (have you ever seen an enterprise system that didn't have trash dates?) or convert units.  It also can't reject facts that don't matter and so far as both time&space and accuracy you do much easier if you can cook things down to the smallest correct database.
> 
> ----
> 
> The other one is that as Kingsley points out,  the ordered collections do need some real work to square the circle between the abstract graph representation and things that are actually practical.
> 
> I am building an app right now where I call an API and get back chunks of JSON which I cache,  and the primary scenario is that I look them up by primary key and get back something with a 1:1 correspondence to what I got.  Being able to do other kind of queries and such is sugar on top,  but being able to reconstruct an original record,  ordered collections and all,  is an absolute requirement.
> 
> So far my infovore framework based on Hadoop has avoided collections,  containers and all that because these are not used in DBpedia and Freebase,  at least not in the A-Box.  The simple representation that each triple is a record does not work so well in this case because if I just turn blank nodes into UUIDs and spray them across the cluster,  the act of reconstituting a container would require an unbounded number of passes,  which is no fun at all with Hadoop.  (At first I thought the # of passes was the same as the length of the largest collection but now that I think about it I think I can do better than that)  I don't feel so bad about most recursive structures because I don't think they will get that deep but I think LISP-Lists are evil at least when it comes to external memory and modern memory hierarchies.
> 
> 

Received on Saturday, 21 February 2015 19:59:58 UTC