W3C home > Mailing lists > Public > semantic-web@w3.org > October 2017

Re: [DBpedia-discussion] Semantic Web Browser

From: Olivier Rossel <olivier.rossel@gmail.com>
Date: Wed, 11 Oct 2017 23:59:54 +0200
Message-ID: <CAM0wMfS86pqZ35Zqi_WFLxZD5x6th6kf2ScvDSCL_29jeywmjg@mail.gmail.com>
To: Adrian Gschwend <ml-ktk@netlabs.org>
Cc: Semantic Web <semantic-web@w3.org>
Three things come to mind from your comments:

- you have reached a maturity level with semantic technologies. I am
absolutely thrilled to hear that! (And I imagine you must have put in a
LOT of work to reach that level.) Honestly, I am impressed.

- RDF can be seen as a typed data transfer technology.
 JavaScript is becoming more and more typed (yes, TypeScript, I'm looking at you :).
 I see a HUUUUUGE opportunity for N3 and RDFS to become the JSON of TypeScript.
 I hope some people will push in that direction and succeed. (You just
need to manage annotations on fields to serialize/deserialize a graph
of objects as N3, just like an OGM does with Neo4J.)

- Spreadsheets are a perfect example of data silos. Bridging them and
remote data, especially inside companies, sounds like a low-cost,
high-impact option. As far as I know, Microsoft is pushing OData for
that; I have never had time to investigate. But basically, a kind of
Datao visual query UI available from the "Get and Transform" ribbon
of Excel 2016 (https://support.office.com/en-us/article/Get-Transform-in-Excel-2016-881c63c6-37c5-4ca2-b616-59e18d75b4de?ui=en-US&rs=en-US&ad=US)
would be an instant invitation for anyone to reuse data from others in
the company, bootstrapping a Linked Data initiative.
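As a rough sketch of the OGM idea mentioned above: serialize a plain
JavaScript object to Turtle/N3 via a field-to-predicate mapping. A real
mapper would derive the mapping from annotations (e.g. TypeScript
decorators); everything named here is illustrative, not an existing library.

```javascript
// Minimal OGM-style serializer sketch: map object fields to RDF
// predicates and emit a Turtle snippet. Illustrative only.
function toTurtle(subjectIri, obj, mapping) {
  // mapping: { jsField: 'prefix:predicate' }
  const statements = Object.entries(mapping)
    .filter(([field]) => obj[field] !== undefined)
    .map(([field, predicate]) => `  ${predicate} ${JSON.stringify(obj[field])}`);
  return [
    '@prefix foaf: <http://xmlns.com/foaf/0.1/> .',
    '',
    `<${subjectIri}>`,
    statements.join(' ;\n') + ' .'
  ].join('\n');
}

const person = { name: 'Alice', age: 42 };
const turtle = toTurtle('http://example.org/alice', person, {
  name: 'foaf:name',
  age: 'foaf:age'
});
console.log(turtle);
// → @prefix foaf: <http://xmlns.com/foaf/0.1/> .
//
//   <http://example.org/alice>
//     foaf:name "Alice" ;
//     foaf:age 42 .
```

The deserialization direction (N3 back to annotated objects) is the harder
half, which is exactly where field annotations would come in.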


On Wed, Oct 11, 2017 at 5:07 PM, Adrian Gschwend <ml-ktk@netlabs.org> wrote:
> On 11.10.17 10:09, Olivier Rossel wrote:
>
> Hi Olivier,
>
>> I have felt absolutely no support from the Semantic Web community.
>
> I think I know where you have been, but I would like to give some
> feedback as someone who makes a living from the Semantic Web.
>
>> Basically for the following reasons:
>>  - very few people in the Semantic Web community actually manage
>> datasets in operational conditions (so there is no linked data to
>> browse, cf http://sparqles.ai.wu.ac.at/availability)
>
> We do, as do our customers. Just because you don't see them does not
> necessarily mean "very few people" are using it. After all, I don't see
> any Neo4J datasets online either, but I don't conclude from that fact
> alone that no one is using it.
>
>>  - very few people in the Semantic Web community actually consume
>> semantic data in their processes (so noone can evaluate which
>> libraries/tools are lacking for a proper consumption of RDF data)
>
> I do think there is a problem with dumps that someone once converted
> to RDF, interlinked, and then left alone. Unfortunately quite a few
> projects like that exist, many of them as outcomes of former FP7 EU
> research projects. While I was skeptical myself for a while, I am now
> convinced that we are seeing real traction in governments and
> organizations moving to production-ready linked data publication.
> Unfortunately we all suck a bit at advertising them accordingly.
>
> Regarding libraries, I have been preaching for a while that Linked
> Data needs better support in JavaScript stacks, and for that we created
> the RDFJS group some time ago. The spec there is now basically done and
> we (Zazuko) released a first implementation of it:
>
> https://www.bergnet.org/2017/08/rdf-ext-v1-release/
>
> You can find all the code on GitHub. We do a hell of a lot of other
> useful stuff in the JavaScript world; just browse our GitHub repos. And
> yes, documentation is again the thing we need to improve.
>
> http://github.com/zazuko/
>
>> But of course our point is to inspire people outside the Semantic Web community.
>> And such people/companies have immediate requirements to fulfill.
>> So they go the full custom HTML5+JSON way. With pretty amazing results.
> We go the HTML5 + JSON-LD way. Also with amazing results :) The graph
> might not be that important in the user interface (still helpful for
> many things, though), but you definitely want a graph-like structure in
> the back-end if you are serious about your data.
>
>> They know RDF very well, but see no market for that.
> Not sure if you heard about the famous quote about Unix: "Those who do
> not understand Unix are condemned to reinvent it, poorly." I believe
> this is true for RDF as well.
>
> I am using Opendatasoft because one of our customers uses it. I pull
> data from it that is barely usable in its raw form and transform it
> into proper linked data. That is quite a task, but if you want to get
> data in a usable form, it's the only way to go. I see platforms like
> CKAN and Opendatasoft publishing large data sets, but people still have
> to spend days bringing the data into a form that allows them to use it
> in their applications. RDF is the form they actually need, and sooner
> or later they will come to this conclusion.
>
> Opendatasoft would be far more useful if it were built on RDF. That is
> a bit of a problem we have: we forget to build the tools "normal" users
> can use, so in that regard I'm with you.
>
>> We must understand why.
>
> RDF comes with a certain complexity. You can either ignore it and
> build yet another proprietary silo, or embrace it and start to build
> tooling that facilitates the creation and consumption of RDF data. We
> at Zazuko chose the second option, and I absolutely believe it is the
> future-oriented one. It pays our bills, and it does and will solve the
> real problems our customers are trying to crack.
>
> It's not that these customers did not try other stacks first; some
> spent two years addressing the problem with non-RDF approaches, only to
> figure out that it would not lead anywhere. But you will not read about
> those use cases, as they happen behind firewalls.
>
>> From my own point of view, the success of the Semantic Web could come
>> with tooling for programmers.
>> If we manage to provide a few things:
>>  - a spec & robust implementations for rights management at named graph level
>
> Get the right triplestore and you are done. Not sure what you need a
> spec for here.
>
>>  - a spec & robust implementations for SPARQL transactions management
>> at HTTP level
>
> Again, get the right triplestore and you are done.
>
>>  - a robust OGM (Object-Graph Mapper) in most major languages
>
> What is an Object-Graph mapper?
>
>>  - a robust REST library to auto-serialize/deserialize RDF (for
>> example, an extension to Jersey)
>
> We do this with Hydra-View, soon to be documented and renamed to Hydra-Box:
>
> https://github.com/zazuko/hydra-view
>
> https://github.com/zazuko/wikidata.zazuko.com
>
> This is work in progress for a customer, but what it basically allows
> is hiding SPARQL queries behind a hypermedia REST API. All you do is
> configure the middleware in JSON-LD and run it. With JSON-LD framing
> you can create JSON that every web developer should be able to
> understand.
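For illustration, a JSON-LD frame is itself just a JSON document that
describes the tree shape to pull out of the flat graph. The frame below is
a made-up example using schema.org terms; it is not taken from the
Hydra-View configuration:

```json
{
  "@context": {
    "name": "http://schema.org/name",
    "member": { "@id": "http://schema.org/member", "@container": "@set" }
  },
  "@type": "http://schema.org/Organization"
}
```

Framing a graph against this (e.g. with jsonld.js's `jsonld.frame`) returns
each organization as a nested JSON object with a `name` and a `member`
array, which is the shape front-end developers expect.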
>
> BTW, we also build Web frontends that are 100% SPARQL-driven; that is
> not a problem at all, you just need the right tooling. See
> http://data.alod.ch/search/ as an example (soon to be extended to 20
> million public archival records).
>
> If you simply want to get proper JS structures from SPARQL you might for
> example check out https://github.com/zazuko/d3-sparql
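As a sketch of what such tooling does under the hood: the SPARQL 1.1 JSON
results format flattens naturally into the plain arrays of objects that D3
expects. The result set below is hard-coded and made up; in practice you
would fetch it from an endpoint with `Accept: application/sparql-results+json`.

```javascript
// Flatten SPARQL 1.1 JSON query results into plain JS objects,
// roughly what d3-sparql produces. Illustrative sketch only.
function sparqlToObjects(results) {
  // results follows the SPARQL 1.1 Query Results JSON Format:
  // { head: { vars: [...] }, results: { bindings: [...] } }
  return results.results.bindings.map(binding => {
    const row = {};
    for (const v of results.head.vars) {
      row[v] = binding[v] ? binding[v].value : null;
    }
    return row;
  });
}

// Hard-coded sample result set with made-up URIs:
const sample = {
  head: { vars: ['station', 'name'] },
  results: {
    bindings: [{
      station: { type: 'uri', value: 'http://example.org/station/1' },
      name: { type: 'literal', value: 'Example Station' }
    }]
  }
};

console.log(sparqlToObjects(sample));
// → [{ station: 'http://example.org/station/1', name: 'Example Station' }]
```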
>
> Again, it's all our fault that we did not build these tools earlier.
> And yes, that would have been more useful than a "linked data browser",
> whatever that means.
>
> For providing data we did something Pubby-like, see for example:
>
> https://github.com/zazuko/trifid
>
> This is the back-end of all apps we build. Public samples:
>
> * Default view: http://lod.opentransportdata.swiss/didok/8500011
> * Customized for one gov entity in Switzerland:
> https://ld.geo.admin.ch/boundaries/municipality/296
>
>>  - proper marketing of the N3.js library on the client (honestly,
>> how many people even inside our community know that fabulous lib?)
>
> Can't say much here, as we have been using it in rdf-ext for a long
> time. You also want to have a look at RDF-Ext and things like simpleRDF.
>
>> Basically, we need a stack.
>> Why not create RDFonRails, by the way :)
>
> The future is JavaScript, like it or not. That is where it will or
> will not happen with RDF. (And stores that scale, but that is taken
> care of.)
>
>> (btw, Neo4J basically provides 90% of all that, and is pretty
>> successful, so maybe we should just jump on the bandwagon)
>
> Neo4J somehow managed to convince people that they are a "graph
> database". It's a mediocre product from a company which willingly
> spreads FUD about RDF, because they know it's the one thing that can
> become a problem for them. But I have to give them credit for
> marketing; boy, are they good at that.
>
>> After that, we can again concentrate on data (especially data inside companies).
>> Honestly, no one outside the community understands (or cares about) OWL.
>> RDFS + owl:domain/owl:range is enough for an awful LOT of usages.
>> (Once again, Neo4J provides something quite like that, and it is LOVED
>> by IT developers.)
>
> Wrong (about OWL). There are people using it, every day. Just because
> you do not see it does not mean it does not happen. I saw developers
> with tears of joy once they enabled reasoning in the triplestore. And
> why would I use something proprietary like Neo4J when I can get various
> products in the RDF domain that implement a standard?
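To make the "RDFS + owl:domain/owl:range" point above concrete, here is a
toy sketch of the kind of lightweight inference being discussed: given
rdfs:domain statements in a schema, infer the type of a subject from the
predicates used on it. The vocabulary and triple representation are made up
for illustration; real reasoners run inside the triplestore.

```javascript
// Toy RDFS domain inference: triples are [subject, predicate, object]
// arrays of prefixed names. Illustrative sketch only.
function inferTypes(dataTriples, schemaTriples) {
  // Collect predicate -> class pairs from rdfs:domain statements.
  const domains = new Map(
    schemaTriples
      .filter(([, p]) => p === 'rdfs:domain')
      .map(([pred, , cls]) => [pred, cls])
  );
  // For every data triple whose predicate has a declared domain,
  // infer an rdf:type statement for its subject.
  const inferred = [];
  for (const [s, p] of dataTriples) {
    if (domains.has(p)) inferred.push([s, 'rdf:type', domains.get(p)]);
  }
  return inferred;
}

const schema = [['ex:worksFor', 'rdfs:domain', 'ex:Person']];
const data = [['ex:alice', 'ex:worksFor', 'ex:acme']];
console.log(inferTypes(data, schema));
// → [['ex:alice', 'rdf:type', 'ex:Person']]
```

(Note that in RDFS, domain and range entail new types rather than acting as
constraints, which is exactly why a few lines like these already add value.)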
>
>> What is important, and a game changer, in the outside world:
>>  - typing data, and multi-typing it (VERYYYYY powerful)
>
> I don't think I get what you mean here.
>
>>  - merging graphs coming from different sources that describe the
>> same resources, for a more capable graph
> That's pretty much what we do with RDF all day.
>
>> What is extremely hard in the outside world:
>>  - sharing URIs.
>>  - sharing data, in general
>>
>> All these points are addressed poorly by the community. Basically
>> because we do not do it massively ourselves.
>
> I have no idea where you see a problem here. RDF is the only standard to
> solve these exact problems, forget Neo4J or anything alike for that. And
> we do it, every day.
>
>> But the more important advice I can give after some time spent outside
>> the Semantic Web community:
>> do not build a browser (you would rebuild datao.net/search.datao.net;
>> believe me, no one cares), build a fucking awesome add-on for
>
> On that I'm with you. BTW, I do not necessarily consider myself part
> of the academic semantic web community. While I am grateful for a lot
> of the support I got from some of the people in there, others are not
> very welcoming to outsiders like me.
>
>> Microsoft Excel.
>>
>> *That* would definitely change the way people deal with data.
>
> I hear the Excel one a lot, but I'm convinced that most of the time it
> is unreflective daydreaming. What exactly should such a plugin provide,
> in your opinion?
>
>
> regards
>
> Adrian
Received on Wednesday, 11 October 2017 22:00:39 UTC