Re: Socio-technical/Qualitative metrics for LD Benchmarks

Thank you, Gio,

and Leo and Milton and Adam.

I am particularly interested in your (Gio's) reply, because I think of you
as a geek. I have exchanged snippets of correspondence with the others
before (Adam, Milton, Leo), but this is the first time you and I find
ground to engage in a conversation. I will try to keep my answer as
telegraphic as possible.



> Am I right in understanding that you advocate some form of measuring
> the actual viability and usefulness of LOD-based solutions or systems?
> E.g. in the way you would get by interviewing senior enterprise IT
> people etc.? "Why not do it with the normal RDBMS you have? What
> are the true/real costs/savings associated with it?"
>

That too, but not only that.

>
> If so, I think this is admirable, and extremely useful. However,
> it might be outside the scope of that EU project if they want
> to create a technical benchmark across the RDF triplestore vendors and
> graph database vendors.
>

What I maintain, and can demonstrate, is that no technical benchmark can be
credible without taking into account at least some socio-technical aspects.

Example:

A technical benchmark that isolates, say, one performance measure such as
load speed is pointless unless we can also verify that the outcome of the
query is actually 'accurate' (true).

So a technical parameter such as 'speed of resolving the query' is only
meaningful when related to 'accuracy of the outcome'; yet accuracy is not a
black/white thing.

This is how we ought to model a technical benchmark: by making sure the
technical parameters we measure are not pure hot air, costing the public
tons of good money.
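
To make that concrete, here is a minimal sketch (in Python, with made-up
names and data, not taken from any existing benchmark) of how a raw speed
figure could be discounted by a graded accuracy score:

# Hypothetical illustration: weight raw query throughput by graded
# result accuracy, so speed without correctness scores nothing.

def f1_accuracy(returned, gold):
    """Graded (not black/white) accuracy of a query result against a
    gold-standard answer set: the F1 of precision and recall."""
    returned, gold = set(returned), set(gold)
    if not returned or not gold:
        return 0.0
    true_positives = len(returned & gold)
    if true_positives == 0:
        return 0.0
    precision = true_positives / len(returned)
    recall = true_positives / len(gold)
    return 2 * precision * recall / (precision + recall)

def effective_throughput(queries_per_second, returned, gold):
    """Raw speed discounted by accuracy: a fast system spitting out
    errors scores no better than a slower, correct one."""
    return queries_per_second * f1_accuracy(returned, gold)

gold = {"x", "y", "z", "w"}
print(effective_throughput(200.0, {"x", "y", "q", "r"}, gold))  # 100.0
print(effective_throughput(100.0, gold, gold))                  # 100.0

On a scale like this, doubling the speed while halving the accuracy gains
nothing, which is exactly the point.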

I made this and related points as my contribution to the meeting, through
various conversations during the day, and everyone agreed that increasing
technical performance while spitting out a whole load of errors
(which are not counted by the benchmark) would be insane.

Everyone I have spoken with in the consortium agrees that the technical
parameters need to be wrapped into broader common-sense issues. In
particular, I had great conversations with people who showed support and
agreement, and who would be interested to see these views incorporated into
the project, since what I suggested is perfectly in scope.

I want to say that I enjoyed the day and was made perfectly welcome by
everyone

(except for an intimidatory email sent to me the next day by one consortium
member asking me 'not to contact us anymore', which I am frankly not sure
how to react to).

I feel sorry about the lack of credibility of EU Semantic Web research, and
I think that, since so many researchers benefit from its generosity, they
feel they have to shut up.


I am interested in your suggestions below.

>
> A middle ground could be to ask that the group benchmark not only
> graph solutions (e.g. RDF) but also relational and NoSQL systems that
> can answer comparable queries, given a minimum effort to be determined.
>
> At least 3 categories could emerge:
>
> * queries which all systems could answer (e.g. Mongo, even Solr)
> * queries which only graph and RDF DBs can answer
> * queries which only graph systems can answer (e.g. minimum path)
>
>
I need to think about this.
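
On the third category: a 'minimum path' question is the kind of thing a
graph engine answers natively, and that a single relational or document
query typically cannot express. As a rough illustration only (toy graph,
invented names):

# Toy illustration of a "minimum path" query: easy for a graph system,
# awkward to state as one relational or document-store query.

from collections import deque

def shortest_path(edges, start, goal):
    """Breadth-first search returning one shortest path, or None."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in edges.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None

edges = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": ["e"]}
print(shortest_path(edges, "a", "e"))  # ['a', 'b', 'd', 'e']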


>
> To wrap up:
>
> * if the project has a technical nature, it's unlikely you'll be
> successful if you speak about sociological benchmarking.
>

See above: a technical benchmark isolated from other factors is meaningless,
and therefore more waste of public money.



> .. BUT.. :) it's not our project, or at least we're not part of it, so
> it really, really, really boils down to that consortium's decisions.


That brings up another issue: how are consortium decisions made, and how
are they documented? Anyone who has worked with the EU knows that there is
some abuse going on in the system... and no way of proving this is taking
place.


I take it you guys are up for peer reviewing any work that may come of this,
right? ;-)


> good luck
>

I need it. Thank you.


PDM




> Gio
>
> On Tue, Nov 20, 2012 at 2:29 PM, Paola Di Maio <paola.dimaio@gmail.com>
> wrote:
> > d so many socio-technical dimensions crop up in the many presentations. It
> > would be important to develop a benchmark (or set of benchmarks) capable of
> > capturing and measuring them. I suggested that:
>

Received on Wednesday, 21 November 2012 18:56:26 UTC