Re: Socio-technical/Qualitative metrics for LD Benchmarks

Hey Gio


> Here my geek background helps; I am positive you're off.


He he, attempting a meaningful exchange with a geek has never been without
risks ;-)

But these exchanges can perhaps help me better understand the point that I
need to clarify.



> DB results (SQL, SPARQL, etc.) are always accurate: "find me all
> entities that have this or that property". The closest you get is when
> you have some OPTIONAL clause ("if possible, this or that"), and you
> might be able to use that to rank.
>

Probably a bit more complicated than that.

I hope you and I can have a good discussion about this, and about the
points you make below, when the opportunity arises. They deserve more
in-depth discussion; maybe I will write a paper (I've got a long 'to do'
list).
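
To ground that a little, here is a minimal sketch (plain Python, with
made-up bindings rather than output from any real endpoint) of how one
might use OPTIONAL matches to rank otherwise exact results:

    # Hypothetical SPARQL SELECT results: each row is a dict of variable
    # bindings; variables from OPTIONAL clauses are absent when unmatched.
    rows = [
        {"entity": "ex:alice", "label": "Alice", "homepage": "http://example.org/a"},
        {"entity": "ex:bob", "label": "Bob"},
        {"entity": "ex:carol"},
    ]

    # Variables assumed to come from OPTIONAL clauses in the query.
    optional_vars = ["label", "homepage"]

    def optional_score(row):
        # Count how many of the optional variables are bound in this row.
        return sum(1 for v in optional_vars if v in row)

    # Rank rows by how many optional patterns matched, best first.
    for row in sorted(rows, key=optional_score, reverse=True):
        print(row["entity"], optional_score(row))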

>
>
> Beware of getting yourself into hot air production now. If you're
> saying we should test the ranking quality of semantic information
> retrieval systems I am with you, but measuring ranking can be done in
> technical terms with well-known methods.
>
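
True, and just so we are talking about the same methods, here is a tiny
sketch of one of them, NDCG, in plain Python with toy relevance numbers
of my own (not from any project data):

    import math

    def dcg(relevances):
        # Discounted cumulative gain: graded relevance discounted by
        # the log of the rank position.
        return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

    def ndcg(relevances):
        # Normalise by the DCG of the ideal (best possible) ordering.
        ideal = dcg(sorted(relevances, reverse=True))
        return dcg(relevances) / ideal if ideal > 0 else 0.0

    # Toy relevance judgements for one query, in the order the system
    # returned its results (3 = highly relevant, 0 = not relevant).
    print(ndcg([3, 2, 0, 1]))  # about 0.99: close to the ideal ordering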

The point that I am trying to make is that there are correlations among
diverse parameters in an open system that we may not yet be aware of
(scope is affected by where the conceptual boundary of a problem space is
placed). I practice complexity science and may be able to share my
references, lectures and research notes.


>
>
> I am not sure about the views at this point. I thought you wanted to
> benchmark the usefulness/ROI or similar of using linked data
> technologies.


Yes, yes, that too! One thing is related to another, remember? :-)


> but if you're saying "results are correct" I don't get it.
>

I agree I need to make a better case; I'll start working on an (nth)
paper to present to the consortium to explain what I mean.


>
> > I feel sorry about the lack of credibility of EU semantic web
> > research, and I think since so many researchers benefit from its
> > generosity, they have to shut up.
>
> In general I see where you're coming from and I think you're
> partially right.


"Partially right" is good, thanks; I'll take it.

>
> There should be a peer review mechanism that downgrades the ability of
> those who executed projects that really led to nothing to get more
> funding. You'll get an entirely different attention to "what the hell
> we'll be doing, for real", as opposed to just passing immediate
> short-term reviews, which all want to see passing anyway.
>

Agreed.

> It's not meaningless. You might say "it risks missing out on important
> factors, without which one could miss the core point, which is adoption
> and societal usefulness", but it is still meaningful in technical terms.
>

Accepted.


>
> In general, if you argue your point properly and make it public,
> e.g. on a blog or whatever, and your point makes sense, I am sure that
> the consortium will have to discuss this with their project officer
> eventually.
>

Yes, but I am really busy and only have time for a limited number of
crusades. I think discussing it on the list is as public as it gets for
the moment. I have also emailed the relevant project members so that they
are aware the discussion is going on and can consider it (or continue to
ignore it) accordingly.


>
> > That brings up another issue: how are consortium decisions made...
> > and how are they documented? Anyone who has worked with the EU knows
> > that there is some abuse going on in the system... and no way of
> > proving this is taking place.
>
> People play by the rules that are given. The rules should be changed
> if we want to spend public money better and obtain real benefits.
> Nobody getting this money in large quantities through well-oiled
> mechanisms will want to change the rules, really. If you speak with
> very smart people at project officer level, they silently nod when you
> say things like "the king is naked", but it's clear they really can't
> do much themselves to change the rules. And in general it's such a
> large machine anyway.
>

Yes and no. In *transformative* circles, we try to shape the systems that
are designed to serve the community (publicly funded research should not
serve only a few industry leaders), but we must not be afraid to say so
when it is obvious that bias is being injected into a publicly funded
project not for academic/research reasons but for purely
political/administrative ones. I think we play a role in how systems
evolve.

Big topics; thanks for discussing them with me.



Cheers

Paola DM

>
>
> Gio
>

Received on Friday, 23 November 2012 12:07:33 UTC