RE: Research Illusion

Google Scholar provides citation counts, which, while still a fairly rough measure, do give some idea of the importance of a piece of work.

In particular, citation counts can be high for a good piece of research engineering accompanied by a single paper about it (Jena follows this model). This certainly helped with my US visa application (they look at citation counts).

Jeremy

> -----Original Message-----
> From: semantic-web-request@w3.org [mailto:semantic-web-request@w3.org]
> On Behalf Of Azamat
> Sent: Friday, May 08, 2009 10:20 AM
> To: [ontolog-forum] ; 'SW-forum'
> Cc: mjarrar@cs.ucy.ac.cy
> Subject: Research Illusion
> 
> By chance, I encountered Mustafa Jarrar's blog and site,
> http://mjarrar.blogspot.com/; http://www.jarrar.info/, working on ontology
> engineering, Linked Data, and Web 3.0, somewhere here on the island. I had
> never heard of him, but he is a true mind full of true thoughts.
> Here is a shockingly telling extract (for me at least, as I left the
> Academy a long time ago):
> [Communications of the ACM
> Volume 50, Number 11 (2007), Pages 19-21
> 
> Viewpoint: Stop the numbers game
> David Lorge Parnas
> 
> As a senior researcher, I am saddened to see funding agencies,
> department
> heads, deans, and promotion committees encouraging younger researchers
> to do
> shallow research. As a reader of what should be serious scientific
> journals,
> I am annoyed to see the computer science literature being polluted by
> more
> and more papers of less and less scientific value. As one who has often
> served as an editor or referee, I am offended by discussions that imply
> that
> the journal is there to serve the authors rather than the readers.
> Other
> readers of scientific journals should be similarly outraged and demand
> change.
> 
> The cause of all of these manifestations is the widespread policy of
> measuring researchers by the number of papers they publish, rather than
> by
> the correctness, importance, real novelty, or relevance of their
> contributions. The widespread practice of counting publications without
> reading and judging them is fundamentally flawed for a number of
> reasons:
> 
> * It encourages superficial research. Those who publish many hastily
> written, shallow (and often incorrect) papers will rank higher than those
> who invest years of careful work studying important problems; that is,
> counting measures quantity rather than quality or value;
> * It encourages overly large groups. Academics with large groups, who
> often spend little time with each student but put their name on all of
> their students' papers, will rank above those who work intensively with a
> few students;
> * It encourages repetition. Researchers who apply the "copy, paste,
> disguise" paradigm to publish the same ideas in many conferences and
> journals will score higher than those who write only when they have new
> ideas or results to report;
> * It encourages small, insignificant studies. Those who publish
> "empirical
> studies" based on brief observations of three or four students will
> rank
> higher than those who conduct long-term, carefully controlled
> experiments;
> and
> * It rewards publication of half-baked ideas. Researchers who describe
> languages and systems but do not actually build and use them will rank
> higher than those who implement and experiment.
> 
> Paper-count-based ranking schemes are often defended as "objective."
> They
> are also less time-consuming and less expensive than procedures that
> involve
> careful reading. Unfortunately, an objective measure of contribution is
> frequently contribution-independent....]
> 
> Another reason for building common ontology standards: to establish a
> safe conceptual filter against all sorts of research head games in
> critical knowledge fields and publicly funded research projects.
> 
> Azamat Abdoullaev
> http://www.eis.com.cy

Received on Friday, 8 May 2009 19:42:40 UTC