Re: Research Illusion

This issue is also at the heart of the debate between open access and traditional paid, peer-reviewed scientific journals, refereed by traditional, often conservative editors.

One of the issues the UN keeps pushing is open access to the body of scientific and technological knowledge and technical documents.

The internet is hailed as the instrument par excellence to foster this development.

Unfortunately, the trade-off is that quality tends to wane as the quantity of available literature grows.

The problem is not limited to journals; it also affects publications that document the proceedings of (technical) conferences.

There are no clear-cut, easy ways to gauge the quality of articles in printed or digital format.

The only viable way is to devise a process that gleans, from the refereeing of journal articles and the acceptance of conference papers, guidelines and criteria that can be construed as indicative of quality for that particular domain of knowledge.

This implies that the documents must be available in digital format. On the internet, the semantic web thus promises to be able to deliver this in the hopefully near future.

For printed literature we are relegated to the traditional citation counts and copyright counts in libraries (every time an article worth copying gets copied, copyright fees are paid).

The problem, for now, is that there is no adequate substitute for human peer review of the literature.

In terms of research funding, there should be criteria on the basis of which funding is granted, whether for scientific research or other projects.

This, too, involves some process of peer review.

The pressure to bow to the sheer numbers of those who want to get published should be resisted, and in these times of financial crisis, money available as grants should be spent conservatively, i.e., with a focus on maximum impact AND quality.

This is VERY evident in research and projects done by non-profits in the civil society sector, but it is not yet prevalent in the traditional academic and commercial research communities.

We second the idea of common standard ontologies for semantic web use.

Milton Ponson
GSM: +297 747 8280
Rainbow Warriors Core Foundation
PO Box 1154, Oranjestad
Aruba, Dutch Caribbean
Project Paradigm: A structured approach to bringing the tools for sustainable development to all stakeholders worldwide
NGO-Opensource: Creating ICT tools for NGOs worldwide for Project Paradigm
MetaPortal: providing online access to web sites and repositories of data and information for sustainable development
SemanticWebSoftware, part of NGO-Opensource to enable SW technologies in the Metaportal project

--- On Fri, 5/8/09, Azamat <> wrote:

From: Azamat <>
Subject: Research Illusion
To: "[ontolog-forum] " <>, "'SW-forum'" <>
Date: Friday, May 8, 2009, 5:19 PM

By chance, I encountered Mustafa Jarrar's blog site, covering ontology engineering, Linked Data, Web 3.0, somewhere here on the island. I had never heard of him, but it is a true mind full of true thoughts.
Here is a shockingly telling extract (for me at least, as I left the Academy a long time ago):
[Communications of the ACM
Volume 50, Number 11 (2007), Pages 19-21

Viewpoint: Stop the numbers game
David Lorge Parnas

As a senior researcher, I am saddened to see funding agencies, department heads, deans, and promotion committees encouraging younger researchers to do shallow research. As a reader of what should be serious scientific journals, I am annoyed to see the computer science literature being polluted by more and more papers of less and less scientific value. As one who has often served as an editor or referee, I am offended by discussions that imply that the journal is there to serve the authors rather than the readers. Other readers of scientific journals should be similarly outraged and demand change.

The cause of all of these manifestations is the widespread policy of measuring researchers by the number of papers they publish, rather than by the correctness, importance, real novelty, or relevance of their contributions. The widespread practice of counting publications without reading and judging them is fundamentally flawed for a number of reasons:

* It encourages superficial research. Those who publish many hastily written, shallow (and often incorrect) papers will rank higher than those who invest years of careful work studying important problems; that is, counting measures quantity rather than quality or value;
* It encourages overly large groups. Academics with large groups, who often spend little time with each student but put their name on all of their students' papers, will rank above those who work intensively with a few students;
* It encourages repetition. Researchers who apply the "copy, paste, disguise" paradigm to publish the same ideas in many conferences and journals will score higher than those who write only when they have new ideas or results to report;
* It encourages small, insignificant studies. Those who publish "empirical studies" based on brief observations of three or four students will rank higher than those who conduct long-term, carefully controlled experiments; and
* It rewards publication of half-baked ideas. Researchers who describe languages and systems but do not actually build and use them will rank higher than those who implement and experiment.

Paper-count-based ranking schemes are often defended as "objective." They are also less time-consuming and less expensive than procedures that involve careful reading. Unfortunately, an objective measure of contribution is frequently contribution-independent....]

Another reason for building Common Ontology Standards: to establish a safe conceptual filtering of all sorts of research head games in critical knowledge fields and publicly-funded research projects.

Azamat Abdoullaev


Received on Saturday, 9 May 2009 17:01:54 UTC