Re: [ontolog-forum] Research Illusion

You are spot on with your three-point list. Many fields of human activity, and in particular any activity aimed precisely at changing social attitudes and behaviors across a broad spectrum of social sectors, will face this resistance to change and to the acceptance of new paradigms.

Technology assessment, appropriate technology transfer, and corporate management strategies for change processes are all part of a growing, diverse field of increasingly interconnected and overlapping disciplines dealing, at bottom, with what makes humans tick and how to tinker with our internal mental clocks to change their ticking or chiming behaviors.

Even if, after years of extensive research, we know which variables or parameters significantly determine individual or group behavior, it remains (mathematically) very hard to predict whether individuals or groups will accept new paradigms or technologies, or adapt to inevitable change, precisely because of free will.

The UN and the European Union embrace and promote open-access information, open-source software, and open access to the body of scientific knowledge, yet in practice the roads to these noble objectives are fraught with entrenched resistance from many sectors, and your list of three points applies!

In my humble opinion, much of this resistance stems from a reluctance to be truly open-minded, which implies keeping all options open and letting free will and the generally accepted rules of democratic society, including majority rule, prevail.

We can make a case for a new paradigm, create standards for new technologies and then hope for the desired outcome.

What has changed because of the internet, though, is the way we act and react collectively; a company like Google, for example, spends hundreds of millions tracking just such collective behaviors.

I remember being asked in a previous thread about web business models applicable to the uptake and acceptance of the linked data cloud, its applications, and the semantic web.

There aren't any that apply in particular. A useful list can be found at:
http://digitalenterprise.org/models/models.html.

If we limit ourselves to the intersection of linked data/the semantic web and open access to the body of scientific knowledge in online repositories or digitally available publications, the case I made earlier for a quality-control standard system based on open access and open source applies.

When we expand to encompass the whole of the internet, I doubt whether even the company with the most resources and/or hype can pull it off.

What I suggest is that we focus first on the small markets of information end-users who are interested in open access to the body of scientific knowledge and technology. For now, that is more than enough to handle.

Once the hundreds of scientific disciplines are all putting their raw and pre-formatted data into the cloud and into repositories, using generalized schemes for ontologies, software tools, etc., we can hope to expand to new markets of end-users.
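As a very rough illustration of what I mean by "generalized schemes for ontologies", here is a minimal sketch (my own, not any existing repository's code) of publishing a dataset description as linked data with the Python rdflib library, reusing the widely shared Dublin Core and DCAT vocabularies; the dataset URI, titles and dates are invented for the example:

# Minimal sketch: describe a research dataset with shared vocabularies
# (Dublin Core terms and DCAT) so any linked-data client can consume it.
# All URIs and metadata values below are hypothetical, for illustration only.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF, XSD

DCAT = Namespace("http://www.w3.org/ns/dcat#")

g = Graph()
g.bind("dcterms", DCTERMS)
g.bind("dcat", DCAT)

dataset = URIRef("http://example.org/dataset/coral-reef-survey-2009")  # hypothetical
g.add((dataset, RDF.type, DCAT.Dataset))
g.add((dataset, DCTERMS.title, Literal("Coral Reef Survey 2009", lang="en")))
g.add((dataset, DCTERMS.creator, Literal("Example Research Institute")))
g.add((dataset, DCTERMS.issued, Literal("2009-05-10", datatype=XSD.date)))
g.add((dataset, DCTERMS.license, URIRef("http://creativecommons.org/licenses/by/3.0/")))

# Serialize as Turtle so the description can sit in an open repository
print(g.serialize(format="turtle"))

The point of the sketch is only that the vocabulary (dcterms, dcat) is shared across disciplines, while the data values stay under the control of the publishing discipline or repository.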

This is also the path my own organization is taking in our bold endeavor of ICT empowerment of all stakeholders in sustainable development.

Even though I am open-minded, I know that for now this is the only area where a set of reliable models and a system of metrics are available to gauge progress and measure success.

Milton Ponson

--- On Sun, 5/10/09, John F. Sowa <sowa@bestweb.net> wrote:

From: John F. Sowa <sowa@bestweb.net>
Subject: Re: [ontolog-forum] Research Illusion
To: "[ontolog-forum]" <ontolog-forum@ontolog.cim3.net>
Cc: "'SW-forum'" <semantic-web@w3.org>, "Mustafa Jarrar" <mjarrar@cs.ucy.ac.cy>, jeremy@topquadrant.com, "Sören Auer" <auer@informatik.uni-leipzig.de>, "Pieter De Leenheer" <pdeleenh@vub.ac.be>
Date: Sunday, May 10, 2009, 3:49 PM

Azamat,

There is a fundamental problem about evaluating new ideas of any
kind:  People always interpret new information in terms of their
previously established mental patterns and structures.

That point has several implications:

 1. Anything that fits previously established patterns will be
    quickly perceived, interpreted, accepted, and added to the
    old patterns.

 2. Anything that doesn't fit the old patterns will be "anomalous".
    It won't fit, it will create "cognitive dissonance", and it
    will be ignored or rejected.

 3. Even worse than outright rejection is the misinterpretation
    caused by forcing the new information into some older
    pattern that is inappropriate and misleading.

This is true of all kinds of learning from infancy to the most
sophisticated scientific research.  One of my colleagues at IBM
submitted a paper to a conference, and one of the reviewers
rejected it with the comment "I never saw anybody do anything
like that before."  Apparently, they wanted new research, but
only if it fit the old paradigms.

Eventually, the author managed to get the paper accepted by
different reviewers, and the paper became a minor classic of
its kind.  This is just one of many examples of "reviewer roulette",
which has plagued every branch of science and engineering.  For
the humanities, the problem is even worse because the criteria
for testing ideas by experiment are much harder to apply.

The same kinds of prejudices plague entire fields, not just
individual reviewers.  During the 1970s and '80s, another colleague
at IBM, Fred Jelinek, was the manager of a group that used statistics
to analyze natural languages.  In those days, the amount of data
they had to process was so large that they swamped a large IBM
mainframe.  So they had to run their programs at 3 o'clock in
the morning, when they could get enough computing power.

I remember that one of the researchers who worked for Fred had
developed a parser that used statistics to guide the choice of
which option to follow.  She submitted the paper to IJCAI in 1981,
and it was rejected with the statement "Statistics is not AI."

By the 1990s, personal computers were as powerful as the mainframes
of the early 1980s.  So the same kinds of techniques could be run
on PCs, and Fred Jelinek became a guru instead of a crank.  Now,
statistics is the so-called "mainstream", and papers are often
rejected if they don't use statistics.

Some comments on earlier comments:

> SA: "I have the vision that research communities' crowd intelligence could be employed in the Web 2.0 style for deciding about research funding".
> 
> MB: "...we see people can vote resources...Allowing people to add ontology-based annotations is just similar and would be another step forward."
> JC: "Google scholar provides citation counts, which while still a fairly rough measure, does include an idea of the importance of any piece of work."
> 
> PDeL: "I agree with the value of the wisdom of the crowd effect in many cases, however it should be controlled somehow to prevent the emergence of "foolishness of the crowd".
> 
> MP: "We second the idea of common standard ontologies for the semantic web
> use."

All of those techniques can be helpful, but none of them are magic.
They still won't overcome the fads of the "mainstream", and they still
can't distinguish a truly significant innovation from the latest fad
or somebody's pet idea "that just ain't so".

> AA: I incline to think that the "crowd intelligence" or "foolishness of the crowd" may explain the nature of the "phenomenon", and a canonic world model encoded as a machine-understandable common ontology standard of meanings may allow to head off it at all.

Perhaps.  But my greatest fear is that the choice of standard is more
likely to be determined by the latest fad or by the organization that
has the most hype and money to throw at it.  To avoid embarrassing
the guilty, I won't mention specific examples.  But for anybody who
advocates a common standard ontology, I would say

    Be careful what you wish for.

If any readers think that they have an ideal ontology in mind, I'd
like to ask one question:  Do you believe that you have sufficient
hype and money to make your preference become the new mainstream?

John Sowa
