
Re: web to semantic web : an automated approach

From: रविंदर ठाकुर (ravinder thakur) <ravinderthakur@gmail.com>
Date: Sun, 26 Oct 2008 22:05:51 +0530
Message-ID: <617073f10810260935k3634d03dpe7cdec04561d19a9@mail.gmail.com>
To: metadataportals@yahoo.com
Cc: public-lod@w3.org
>>> You cannot expect information or more precise raw data to produce
>>> meaningful semantic content if it was never produced in a format
>>> allowing for semantic output.

I have big hopes for the NLP systems here. They are pretty advanced these
days and can only improve in the near/far future. Not all data on the
Internet is written in a manner that would be difficult for NLP systems to
parse (e.g. text using slang etc.). Many articles, such as those written on
Wikipedia or, say, times.com, or on many blogs, are of good quality, and a
large portion of them should be understandable by NLP systems.

>>> We will have to live with the fact that maybe more than half of all
>>> "content" on the Web will never lend itself to conversion into useful
>>> semantic content.

The good thing about the Web is that many important pieces of information
are duplicated in multiple places. If an NLP system fails to retrieve some
information from one source, there are other sources where it might succeed.
The good thing about the semantic web is that we only have to _get
information right once_ (or, more generally speaking, the right count
should be greater than the wrong count). This is unlike the current search
engines, which keep track of information from _all_ the sources. IMHO, if
we are able to understand even a quarter of all the information on the Web,
the semantic web would be an unparalleled success.
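To illustrate the "right count greater than wrong count" idea, here is a
minimal sketch (my own, not from any existing system) of majority voting
over hypothetical (subject, predicate) → object facts extracted by an NLP
system from several pages; the fact names and the `consensus_facts` helper
are made up for illustration:

```python
from collections import Counter

def consensus_facts(extractions):
    """Given per-source extractions mapping a (subject, predicate) pair to
    an object value, keep the value asserted by the most sources. A fact
    survives as long as correct extractions outnumber incorrect ones."""
    votes = {}  # (subject, predicate) -> Counter of candidate object values
    for source in extractions:
        for (subj, pred), obj in source.items():
            votes.setdefault((subj, pred), Counter())[obj] += 1
    # For each fact, keep the majority answer across sources.
    return {key: counter.most_common(1)[0][0] for key, counter in votes.items()}

# Hypothetical outputs of an NLP extractor run over three pages:
sources = [
    {("Paris", "capitalOf"): "France"},
    {("Paris", "capitalOf"): "France"},
    {("Paris", "capitalOf"): "Texas"},  # one source extracted it wrong
]
print(consensus_facts(sources))  # {('Paris', 'capitalOf'): 'France'}
```

One wrong extraction is outvoted by two right ones, so the stored fact ends
up correct even though no single source is trusted completely.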
Received on Sunday, 26 October 2008 16:36:26 UTC
