- From: Paul Prescod <paul@prescod.net>
- Date: Mon, 03 Feb 2003 14:37:29 -0800
- To: Jeff Bone <jbone@deepfile.com>
- CC: Sandro Hawke <sandro@w3.org>, www-tag@w3.org
Jeff Bone wrote:
>
> Almost totally agreed w/ Paul on this, but two minor nits --- tangential
> but food for thought.
>
> On Monday, Feb 3, 2003, at 15:55 US/Central, Paul Prescod wrote:
>
>> Those billions of pages are not semantic-web processable or we
>> wouldn't be arguing about how to build the semantic web. They are the
>> things talked ABOUT by the semantic web, not the nodes in the web.
>> Because they are 100% semantically ambiguous I would say that they are
>> totally valueless as part of the web.
>
> Paul, you might be overstating the case just a bit. It *could be* that
> services, e.g. Google, could perform a useful function in mining and
> making (minimal) semantic information about the existing (non-semantic)
> Web available in a machine-usable fashion. I.e., to some extent your
> comment about the existing Web being the subject of the semantic Web
> belies your comments about the value of the existing Web.

Fair enough. But if the new application is Web-smart (as Google is), then
there would probably be URIs of the form:

http://www.semgoogle.com/about=http://www.prescod.net

*That* would be the URI for the semantically meaningful information from
"http://www.prescod.net". And even so, it would be totally ambiguous
whether it is information about a web page, a person, a company, etc.

>> Plus, note that URIs are not expensive. They are cheap. Making new
>> ones is easy. We need to make new ones to have a home for the RDF data
>> anyhow. So what.
>
> I wonder about this. Google knows about ~ 3B "pages."

Pages are expensive. URIs are cheap. Wasting 3B URIs is not a tragedy
because I can generate 3B new ones with a ten-line Python script. I can
_even_ give them HTTP representations in roughly ten more lines of code
(including five lines of RDF cruft ;)).

Paul Prescod
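For a rough sense of what such a script could look like, here is a minimal sketch that mints fresh URIs and serves a trivial RDF representation for each one over HTTP. The example.org base URI, the Dublin Core property, and the handler layout are assumptions for illustration only, not the actual code the message alludes to.

    # Illustrative sketch: mint N brand-new URIs and serve a boilerplate
    # RDF/XML representation for each of them over HTTP. The base URI and
    # RDF vocabulary below are invented for the example.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    BASE = "http://example.org/resource/"   # assumed base for minted URIs

    def mint_uris(n):
        """Generate n fresh URIs: roughly the 'ten-line script' part."""
        return (BASE + str(i) for i in range(n))

    RDF_TEMPLATE = """<?xml version="1.0"?>
    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:dc="http://purl.org/dc/elements/1.1/">
      <rdf:Description rdf:about="{uri}">
        <dc:description>Placeholder description for {uri}</dc:description>
      </rdf:Description>
    </rdf:RDF>
    """

    class RDFHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Every minted URI gets the same minimal RDF back.
            uri = BASE.rstrip("/") + self.path
            body = RDF_TEMPLATE.format(uri=uri).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/rdf+xml")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        print(next(mint_uris(3_000_000_000)))   # first of three billion URIs
        HTTPServer(("", 8000), RDFHandler).serve_forever()

The point of the sketch is only that minting URIs costs a few lines and giving them HTTP representations costs a few more; nothing about the particular URI scheme or RDF shape matters to the argument.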
Received on Monday, 3 February 2003 17:37:56 UTC