Internet Data Base

L.S.,



Problem:
The internet has a problem. The problem is data. Data is currently 
scattered across separate files and separate, unrelated databases, 
with no links between them.



Solution:
What about making a specification for storing data in a tree-based 
(XML-like) database? By enabling linking between data (nodes), we ensure 
that each piece of data can be unique: stored and maintained in one 
location. We can use XPath to retrieve the data from this database.
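A minimal sketch of the linking idea, using Python's standard library and a hypothetical `<link ref="..."/>` convention (the element name, the `ref` attribute, and the sample data are all made up for illustration): data lives in exactly one node, and other nodes point at it.

```python
import xml.etree.ElementTree as ET

# One document; in the real proposal the link could cross servers.
doc = ET.fromstring("""
<root>
  <contacts>
    <contact id="teun"><name>Teun van Eijsden</name></contact>
  </contacts>
  <photoalbum>
    <photo file="beach.jpg"><owner><link ref="teun"/></owner></photo>
  </photoalbum>
</root>
""")

def resolve(node):
    """Follow a <link ref="..."/> to the single node that owns the data."""
    link = node.find("link")
    if link is None:
        return node
    return doc.find(".//*[@id='%s']" % link.get("ref"))

owner = resolve(doc.find(".//photo/owner"))
print(owner.find("name").text)  # prints "Teun van Eijsden"
```

Because the name is stored only under the contact node, updating it there updates every place that links to it.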



How:
The data is stored as XML on different servers.

A trusted party could perhaps create/maintain root nodes (root node 
names, like categories) so that the database is independent of IP 
addresses and domain names.

The data is transmitted as XML (Unicode, encoded as UTF-8) over HTTP, 
possibly gzipped and/or encrypted.
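The wire format described above can be sketched with the standard library: XML serialized as UTF-8 bytes, then gzipped before it travels over HTTP (encryption would be a separate layer, e.g. TLS). The element names are illustrative only.

```python
import gzip
import xml.etree.ElementTree as ET

node = ET.Element("contact", id="teun")
ET.SubElement(node, "name").text = "Teun van Eijsden"

payload = ET.tostring(node, encoding="utf-8")  # UTF-8 encoded XML bytes
compressed = gzip.compress(payload)            # what travels over HTTP

# The receiving server reverses the steps:
received = ET.fromstring(gzip.decompress(compressed))
print(received.find("name").text)  # prints "Teun van Eijsden"
```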

The data is queried by XPath (slightly modified, because the data 
contains links and is spread over multiple machines).
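One way the "spread over multiple machines" part might work, as a toy sketch: a hypothetical cross-server link carries a server name plus a path, and the query engine follows it. The server names, attribute names, and sample data are all assumptions for illustration.

```python
import xml.etree.ElementTree as ET

servers = {  # stand-ins for two XML databases on different hosts
    "photos.example": ET.fromstring(
        '<album><photo file="beach.jpg">'
        '<owner server="people.example" path=".//contact[@id=\'teun\']"/>'
        '</photo></album>'),
    "people.example": ET.fromstring(
        '<contacts><contact id="teun"><name>Teun van Eijsden</name>'
        '</contact></contacts>'),
}

def query(server, path):
    node = servers[server].find(path)
    if node is not None and node.get("server"):  # a cross-server link
        return query(node.get("server"), node.get("path"))
    return node

print(query("photos.example", ".//photo/owner").find("name").text)
# prints "Teun van Eijsden"
```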

The data is secured by some access control mechanism per subtree/node.
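The post leaves the access control mechanism open; one hypothetical shape for per-subtree rules is a prefix-matched rule list, where a rule on a subtree covers everything beneath it. The paths and user names are invented for the example.

```python
# First matching prefix rule wins; a rule on a subtree covers all
# nodes beneath it.
acl = [
    ("/root/contacts", {"teun"}),      # private subtree
    ("/root", {"teun", "anyone"}),     # everything else is public
]

def allowed(user, path):
    for prefix, users in acl:
        if path.startswith(prefix):
            return user in users
    return False  # no rule means no access

print(allowed("anyone", "/root/photoalbum"))  # True
print(allowed("anyone", "/root/contacts"))    # False
```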

We'd need to define a way to add/modify/delete data from this database.
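The add/modify/delete operations are undefined in the proposal; as a stand-in, here they are as direct tree edits on an in-memory database. In practice they would presumably be mapped onto HTTP requests.

```python
import xml.etree.ElementTree as ET

db = ET.fromstring("<root><contacts/></root>")

# add: create a node under an existing one
contact = ET.SubElement(db.find("contacts"), "contact", id="teun")

# modify: change a node in place
name = ET.SubElement(contact, "name")
name.text = "Teun van Eijsden"

# delete: remove a node from its parent
db.find("contacts").remove(contact)

print(len(db.find("contacts")))  # 0 again
```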

We'd need to define a whole set (library) of DTDs/Schemas/RELAX NG 
grammars for specific subsets of data (a photo album, corporate contact 
information, whatever). This is important for machine-readability and 
for ensuring that metadata is stored with the data. (We want loads of 
metadata; the lack of metadata is what makes the internet hard to 
search currently.)
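To make the schema idea concrete, here is a hand-rolled stand-in for what a photo-album schema would enforce (a real DTD/Schema/RELAX NG validator would do this properly); the required metadata fields are purely hypothetical.

```python
import xml.etree.ElementTree as ET

REQUIRED_PHOTO_METADATA = {"date", "location", "caption"}  # hypothetical

album = ET.fromstring("""
<photoalbum>
  <photo file="beach.jpg">
    <date>2002-12-12</date>
    <location>Scheveningen</location>
    <caption>Winter beach</caption>
  </photo>
</photoalbum>
""")

def valid(album):
    """Every photo must carry the metadata the schema demands."""
    for photo in album.findall("photo"):
        present = {child.tag for child in photo}
        if not REQUIRED_PHOTO_METADATA <= present:
            return False
    return True

print(valid(album))  # True
```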

(We could also define how different database servers contact/interact 
with each other, maybe in a p2p (peer-to-peer) way, with a method to 
find each other (rendezvous) and methods to cache data locally.)
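The local-caching idea above can be sketched in a few lines: a peer keeps copies of remote answers so repeated queries stay local. Rendezvous/discovery is out of scope here, so the "remote server" is faked with a counter.

```python
calls = {"n": 0}

def remote_fetch(path):
    calls["n"] += 1  # pretend this is a network round trip
    return "<contact id='teun'/>"

cache = {}

def cached_fetch(path):
    if path not in cache:
        cache[path] = remote_fetch(path)
    return cache[path]

cached_fetch("/contacts/teun")
cached_fetch("/contacts/teun")
print(calls["n"])  # only one real round trip
```

A real design would also need cache invalidation when the owning node changes, which this sketch ignores.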




If you have any comments on this idea feel free to bash it, praise it, 
or improve it.

Kind Regards, Teun van Eijsden

Received on Thursday, 12 December 2002 10:34:35 UTC