Summary of various 'about:' solutions (rough first cut)


Solutions overheard in the corridor here in Cannes over the last 48
hours for the tell-me-'about:'-that-URI "problem", or the shallow side
of the "bootstrap problem" (the ideas below are from others; the
mistakes are mine) (*):


	Treat the URL as a URL - but add an MGET method which, unlike
	the GET method, returns information *about* what the GET method
	would have returned. [1]

	I.e. the URL is no longer just the location of the resource - it
	is also the location of the data about the resource.
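
As a rough sketch of what that could look like on the wire - assuming
MGET simply mirrors GET syntax, which is an assumption on my part, not
something [1] pins down:

```python
# Hypothetical wire format for an MGET request, assuming it mirrors
# a plain GET request; the header names below are ordinary HTTP.
def build_mget_request(host, path):
    """Build the raw request line and headers for an MGET."""
    return (
        "MGET %s HTTP/1.1\r\n" % path
        + "Host: %s\r\n" % host
        + "Accept: application/rdf+xml\r\n"
        + "\r\n"
    )

print(build_mget_request("example.org", "/doc.html"))
```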


	Use DDDS on the URI to get a list of protocols/services available
	for a given URI; such as the URL, the resource itself, a canonical
	name or data about the resource (e.g. RDF).
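
For illustration, a NAPTR record carrying such a rule could look like
the line below; the service tag 'http2rdf' is made up for the example
(field layout: order, preference, flags, service, regexp, replacement):

```
example.org.  IN NAPTR 100 10 "u" "http2rdf" "!^(.*)$!\\1.rdf!" .
```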


	Add a few headers to the HTTP reply of a GET/HDRS on the
	actual resource which include the location of the metadata
	about the resource.
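
A client-side sketch of that idea - the 'Metadata-Location' header
name is invented here for the example; any agreed-upon name would do:

```python
# Pull the metadata URL out of HTTP response headers; the
# "Metadata-Location" header name is hypothetical.
def metadata_location(headers):
    """Return the advertised metadata URL, or None if absent."""
    for name, value in headers.items():
        # HTTP header names are case-insensitive.
        if name.lower() == "metadata-location":
            return value
    return None

hdrs = {"Content-Type": "text/html",
        "Metadata-Location": "http://example.org/doc.html.rdf"}
print(metadata_location(hdrs))  # http://example.org/doc.html.rdf
```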


	A fixed rule for how to rewrite a source URL into a URL for
	the metadata. I.e. add '.rdf' to the end to get the RDF about
	the document, if it exists.

Each of the above requires:

-	Creation of data about the object somewhere/somehow.

-	Tracking (by table or by 'rule/regex') of the locations,
	or otherwise, of that metadata.

	->	E.g. register them in some database, or

	->	have a rule like
		e.g.	(.*)URL.html ---> $1URL.html.rdf
			(.*)URL.jpg   ---> $1URL.jpg.rdf
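
In code, such a rewrite rule amounts to a one-line substitution; a
minimal sketch, using the '.rdf'-suffix convention from above (the
example URL is made up):

```python
import re

# Rewrite a source URL into the URL of its metadata by appending
# ".rdf", in the spirit of the (.*) ---> $1.rdf style rule above.
def metadata_url(url):
    """Return the candidate metadata URL for a given source URL."""
    return re.sub(r"^(.*)$", r"\1.rdf", url)

print(metadata_url("http://example.org/page.html"))
# http://example.org/page.html.rdf
```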

Each of the above requires code and/or changes in the server,
the client, or both:

MGET:	extra code in the origin (web) server to understand MGET

		extra text in the document describing HTTP

		change of the semantics of the URL (my opinion - see
		the thread about *exactly* that on www-rdf-rules from
		Patrick, Danny and Graham).

		extra code in the agent (client/browser) to do an MGET
		and deal with the result.

DDDS:	one or more lines in the fqdn's DNS zone, the
		so-called NAPTR records.

		no extra documents needed - RFC standards track.

		extra code in the agent (client/browser) to do the
		lookup and use the URL returned.

HDRS:	extra code in the origin (web) server to add HTTP headers.

		simple document describing the extra headers

		extra code in the agent (client/browser) to do
		something useful with the URL of the metadata.

RULE:	simple document describing the rule.

		extra code required in the agent (client/browser)
		to apply the rule and fetch the result.
Some differences I perceive between them:

MGET:	Pro:	you can get at the metadata right away
			with an MGET

		-	Little code in the client/agent needed.

		Con:	protocol change

		-	not too rich - all you can do is get the
			metadata over HTTP.

		-	significant web server code needed.

		-	Corporate webservers are often managed by
			marketing drones; hard to get them changed.

		-	Every server needs to be changed.

DDDS:	Pro:	All on standards track - no new
			documents needed; all done long ago.

		-	No changes except for an entry in
			DNS needed.

		-	Often just a single line is needed, especially
			for the (.*) -> $1.rdf rule/substitution case.

		-	Can do more than just 'about' data; can
			deal with other protocols, dual-use is
			possible (e.g. LSID for the advanced
			browser of the biologist, HTTP for the
			mere mortal).
		-	Network/speed/resource wise
			very cheap.

		-	Easy to lay blanket rewrite across
			all servers in the company; no need
			to change -any- web server; just
			need to add one for the metadata.

		-	NAPTR already pops up in this
			sort of use in Liberty, SIP/VoIP,
			RFID, etc.

		-	Positive and negative caching plus a
			caching hierarchy are natural to DNS and
			already deployed.

		Con:	 DNS perceived as very 'scary'.

		-	Corporate DNS is often managed by
			god-like bearded wizards whom mere
			employees usually do not want to anger.

		-	Requires 10-20 lines of DDDS algorithm
			in the client which interacts with the
			bind/DNS library of the system.

HDRS:	Pro:	People add headers all the time, done
			a lot. Easy to parse for the client.

		-	Though every server needs to be changed,
			the URL can refer to a central one.

		Con:	To get the metadata location you need
			a GET/HDRS over TCP first.

		-	Corporate webservers are often managed by
			marketing drones; hard to get them changed.

		-	not too rich - all you can do is get the
			metadata over HTTP.

		-	Code needed in Server and in Client.

		-	Every server needs to be changed.

RULE:	Pro:	Just client side.
		-	Trivial to implement.

		Con:	URL space pollution.

		-	Corporate webservers are often managed by
			marketing drones; hard to get them changed.

		-	not too rich - all you can do is get the
			metadata over HTTP.

		-	If there is no metadata, you only find out
			after a full-blown TCP/GET.
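
On the DDDS point above - the '10-20 lines of algorithm' the client
needs - the core of it is applying the NAPTR regexp field to the URI.
A minimal sketch, ignoring order/preference, flags, loop detection
and the DNS lookup itself:

```python
import re

# Apply a NAPTR "regexp" field (delimited "!pattern!replacement!")
# to a URI, as in the DDDS algorithm (RFC 3402/3403).  A real
# client must also honour order/preference, flags and loop limits.
def apply_naptr_regexp(uri, naptr_regexp):
    delim = naptr_regexp[0]                  # usually "!"
    pattern, replacement, flags = naptr_regexp[1:].split(delim)
    # NAPTR backreferences \1..\9 match Python's re.sub syntax.
    return re.sub(pattern, replacement, uri)

print(apply_naptr_regexp("http://example.org/doc.html",
                         r"!^(.*)$!\1.rdf!"))
# http://example.org/doc.html.rdf
```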

Did I miss anything? Help needed here :-)



(Thanks Graham, Andy, Jeremey, Patrick, Libby, Dan, Alberto - in no
particular order - for feedback and thought-provoking/shaping
discussions.)
*: 	I am lumping together several concepts; in most cases people
	mean "metadata about the resource" in the sense of the "concise
	bounded description" [2] or as a synonym of "RDF data object"
	[3][4] - i.e. "give me the location (URI) of the metadata about
	that URI" or "give me the location (URI) of metadata about
	anything *you* know about that other URI", etc.


Received on Thursday, 4 March 2004 09:26:00 UTC