Re: Summary various 'about:' solutions (rough first cut)

On 08/03/2004, at 2:54 PM, Patrick Stickler wrote:

>> ->	MGET
>> 	Treat the URL as a URL - but add an MGET method which, unlike
>> 	the GET method, returns information about what the GET method
>> 	would have returned. [1]
>> 	E.g. the URL is no longer just the location of the resource - but
>> 	also the location of the data about the resource.
> Er. Not quite. The URI denotes a resource. One can use GET to request
> a representation of that resource. Or one can use MGET to request a
> concise bounded description of that resource. There is no need to posit
> any concept of "location"; either of the resource or its description.

Thanks - what I meant was more that, in comparison with some of the
other approaches,

->	MGET assumes that there is a place (i.e. a location) you can 'get'
	things from.

Whereas some of the other approaches also work on a file you happen
to find on your disk (e.g. the Creative Commons case).
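
(E.g. very roughly - and assuming the Creative Commons style of embedding
the description in an HTML comment, which is just my reading of that case -
something along these lines works on a purely local file, with no server
involved at all:)

	import re

	# Assumed convention: the RDF description is embedded in an HTML
	# comment, as the Creative Commons licence blocks were.
	with open("page.html") as f:
	    html = f.read()

	match = re.search(r"<!--\s*(<rdf:RDF.*?</rdf:RDF>)\s*-->", html, re.DOTALL)
	if match:
	    rdf = match.group(1)  # the embedded description - no GET/MGET needed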

>> Each of the above requires:
>> -	Creation of data about the object somewhere/somehow (the
>> 	metadata)
>> 	Tracking (by table or by 'rule/regex') of the locations
>> 	or otherwise of that metadata.
>> 	->	E.g. register them in some data base or
>> 	->	have a rule like
>> 		e.g.	(.*)URL.html ---> $1URL.html.rdf
>> 			(.*)URL.jpg   ---> $1URL.jpg.rdf
> But with some approaches, such as URIQA, the relation between
> resource and description is left up to each web authority rather
> than mandated globally or requiring some centralized machinery.

Absolutely - in fact I think that the 'Each of the above' is key - i.e.
all the approaches are very similar in that respect.
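
(For what it's worth, a rough sketch of what such a 'rule'-based mapping
could look like - the 'append .rdf' convention and the example URL are just
assumptions for illustration:)

	import re

	# Assumed convention: the description lives at <resource URL> + ".rdf".
	RULES = [
	    (re.compile(r"^(.*\.html)$"), r"\1.rdf"),
	    (re.compile(r"^(.*\.jpg)$"), r"\1.rdf"),
	]

	def metadata_url(url):
	    """Map a resource URL to the (assumed) URL of its metadata."""
	    for pattern, replacement in RULES:
	        if pattern.match(url):
	            return pattern.sub(replacement, url)
	    return None  # no rule matched - metadata location unknown

	print(metadata_url("http://example.org/page/URL.html"))
	# -> http://example.org/page/URL.html.rdf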

> In fact, URIQA differs from the other approaches in that it is
> not actually mandatory that each description be given a distinct
> URI (though a distinct URI would be needed if one were to make
> statements about the description).

Thanks! - this is a good point - I'll send out an updated version of the
comparison list later today.

>> MGET:	extra code in the origin (web) server to understand MGET
>> 		extra text in the document describing HTTP
> ??? Do you mean modification of the HTTP spec? I don't see that
> as a requirement. The specification of WebDAV is distinct from HTTP.
> I see no reason why the specification of URIQA would not be distinct
> from HTTP.

Will make this clear - what I meant was - some sort of document describing
what 'MGET' is.

.. snip - text about URL of resource being also URL of metadata ..snip..
> I'm sorry, but I don't see how the reference thread has anything to do
> with your opinion that URIQA changes the semantics of URLs. Can you
> elaborate specifically about why you hold such an opinion?

Ack - will do so in a separate email; and reword.

>> 		extra code in the agent (client/browser) to do an MGET
>> 		and deal with the result.
> ???
> I don't see how URIQA imposes any "extra" code in the agent (and, 
> believe

Well - by the agent I mean the thing extracting the information; i.e.
it needs to do a

	write( socket, "MGET ...." )

at some point - in addition to its 'GET' and 'HEAD' or whatever else it
does?
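
(For concreteness, a minimal sketch of what that might look like - the host,
path and Accept type are placeholders, and this assumes the server in
question actually implements MGET:)

	import socket

	# Hypothetical host and path; MGET is simply sent as the request verb.
	host, path = "example.org", "/some-resource"

	sock = socket.create_connection((host, 80))
	request = (
	    "MGET %s HTTP/1.1\r\n"
	    "Host: %s\r\n"
	    "Accept: application/rdf+xml\r\n"
	    "Connection: close\r\n\r\n" % (path, host)
	)
	sock.sendall(request.encode("ascii"))

	# Read whatever comes back (headers plus the RDF description, if any).
	response = b""
	while True:
	    chunk = sock.recv(4096)
	    if not chunk:
	        break
	    response += chunk
	sock.close()
	print(response.decode("utf-8", "replace"))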

> If anything, URIQA is the lightest, most efficient approach insofar
> as client implementation is concerned.

Actually I can see the people who add a header (which is parsed already)
or a <!-- field --> in the HTML arguing the same :-)

>> HDRS:	extra code in the origin (web) server to add HTTP headers.
>> 		simple document describing the extra headers
>> 		extra code in the agent (client/browser) to do
>> 		something useful with the URL of the metadata.
> Doesn't scale. Leads to potentially catastrophic side effects if the
> server misunderstands or ignores the headers. See the FAQ section
> of the URIQA spec for more.

Personally I could not agree more.
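
(For completeness, on the client side the header approach would look roughly
like the sketch below - the header name here is purely hypothetical, not
taken from any of the actual proposals:)

	import urllib.request

	# Hypothetical response header advertising where the description lives.
	METADATA_HEADER = "X-Metadata-Location"

	response = urllib.request.urlopen("http://example.org/page.html")
	meta_url = response.headers.get(METADATA_HEADER)
	if meta_url:
	    description = urllib.request.urlopen(meta_url).read()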

>> MGET:	Pro:	you can get at the metadata right away
>> 			with an MGET
>> 		-	Little code in the client/agent needed.
> No special code whatsoever in the client/agent needed.

Respectfully I disagree - anything which needs to get at the
descriptions will need to do an 'MGET' and have some code (no
matter how simple) to do that.

>> 		-	Every server needs to be changed.
> No. Not every server.
> Only those servers used to publish resource descriptions.

I'll make that clear in the next rev.

>> 		-	Corporate DNS often managed by
>> 			god-like bearded wizards whom mere
>> 			employees usually do not want to anger.
> Much to my own personal disappointment (really) I think this
> is the nail in the coffin for the DDDS approach.

I can understand that vision.

> DNS is also widely (and probably justly) considered fragile and
> when DNS fails, the network dies, so repurposing DNS in this
> way seems dangerous at best.

Luckily those who actually maintain DNS generally understand it well
enough to know that it is far from fragile, AND:

> Perhaps what is needed is a "parallel" DNS, isolated from the
> "real" or "traditional" DNS, but which provides for less critical
> functionality such as DDDS, without risk of impacting the critical
> services.

The notion of a 'parallel' DNS is kind of uncalled for though - as DNS
essentially already operates in such a fashion. E.g. if you do a dig
you'll note that the real machine/email DNS zones are kept very
separate from the DDDS zones for that very reason; i.e. they are
parallel where needed.
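
(As an aside - a rough sketch of what a DDDS-style lookup amounts to in
practice; the zone name is made up, and this assumes the third-party
dnspython library:)

	import dns.resolver  # assumes the dnspython package

	# Hypothetical DDDS zone, kept separate from the 'real' host/MX zones.
	answers = dns.resolver.resolve("urn.example.org", "NAPTR")
	for rdata in answers:
	    # Each NAPTR record carries one rewrite rule (RFC 3403).
	    print(rdata.order, rdata.preference, rdata.flags,
	          rdata.service, rdata.regexp, rdata.replacement)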

> I hope what I provided above is considered as "help" ;-)

Absolutely - thanks a lot!


Received on Monday, 8 March 2004 09:14:16 UTC