- From: David Powell <djpowell@djpowell.net>
- Date: Tue, 9 Mar 2004 22:11:06 +0000
- To: www-rdf-interest@w3.org
I'm unsure about MGET because it seems to segment URI space: applications that deal with both data and metadata need to know whether they should be using GET or MGET, which isn't possible for applications such as HTML, XSLT, Babelfish, archive.org, etc. I think it is valuable for metadata to be obtainable via GET, because there are far more agents in the wild that support GET than MGET.

However, I agree that some sort of extension is needed to allow clients to obtain metadata about a resource, so I was thinking about how MGET could be made more GET-friendly, so that the metadata is really part of the web.

How about if it were MANDATORY for responses to MGET to carry a Content-Location header giving a URL from which the metadata can be retrieved via GET? In practice the URIQA implementation provides GET'able URIs for the metadata anyway, and I imagine this would be a fairly common technique for ensuring compatibility with browsers, so it should be cheap to implement.

Ensuring GET access to MGET content has a number of advantages. It wouldn't solve the problem of metadata discovery for legacy clients, but it would allow clients incapable of performing MGET requests to access and process metadata once the URL for it has been discovered on their behalf by an MGET-enabled client. It also allows dual implementations of metadata discovery, e.g. a webpage could support MGET and also use one of the other methods, such as a <link> tag. Finally, it makes it possible to obtain meta-metadata by performing an MGET on the URL given as the Content-Location; in this scenario, perhaps an MHEAD method would be useful? A rough sketch of the whole round trip is below.

-- Dave
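To make that round trip concrete, here is a minimal client-side sketch in Python. Everything about the server, the resource, and the URL shapes below is hypothetical; only the MGET method and the proposed mandatory Content-Location header come from the discussion above.

```python
import urllib.parse
import urllib.request

# Hypothetical resource on a hypothetical URIQA-aware server.
RESOURCE = "http://example.org/some/resource"

# Step 1: an MGET-aware client asks the server to describe the resource.
req = urllib.request.Request(RESOURCE, method="MGET")
with urllib.request.urlopen(req) as resp:
    metadata = resp.read()
    # Step 2: under this proposal the response MUST carry a Content-Location
    # header naming a URL at which the same metadata is available via GET.
    location = resp.headers.get("Content-Location")

if location:
    # Content-Location may be relative, so resolve it against the request URI.
    metadata_url = urllib.parse.urljoin(RESOURCE, location)

    # Any GET-only agent (browser, XSLT processor, archive crawler, ...) can
    # now be handed this URL and fetch the metadata without knowing MGET.
    with urllib.request.urlopen(metadata_url) as resp:
        same_metadata = resp.read()

    # Step 3: meta-metadata, as suggested above: an MGET (or, hypothetically,
    # an MHEAD) performed against the metadata's own URL.
    meta_req = urllib.request.Request(metadata_url, method="MGET")
    with urllib.request.urlopen(meta_req) as resp:
        meta_metadata = resp.read()
```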
Received on Tuesday, 9 March 2004 17:15:31 UTC