Re: .htaccess a major bottleneck to Semantic Web adoption / Was: Re: RDFa vs RDF/XML and content negotiation

DNS trickery is the ultimate step towards a fully flexible architecture.
Unfortunately it requires admin rights over your own domain, something
that is extremely difficult to obtain in many companies.

A workaround would be to create a top-level domain, something like
.uris (or, more realistically, a domain such as cooluris.net), with
automatic delegation of its subdomains to the official owners of
existing domain names.
For example, the owner of datao.net would get full control of the
subdomain datao.net.uris (or datao.net.cooluris.net) for his or her
URIs, and all the URIs of his or her RDF data would live in that domain.
He or she would then CNAME it so that it resolves either to t-d-b.org
or to his or her own 303 system, and that service would in turn 303 to
the real web servers of datao.net.

This would make a clean separation of contexts between URIs and URLs.
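
To make this concrete, here is a minimal sketch (in Python, purely
illustrative; the registry, port and hostnames are my assumptions, not
a spec) of what the shared 303 service could look like once
datao.net.cooluris.net has been CNAMEd to it. It looks at the Host
header, finds the real server the owner registered, and 303s the
request there:

from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical registry: claimed subdomain -> base URL of the owner's
# real web server.
CLAIMED = {
    "datao.net.cooluris.net": "http://datao.net",
}

class SeeOtherHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        host = self.headers.get("Host", "").split(":")[0].lower()
        base = CLAIMED.get(host)
        if base is None:
            self.send_error(404, "Unknown subdomain")
            return
        # 303 See Other: the URI names a thing, and the Location points
        # at a document about that thing on the owner's own server.
        self.send_response(303)
        self.send_header("Location", base + self.path)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), SeeOtherHandler).serve_forever()

The point is only that a single shared service plus one CNAME per data
publisher is enough; where the Location actually points stays entirely
under the publisher's control.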

I advocate the creation of a .uri top-level domain: that would be the
domain of semantic data. But because a top-level domain is not
something easy to get, we could consider something more classical,
such as cooluris.net.

The crucial point is that this domain would delegate its subdomains to
the official owners of the corresponding real domain names (for
example, the owner of datao.net could claim full control of
datao.net.cooluris.net).

Provided that these services (the CNAME to the 303 host, plus the 303
redirects to the actual web data) come with default behaviour that
makes things simple for beginners, we would have an efficient
infrastructure.
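
For instance (again only a sketch, with a hypothetical suffix), the
default rule could be: if the owner has claimed a subdomain but
configured nothing else, strip the .cooluris.net suffix and 303 to the
same path on the original domain:

def default_target(host, path, suffix=".cooluris.net"):
    """Fallback when the owner registered no explicit mapping:
    datao.net.cooluris.net/foo -> 303 to http://datao.net/foo"""
    if host.endswith(suffix):
        return "http://" + host[:-len(suffix)] + path
    return "http://" + host + path  # not one of our subdomains, leave as is

So a beginner who only adds one CNAME record already gets working,
dereferenceable URIs, and can later override the default with an
explicit mapping or his or her own 303 system.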

On Thursday, July 9, 2009, Christopher St John <ckstjohn@gmail.com> wrote:
> On Thu, Jul 9, 2009 at 10:46 AM, Pierre-Antoine
> Champin<swlists-040405@champin.net> wrote:
>>
>> However, some people will still be concerned about naming their resources
>> under a domain that is not theirs. That is not only a matter of
>> URI-prettiness, but also of relying on an external service, which may cease
>> to exist tomorrow.
>>
>
> I'm switching uridirector.praxisbridge.org[1] to optionally
> include accept headers in choosing a template. That should
> give people a quick low-effort[2] way to get up and running
> without having to warp their URIs to match a third-party
> service (and without having to commit to using the
> service once another option is available).
>
> It seems pretty clear that people should (a) only mint URLs
> in domains they control and (b) maybe think about including a
> sub-domain in the URIs for specific data sets (and thereby
> get the power of the domain name system on their side
> when they need to move the data later on).
>
> Note that following (a) doesn't mean you need to run your
> own server; it's sufficient to just register the domain.
> Smart-ish redirectors (third party or local) will then allow
> you a lot of flexibility in choosing exactly where the data
> is located.
>
> -cks
>
>
> [1] Like purl or t-d-b, only with Host header
> recognition, so you can CNAME your own domains over and
> maintain complete control over your URIs; see previous email:
> http://lists.w3.org/Archives/Public/public-lod/2009Jul/0072.html
> It's not quite fully baked, but it's getting there.
>
> [2] You need to know what a CNAME is, and have access to
> your DNS configuration. But you're not minting URLs in domains
> you don't have administrative control over, are you?
>
> --
> Christopher St. John
> cks@praxisbridge.com
> http://praxisbridge.com
> http://artofsystems.blogspot.com
>
>
