W3C home > Mailing lists > Public > www-tag@w3.org > February 2003

RE: Proposed issue: site metadata hook

From: <Patrick.Stickler@nokia.com>
Date: Tue, 18 Feb 2003 08:19:20 +0200
Message-ID: <A03E60B17132A84F9B4BB5EEDE57957B5FBB28@trebe006.europe.nokia.com>
To: <chris@w3.org>, <www-tag@w3.org>
Cc: <timbl@w3.org>


> PSnc> What you are talking about is using technical means to affect 
> PSnc> a social situation by having the "solution" impinge upon the
> PSnc> rights of the server *owner*.
> 
> No, I am not. But you are.
> 
> PSnc> The technology should not mandate such social issues.
> 
> I agree, so your proposal to enforce that anyone who does not own a
> server cannot have any metadata is an attempt to impose a social issue
> and should be resisted.

It seems we are talking past each other.

I'm going to suggest that we both are in favor of the architecture
*allowing* all users to be able to control their own personal web
spaces, even when they do not own the server.

But that the architecture itself does not mandate specific rights
of control for all users against the wishes of the server owner.

Thus, if the server owner wishes to allow user-specific control,
the architecture should take that into consideration, and support
that level of resolution.

But the architecture should not permit users to circumvent the
explicit wishes of the server owner.

Yes?

> Let's consider an architecture where the corporation owns /,
> accounting owns /corporate/accounting, and marketing owns /comm/pr.
> 
> Let's assume that the corporation decides that it does not want /
> crawled, and that marketing wants /comm/pr crawled.

Then I would say too bad for /comm/pr. If the owner says "this
server will not be crawled" then it shouldn't, no matter what
any user says.

HOWEVER, if the corporation is saying "only areas explicitly 
specified to be crawled, by the users responsible for those
areas, may be crawled" then that is something different.
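The distinction between those two owner policies can be sketched in a few lines. This is a hedged illustration only: the policy names and the opt-in model below are my invention, not anything from a spec or from this thread.

```python
# Hypothetical sketch of "owner wins, users refine" precedence.
# None of these names come from any standard; they only illustrate
# the two cases discussed above.

OWNER_DENY_ALL = "deny-all"    # "this server will not be crawled"
OWNER_OPT_IN = "user-opt-in"   # only explicitly opted-in areas may be crawled

def may_crawl(owner_policy, user_opt_ins, path):
    """Return True if a robot may crawl `path` under the owner's policy.

    user_opt_ins: set of path prefixes whose responsible users
    have said "my area may be crawled".
    """
    if owner_policy == OWNER_DENY_ALL:
        # The owner's blanket refusal overrides any user's wishes.
        return False
    if owner_policy == OWNER_OPT_IN:
        # Users may opt their own areas in, but only because
        # the owner has delegated that decision to them.
        return any(path.startswith(prefix) for prefix in user_opt_ins)
    return False

# Owner forbids all crawling: marketing's opt-in for /comm/pr is moot.
print(may_crawl(OWNER_DENY_ALL, {"/comm/pr"}, "/comm/pr/news.html"))  # False
# Owner delegates to users: the same opt-in now takes effect.
print(may_crawl(OWNER_OPT_IN, {"/comm/pr"}, "/comm/pr/news.html"))    # True
```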

I understand (now a bit better) that you are asking for the
architecture to allow users to express their wishes over their
own content, and for robots to take that information into
account *IFF* the server owner permits it.
(It's the IFF I thought you were leaving out...)

But that the present architecture is too coarse to allow for
efficient management of user-specific wishes in that regard
and thus needs to be refined.

Right?

> You seem to worry that I want to empower marketing to override the
> settings on /, whereas in fact I want them to be able to control their
> own little bit and not be able to control everyone else's bits.

Fair enough, but I've never been talking about one user controlling
another user's space. I've been talking about a user overriding
the control of the server owner. And that is what I took issue with.
Perhaps I misunderstood you -- I'm still not completely sure I didn't.

A specific question to help me determine that: If the server owner
says "no crawlers at all on this server" and a tenant says "all my
own content can be crawled", should the tenant's content be crawled?

> PSnc> And as a final point, wouldn't the ability to express all the
> PSnc> complexity of site configuration and the rights of tenants,
> PSnc> etc. be so much easier if one could just use RDF,
> 
> I don't recall excluding this.
> 
> PSnc> and then
> PSnc> just ask the site to tell us if e.g. a robot can inspect
> PSnc> the web space of tenant "John Doe"...???
> 
> And you would do that how?

By obtaining and inspecting the RDF description of the site,
examining those properties that describe robot behavior, and
recursively obtaining and inspecting the RDF descriptions of
whatever resources are relevant to answering the question,
including the description of the particular user space, etc.
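That recursive inspection might look roughly like the following. To keep the sketch self-contained, plain dicts stand in for the RDF descriptions that MGET would return, and the property names ("robotPolicy", "delegatesTo") are invented for illustration; they belong to no real vocabulary.

```python
# Hedged sketch of recursively inspecting site descriptions to answer
# "may a robot crawl this user space?". The descriptions and property
# names are hypothetical stand-ins for RDF graphs fetched per URI.

DESCRIPTIONS = {
    "http://example.org/": {
        "robotPolicy": "user-opt-in",
        "delegatesTo": ["http://example.org/~jdoe/"],
    },
    "http://example.org/~jdoe/": {
        "robotPolicy": "allow",
    },
}

def robot_allowed(uri, seen=None):
    """Recursively inspect descriptions to decide if a robot may crawl `uri`."""
    seen = seen if seen is not None else set()
    if uri in seen:        # guard against cyclic delegation
        return False
    seen.add(uri)
    desc = DESCRIPTIONS.get(uri, {})
    policy = desc.get("robotPolicy")
    if policy == "allow":
        return True
    if policy == "user-opt-in":
        # The site defers to the descriptions of the delegated user spaces.
        return any(robot_allowed(d, seen) for d in desc.get("delegatesTo", []))
    return False

print(robot_allowed("http://example.org/~jdoe/"))  # True: the user opted in
```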

> PSnc> Why do we need anything more than the semantic web extensions
> PSnc> to the present web architecture
> 
> If I knew clearly what those were then I might be able to answer you.
> But at present there does not seem to be a list of them.

They have been mentioned repeatedly in this very thread:

MGET {URI}     returns an RDF description of the resource denoted by the URI
MPUT {URI}     adds statements to the knowledge base describing the resource
MDELETE {URI}  removes statements from the knowledge base describing the resource
MUPDATE {URI}  replaces/adds statements to the knowledge base describing the resource
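For concreteness, here is what an MGET request might look like on the wire. The thread names only the methods; the assumption that they ride on ordinary HTTP/1.1 request syntax, and the headers shown, are mine.

```python
# Hedged sketch: formatting a hypothetical MGET request, assuming the
# proposed methods use standard HTTP/1.1 framing. Nothing here is from
# an actual specification.

def build_mget(host, path):
    """Format a hypothetical MGET request for the resource at `path`."""
    return (
        f"MGET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Accept: application/rdf+xml\r\n"
        "\r\n"
    )

req = build_mget("example.org", "/~jdoe/")
print(req.splitlines()[0])  # MGET /~jdoe/ HTTP/1.1
```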

Patrick
Received on Tuesday, 18 February 2003 01:19:48 GMT
