
Re: RDF query and Rules - my two cents

From: Patrick Stickler <patrick.stickler@nokia.com>
Date: Thu, 20 Nov 2003 15:35:48 +0200
Cc: www-rdf-interest@w3.org
To: "ext Bill de hÓra" <dehora@eircom.net>
Message-Id: <736087BE-1B5E-11D8-8364-000A95EAFCEA@nokia.com>

On Thursday, Nov 20, 2003, at 13:16 Europe/Helsinki, ext Bill de hÓra wrote:

> Patrick Stickler wrote:
>> On Thursday, Nov 20, 2003, at 11:49 Europe/Helsinki, ext Bill de hÓra 
>>  wrote:
>>> I don't understand the distinction above between the name of a
>>> service interface and a URI. What's the difference?
>> I.e. the difference between having to know http://example.com/blargh
>> (about which you want a description) and being able to simply ask
>> [..]
> Ok. You want URI opacity.

Absolutely. I consider that a core requirement for achieving
a truly scalable, flexible, and ubiquitous SW.
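
For concreteness, here is a toy sketch of the two styles being contrasted. Only the blargh URI and the MGET verb come from this discussion; the ";meta" convention, the function names, and the wire details are my own invention for illustration:

```python
from urllib.parse import urlsplit

# Style 1: to fetch a description, the client must *construct* a second
# URI from the one it holds, using a naming convention it somehow knows
# in advance. The ";meta" suffix is a hypothetical convention; any such
# scheme peeks inside the URI and so breaks URI opacity.
def description_uri(resource_uri):
    return resource_uri + ";meta"

# Style 2: the client asks about the one URI it already has, using a
# distinct verb. The URI itself stays opaque.
def mget_request(resource_uri):
    parts = urlsplit(resource_uri)
    return (
        f"MGET {parts.path or '/'} HTTP/1.1\r\n"
        f"Host: {parts.netloc}\r\n"
        "Accept: application/rdf+xml\r\n"
        "\r\n"
    )

print(description_uri("http://example.com/blargh"))
print(mget_request("http://example.com/blargh"))
```

In style 1 every client and server must agree on the derivation rule; in style 2 they only need to agree on the verb's semantics.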

>>> I think I would prefer a name other than MGET (maybe DESCRIBE)
>> Call it BLARGH for all I care, as long as the semantics are clear.
> I'm glad you appreciate the value of text based protocols.


>>> Presumably if the method is actually useful, it's useful on both 
>>> webs.
>> I don't see that as a requirement. These methods are specifically
>> to address *SW* functionality, so I don't see how they have to have
>> any relevance whatsoever to the Web.
> Yikes. Remind me again, what relevance does the SW have to the web? 
> And if the method has no web relevance, why do we want to run it 
> on the web?

Because the web offers a globally deployed, proven infrastructure
for inter-agent communication.

Just because those agents might in some cases use a specialized
language (i.e. a few new, special verbs) doesn't mean they are
not web agents benefiting from the rest of the web architecture.

>> But they *are* separate in very key ways, even if they share a lot of
>> common infrastructure.
>> The Web is, in terms of GET, a network of representations, linked by
>> indirect reference to resources which they represent. One traverses
>> the web by moving from representation to representation to
>> representation.
> So your premise is essentially this: a network of resource 
> descriptions cannot be adequately modelled using representations on 
> the deployed web without a) breaking URI opacity, b) involving header 
> metadata, therefore we need a new HTTP method?

That sounds about right.

>>> I'd be concerned about this as a point of architecture. Web resources
>>> are ideally uniform.
>> This comment makes me wonder if we are using the term 'resource'
>> in the same way.
>> Can you elaborate?
> It's the REST thing that all resources share the same interface. 
> Violating that on the web is a problem for any proposal that wants 
> widespread adoption imo - I take WebDAV and HTTPR as existence proofs.

But REST is about representations. The SW can be very RESTful yet
still have special needs, and hence extensions, that are out of
scope for REST.

>>> So you say, but how is partitioning web resources into SW and W any
>>> less of a hack?
>> Er, because the latter provides explicit, well defined semantics
>> and consistent behavior, which are far more crucial to a SW agent
>> than a Web agent
> Er?
>> (which is more often than not, simply a human).
> Really, a lot of the work I do involves non-human web agents.
>>> If you're an agent capable of asking the question, why can't you
>>> look in the RDF/XML to find out the answer? I thought this stuff was
>>> descriptive.
>> And because there's simply no need
>> to be so sloppy and careless about SW server behavior.
> Pah. Ad-hominem objection - who's being sloppy and careless? Seriously 
> Patrick, you're talking about changing web architecture. You'll have to 
> try harder than sniping when you're asked some questions about it.

I gave some explicit use cases of why this doesn't work in other
posts to this list.

The point is that headers or content negotiation alone cannot
ensure that the RDF/XML you get back is the RDF/XML that will
tell you what you need/want to know.
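
A toy model of the ambiguity, for what it's worth. The vocabulary resource, its contents, and the path are all invented for illustration; the point is that when a resource's own representation is RDF/XML, Accept-based negotiation cannot distinguish "the resource's RDF/XML representation" from "RDF/XML *describing* the resource":

```python
# Invented data: a resource whose ordinary representation is itself
# RDF/XML, plus a separate description of that resource.
representations = {
    "/vocab.rdf": ("application/rdf+xml",
                   "<rdf:RDF>the vocabulary itself</rdf:RDF>"),
}
descriptions = {
    "/vocab.rdf": "<rdf:RDF>metadata about the vocabulary</rdf:RDF>",
}

def get(path, accept):
    """Plain-Web GET with content negotiation."""
    ctype, body = representations[path]
    if accept == ctype:
        # Conneg is satisfied by the representation itself; the server
        # has no way to know the client actually wanted a description.
        return body
    raise LookupError("406 Not Acceptable")

def mget(path):
    """SW-style MGET: unambiguously a request for a description."""
    return descriptions[path]

# The client wanted a description; conneg hands back the resource itself.
print(get("/vocab.rdf", "application/rdf+xml"))
print(mget("/vocab.rdf"))
```

Both responses are valid RDF/XML, so the client cannot even detect from the payload alone that it got the wrong one.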

>>> Or why not a header? Below, I understand you're asking agents to do
>>> that for M*GET resolution, but here objecting to using a header to
>>> begin with as a fragile hack.
>> Headers are fragile because (a) they can get lost in transit and
>> (b) if they are not understood by a server, they can be ignored.
> And if a server doesn't understand MGET?

Then it doesn't. If the server doesn't understand the WebDAV
methods, then you can't interact with it in that fashion. If
it doesn't understand the SW methods, then you can't interact
with it in that fashion.

That's how extensions work, right?
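
The asymmetry is easy to demonstrate with Python's standard library. The sketch below stands in for a plain-Web server with no SW support at all; the "SW-Describe" header is a hypothetical extension header of my own, not URIQA's actual syntax:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class PlainWebHandler(BaseHTTPRequestHandler):
    """A server that knows only GET and has never heard of SW extensions."""

    def do_GET(self):
        # Any extension header (the hypothetical "SW-Describe" below) is
        # silently ignored: the client gets 200 and the ordinary
        # representation, with no signal that its request was misread.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<html>ordinary representation</html>")

    def log_message(self, fmt, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), PlainWebHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# Case 1: extension header -- silently ignored, 200 OK, wrong content.
conn1 = http.client.HTTPConnection("127.0.0.1", port)
conn1.request("GET", "/blargh", headers={"SW-Describe": "true"})
resp1 = conn1.getresponse()
body1 = resp1.read()

# Case 2: extension method -- the handler has no do_MGET, so the
# client gets an explicit 501, an unambiguous failure.
conn2 = http.client.HTTPConnection("127.0.0.1", port)
conn2.request("MGET", "/blargh")
resp2 = conn2.getresponse()
resp2.read()

print(resp1.status, resp2.status)  # 200 vs 501
server.shutdown()
```

An unsupported method fails loudly; an unsupported header fails silently, which is exactly the destructive behavior at issue.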

>> I have. Though you may have missed it. The first incarnation of
>> the Nokia Semantic Web Server took the header approach, and it
>> resulted in precisely the kinds of destructive, unintended behavior
>> I've documented.
> I did miss it. Links?

It was discussed at length on this list around the beginning of
the year. I could dig out the code from my drawer of backups, though
I think that the use cases I've outlined are sufficient to
demonstrate the flaws in that approach.

>>> Well, some might argue that the SW is a pretty narrow usecase for
>>> creating a new verb on the web. WebDAV added new verbs, it didn't
>>> work out so well in retrospect.
>> Really, I use numerous WebDAV enabled servers daily.
> So do I. But my point still stands. The SW is a narrow usecase and 
> you'll need to make a clearer case that the deployed Web is 
> fundamentally incapable of supporting it.

I continue to try to make a clearer case, and will continue to.

And if I or others who share these views fail to convince the
web community, then we SW folks can simply deploy our extended servers
and those who don't care about that "narrow usecase" can just ignore us.

Received on Thursday, 20 November 2003 08:38:42 UTC
