Re: RDF query and Rules - my two cents

Patrick Stickler wrote:
> 
> On Thursday, Nov 20, 2003, at 11:49 Europe/Helsinki, ext Bill de hÓra  
> wrote:
>>
>> I don't understand the distinction above between the name of a
>> service interface and a URI. What's the difference?
>>
> 
> I.e. the difference between having to know http://example.com/blargh
> (about which you want a description) and being able to simply ask
> 
> [..]

Ok. You want URI opacity.
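
For concreteness, a sketch in Python of the distinction as I read it - 
the "/blargh-meta" suffix below is a made-up convention for 
illustration, not anything anyone has proposed:

    import http.client

    # Opacity-violating: the client manufactures a second URI from the
    # first by a naming convention it has to know in advance. The
    # "/blargh-meta" suffix is invented here purely for illustration.
    conn = http.client.HTTPConnection("example.com")
    conn.request("GET", "/blargh-meta")
    description = conn.getresponse().read()

    # Opaque (the MGET proposal, as I understand it): the question is
    # put to the original URI itself, and the server decides where the
    # description comes from.
    conn = http.client.HTTPConnection("example.com")
    conn.request("MGET", "/blargh")   # non-standard method
    description = conn.getresponse().read()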


>> I think I would prefer a name other than MGET (maybe DESCRIBE)
> 
> Call it BLARGH for all I care, as long as the semantics are clear.

I'm glad you appreciate the value of text-based protocols.



>> Presumably if the method is actually useful, it's useful on both webs.
> 
> I don't see that as a requirement. These methods are specifically
> to address *SW* functionality, so I don't see how they have to have
> any relevance whatsoever to the Web.

Yikes. Remind me again, what relevance does the SW have to the web? 
And if the method has no web relevance, why do we want to run it 
on the web?



> But they *are* separate in very key ways, even if they share a lot of
> common infrastructure.
> 
> The Web is, in terms of GET, a network of representations, linked by
> indirect reference to resources which they represent. One traverses
> the web by moving from representation to representation to
> representation.

So your premise is essentially this: a network of resource 
descriptions cannot be adequately modelled using representations on 
the deployed web without a) breaking URI opacity or b) involving 
header metadata, and therefore we need a new HTTP method?
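
For concreteness again, the deployed-web route I have in mind is 
something like the sketch below, assuming nothing more exotic than a 
server willing to negotiate an RDF/XML representation:

    import http.client

    # Deployed-web route: plain GET on the resource, asking for an
    # RDF/XML representation via content negotiation. Whether that
    # counts as "the description" rather than "a representation" is
    # exactly the point in dispute; the mechanics need nothing beyond
    # HTTP as deployed.
    conn = http.client.HTTPConnection("example.com")
    conn.request("GET", "/blargh",
                 headers={"Accept": "application/rdf+xml"})
    resp = conn.getresponse()
    if resp.status == 200:
        content_type = resp.getheader("Content-Type", "")
        if content_type.startswith("application/rdf+xml"):
            description = resp.read()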


>> I'd be concerned about this as a point of architecture. Web resources
>> are ideally uniform.
>>
> 
> This comment makes me wonder if we are using the term 'resource'
> in the same way.
> 
> Can you elaborate?

It's the REST constraint that all resources share the same uniform 
interface. Violating that on the web is a problem for any proposal 
that wants widespread adoption imo - I take WebDAV and HTTPR as 
existence proofs.



>> So you say, but how is partitioning web resources into SW and W any
>> less of a hack?
>>
> 
> Er, because the latter provides explicit, well defined semantics
> and consistent behavior, which are far more crucial to a SW agent
> than a Web agent 

Er?


>(which is more often than not, simply a human).

Really, a lot of the work I do involves non-human web agents.



>> If you're an agent capable of asking the question, why can't you
>> look in the RDF/XML to find out the answer? I thought this stuff was
>> descriptive.
>>
> 
> And because there's simply no need
> to be so sloppy and careless about SW server behavior.

Pah. Ad-hominem objection - who's being sloppy and careless? 
Seriously Patrick, you're talking about changing web architecture. 
You'll have to try harder than sniping when you're asked some 
questions about it.


>> Or why not a header? Below, I understand you're asking agents to do
>> that for M*GET resolution, but here objecting to using a header to
>> begin with as a fragile hack.
>>
> 
> Headers are fragile because (a) they can get lost in transit and
> (b) if they are not understood by a server, they can be ignored.

And if a server doesn't understand MGET?
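
To pin down the mechanics being relied on here, a sketch in Python - 
the extension header name is invented for illustration, and behaviour 
of real servers will vary:

    import http.client

    # Unknown method: HTTP says a server that doesn't implement it
    # should answer 501 Not Implemented (or 405), so the client at
    # least finds out it wasn't understood.
    conn = http.client.HTTPConnection("example.com")
    conn.request("MGET", "/blargh")
    print(conn.getresponse().status)    # typically 501 or 405

    # Unknown header: a server that doesn't recognise it just ignores
    # it and serves the usual representation with a 200 - the silent
    # failure mode being objected to. "X-Describe-Resource" is made up
    # for illustration.
    conn = http.client.HTTPConnection("example.com")
    conn.request("GET", "/blargh",
                 headers={"X-Describe-Resource": "yes"})
    print(conn.getresponse().status)    # typically 200, header ignored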



> I have. Though you may have missed it. The first incarnation of
> the Nokia Semantic Web Server took the header approach, and it
> resulted in precisely the kinds of destructive, unintended behavior
> I've documented.

I did miss it. Links?


>> Well, some might argue that the SW is a pretty narrow usecase for
>> creating a new verb on the web. WebDAV added new verbs, it didn't
>> work out so well in retrospect.
>>
> 
> Really, I use numerous WebDAV enabled servers daily. 

So do I. But my point still stands. The SW is a narrow use case and 
you'll need to make a clearer case that the deployed Web is 
fundamentally incapable of supporting it.


Bill de hÓra

Received on Thursday, 20 November 2003 06:16:31 UTC