W3C home > Mailing lists > Public > www-rdf-interest@w3.org > March 2004

Re: RULE vs MGET

From: Patrick Stickler <patrick.stickler@nokia.com>
Date: Mon, 15 Mar 2004 12:00:17 +0200
Message-Id: <8F96A205-7667-11D8-A711-000A95EAFCEA@nokia.com>
Cc: www-rdf-interest@w3.org
To: "ext Phil Dawes" <pdawes@users.sourceforge.net>


On Mar 12, 2004, at 16:25, ext Phil Dawes wrote:

>
>
> Hi Patrick,
>
> Patrick Stickler writes:
>>
>>
>>
>> I appreciate your point of view, but I think you overstate
>> the feasibility of adoption.
>>
>> Users may not be competent to write a web server or web browser,
>> but they can choose one implementation over another, based on
>> which provides the most utility or ROI.
>>
>
> Only if there is a market that satisfies this choice.
>
>> Furthermore, even though most folks maintaining the content of
>> the web do not understand the underlying infrastructure, they
>> tend to employ experts who do, and who allow them to focus on
>> the creation and management of content, not on the nuts-n-bolts
>> of how that's done.
>>
>
> The majority of content providing users don't employ experts
> directly. Instead they use generic hosting service packages to serve
> their content.
>
>> Yet those of us who actually *do* deal with the nuts-n-bolts
>> of how that is done, and strive to make life easy and maximally
>> productive for those creating and maintaining content, care about
>> things such as genericity, scalability, modularity, flexibility,
>> extensibility and all kinds of other 'ity's.
>>
>
> Apologies, but I think you're missing the point. IMHO it's not about
> whether a user can write a webserver or even appreciate the technology
> - it's about whether he/she is able to use the technology at an
> appropriate price.
>
> I've been experimenting with URIQA at work, and I really like it. It's
> by far the cleanest solution I've seen to the term-description
> discovery problem. It works really well in a corporate intranet
> environment, and really simplifies the creation of semantic-web
> agents. (e.g. we have one that dynamically handles monitoring alert
> escalation by discovering foaf SMS phone numbers and email addresses)

Cool!

>
> But I suspect that, like the original web, the creation of the
> 'internet' semantic web will be driven not by corporations, but by a
> bunch of enthusiastic amateurs experimenting with cool stuff in their
> spare time. Thanks to the web, this connected bunch of amateurs is
> very much bigger than 10 years ago and represents an opportunity to
> bootstrap a SW in a short amount of time (given an appropriately
> killer application).
>
> Unfortunately if we can't build tools for this early adopter group to
> experiment with using their existing hosting providers, then we can't
> tap into this network.
> And that's the problem: I can't build a URIQA CGI solution that
> somebody can ftp to their web space to provide descriptions of their
> terms.

Fair enough. The challenge, it seems, is to provide web server
implementations to "basic" users that allow them to define their own
descriptions without writing code -- yet at the same time provide
for the scalable management of resource descriptions by very large
information providers.
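For illustration, under a header-style fallback even a tiny CGI script
could serve per-term descriptions over plain GET -- the kind of thing a
"basic" user really could ftp to their web space. This is only a sketch:
the query-parameter name and one-file-per-term layout are my own
assumptions, not anything defined by URIQA.

```python
#!/usr/bin/env python3
# Hedged sketch: a minimal CGI script serving stored RDF/XML
# descriptions via plain GET. The "term" query parameter and the
# descriptions/<term>.rdf layout are invented for illustration.
import os
import sys
from urllib.parse import parse_qs

DESC_DIR = "descriptions"  # assumed: one RDF/XML file per term

def describe(term):
    """Return the stored RDF/XML description for a term, or None."""
    # Reject anything that could escape the descriptions directory.
    if not term or not term.replace("-", "").replace("_", "").isalnum():
        return None
    path = os.path.join(DESC_DIR, term + ".rdf")
    if not os.path.isfile(path):
        return None
    with open(path, encoding="utf-8") as f:
        return f.read()

def main():
    qs = parse_qs(os.environ.get("QUERY_STRING", ""))
    term = qs.get("term", [""])[0]
    body = describe(term)
    if body is None:
        sys.stdout.write("Status: 404 Not Found\r\n\r\n")
    else:
        sys.stdout.write("Content-Type: application/rdf+xml\r\n\r\n")
        sys.stdout.write(body)

if __name__ == "__main__":
    main()
```

Deployed as, say, /cgi-bin/describe?term=Person, it needs nothing from
the hosting provider beyond ordinary CGI support.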

It may be that several approaches will have to compete, and
the best approach will become evident from real-world use.

To that end, I'm considering making the reference implementation
for URIQA a "hybrid": it would support both the new methods and the
special-header approach, whereby a client first issues a HEAD request
to learn the URI of the explicitly identified description, and then
accesses that description using GET/PUT/etc.

Agents can then decide...
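A client-side sketch of that decision might look like the following.
Only MGET itself comes from URIQA; the response header name here is my
own invented placeholder, not part of any spec.

```python
# Hedged sketch of the "hybrid" flow described above: try the new MGET
# method first, and fall back to the special-header approach (HEAD to
# discover the description's URI, then a plain GET). The header name
# below is an illustrative assumption, not part of the URIQA spec.
import http.client
from urllib.parse import urlparse

DESC_HEADER = "URIQA-Description-URI"  # hypothetical header name

def _fetch(method, uri):
    """Issue a single HTTP request and return the open response."""
    parts = urlparse(uri)
    conn = http.client.HTTPConnection(parts.netloc)
    conn.request(method, parts.path or "/")
    return conn.getresponse()

def get_description(resource_uri):
    """Return a resource description, preferring MGET over the fallback."""
    resp = _fetch("MGET", resource_uri)
    if resp.status == 200:
        return resp.read()  # server speaks the new method directly
    resp.read()  # drain the error body

    # Fallback: HEAD exposes the description's own URI in a header,
    # and the description is then fetched with an ordinary GET.
    head = _fetch("HEAD", resource_uri)
    head.read()
    desc_uri = head.getheader(DESC_HEADER)
    if desc_uri is None:
        return None  # server supports neither convention
    return _fetch("GET", desc_uri).read()
```

An agent that gets a description back need not care which of the two
paths succeeded.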

Patrick

--

Patrick Stickler
Nokia, Finland
patrick.stickler@nokia.com
Received on Monday, 15 March 2004 05:00:37 UTC
