
Re: Show me the money - (was Subjects as Literals)

From: Paul Gearon <gearon@ieee.org>
Date: Fri, 2 Jul 2010 06:09:18 -0700
Message-ID: <AANLkTinxcF7pzBh6E2VoWPFwDZ6xTAHimFPs2_-1mrjT@mail.gmail.com>
To: bnowack@semsol.com
Cc: Semantic Web <semantic-web@w3.org>, Linked Data community <public-lod@w3.org>
On Fri, Jul 2, 2010 at 2:01 AM, Benjamin Nowack <bnowack@semsol.com> wrote:
>
> On 01.07.2010 22:44:48, Pat Hayes wrote:
>>Jeremy, your argument is perfectly sound from your company's POV, but
>>not from a broader perspective. Of course, any change will incur costs
>
> Well, I think the "broader perspective" that the RDF workshop
> failed to consider is exactly companies' costs and spec
> marketability. The message still sent out is a crazy (or
> "visionary" ;) research community creating spec after spec, with
> no stability in sight.

Not being a recipient of the message, I'm not in an appropriate
position to judge there. However, I *can* say that the workshop did
indeed consider companies' costs and spec marketability. There were
numerous proposals that had some interest, but were ultimately
ignored, with cost being the single biggest reason.

> And with the W3C process not really
> encouraging the quick or full refactoring of existing specs (like
> getting rid of once recommended features), each spec adds *new*
> features

A lot of the discussion at the workshop was about *removing* features.
A number of things have revealed themselves as a bad idea, and the
community in general wants to be rid of them. However, no one wants to
break existing systems, so a notion of "weakly deprecating" these
features was introduced instead.

Similarly, for new features there was a lot of discussion about how
these could be introduced without breaking anything. In a number of
cases, proposals were abandoned simply because they would impact
existing systems too much.

Where new features did receive support, it was because of widespread
deployment despite the lack of a standard. Turtle and named graphs
are the obvious examples here.

The message that came out may have been quite different to this, but I
think that the majority of the workshop was extremely conservative.
Indeed, there were representatives from several companies and open
source implementors who were there specifically to make sure that
nothing too radical would receive serious attention.

> and increases the overall complexity of identifying
> market-ready Recs: RIF seems to be a replacement for OWL, but
> OWL2 was only just Rec'd. Which should I implement? RDFa 1.1 and
> SPARQL 1.1 both look like implementation nightmares to me. Current
> RDF stores can't even be used for semantic feed readers because of
> poor "ORDER BY DESC(?date)" implementations, but the group is
> already working on query federation. RDFa is becoming the new
> RSS 1.0, with each publisher triggering the development of
> dedicated parsers (one for SearchMonkey data, one for RichSnippets,
> one for Facebook's OGP, etc., but a single interoperable one? Very
> hard work.) Something is wrong here. Featuritis is the reason for
> the tiny number of complete toolkits. It's extremely frustrating
> when you know in advance that you won't be able to pass the tests
> *and* have your own (e.g. performance) needs covered. Why start at
> all then?
>
> The W3C groups still seem to believe that syntactic sugar is
> harmless. We suffer from spec obesity, badly. If we really want to
> improve RDF, then we should go, well, for a low-carb layer cake.
> Or better, several new ones. One for each target audience. KR pros
> probably need OWL 2.0 *and* RIF, others may already be amazed by
> "scoped key-value storage with a universal API" (aka triples + SPARQL).
> These groups are equally important, but have to be addressed
> differently.

I think that part of the problem is the spec process itself. For
instance, RDF 1.0 has too many features that we don't want (e.g.
reification), but how does something like that get removed? RDF 1.0
can't be modified at this point. All we can do is write something new
that builds on previous versions. This is why features can only be
"deprecated" rather than removed. So even if RDF 1.1 doesn't introduce
any new features at all, and only removes things (through deprecation),
it still adds to the document bloat.

If the process allowed documents to be culled and reworked, then I
think they would be.

> Our problem is not lack of features (native literal subjects? c'mon!).

You'll note that the group specifically said that this wasn't worth working on.

(Incidentally, that's not a yes or a no. The group couldn't make those
decisions. The process was simply about identifying whether there is
enough interest in updating RDF, and what should be worked on if
there is.)

> It is identifying the individual user stories in our broad community
> and marketing respective solution bundles. The RDFa and LOD folks
> have demonstrated that this is possible. Similar success stories are
> probably RIF for the business rules market, OWL for the DL/KR sector,
> and many more. (Mine is agile, flexi-schema website development.)
>
> RDF "Next Steps" should be all about scoped learning material and
> deployment. There were several workshop submissions (e.g. by Jeremy,
> Lee, and Richard) that mentioned this issue, but the workshop outcome
> seems to be purely technical. Too bad.

I don't think that the outcome was purely technical, but the report
may appear that way.

Overall, I got the sense that the outcome of the workshop was to:

- deprecate several features in RDF/RDFS that have proven to be a bad idea.
- standardize common practice.
- NOT change anything in a way that would break existing systems.

There were lots of really worthwhile proposals that didn't seem to
make the recommendations cut simply because they didn't meet these
criteria.

Regards,
Paul Gearon
Received on Friday, 2 July 2010 13:09:53 UTC