- From: Danny Ayers <danny.ayers@gmail.com>
- Date: Sat, 30 Jan 2010 01:29:11 +0100
- To: John Panzer <jpanzer@google.com>
- Cc: Melvin Carvalho <melvincarvalho@gmail.com>, Semantic Web <semantic-web@w3.org>
On 29 January 2010 18:31, John Panzer <jpanzer@google.com> wrote:

> That's an interesting use case. B would be a service providing additional
> assertions back to A, right? So instead of the signer being an identifier
> for a human, it would be an identifier for a service.

Yeah, exactly so.

> And A could re-syndicate B's additions back out as-is ("B says this") and/or
> it could just incorporate them directly into its store and send out
> resulting updates.

Ditto.

> (Note: In Atom-land, we're leaning on PubSubHubbub as the protocol for both
> enabling real-time feed data push and computing efficient diffs to send to
> clients. Diffs are entry-based, so either an entire entry is sent or not,
> but clients only see the new/changed entries rather than the entire feed
> when they're pushed the data. This may not be granular enough for RDF
> though.)

To be honest I'm not sure how things stand in RDF-land on that specific front (SPARQL 1.1, with its update facilities still under discussion, is something I suspect a lot of people are waiting on). PubSubHubbub seems a very rational approach to bidirectional comms, but there is at least one issue in this context. Both PubSubHubbub and Salmon are focussed on the literal wordiness of human-expressed text (with all the benefits of good old semantic markup - has anyone claimed gosh yet?). With links.

The drive of the linked data stuff (I almost feel embarrassed to call it Semantic Web these days) is to do the same thing for entities that exist outside the Web: named resources and the relationships between them. RSS syndication hit one sweet spot - a reflection of humans typing stuff - but we still have a lot more stuff on computers than blog posts. All the social stuff is begging to get tied together through techniques we know work. And there's all the, er, data about things in the real world that we can talk about in databases but that hasn't actually been expressed on the Web.
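[A toy sketch, not part of either spec, of the granularity point quoted above: a PubSubHubbub-style push is entry-granular (a changed Atom entry travels whole), whereas an RDF store can diff at the level of individual triples. The feed entries and triples below are invented examples.]

```python
# Hypothetical illustration: entry-level diffs (Atom/PubSubHubbub style)
# versus triple-level diffs (RDF style).

def entry_diff(old_feed, new_feed):
    """Entry-granular: any entry that changed at all is resent whole."""
    return [entry for eid, entry in new_feed.items()
            if old_feed.get(eid) != entry]

def triple_diff(old_graph, new_graph):
    """Triple-granular: only the individual statements that changed."""
    added = new_graph - old_graph
    removed = old_graph - new_graph
    return added, removed

old_feed = {"e1": {"title": "Hello", "body": "first post"}}
new_feed = {"e1": {"title": "Hello", "body": "first post, edited"}}
# The whole entry travels, even though only the body changed:
changed = entry_diff(old_feed, new_feed)

old_graph = {("ex:post1", "dc:title", "Hello"),
             ("ex:post1", "ex:body", "first post")}
new_graph = {("ex:post1", "dc:title", "Hello"),
             ("ex:post1", "ex:body", "first post, edited")}
# Only the changed statements travel:
added, removed = triple_diff(old_graph, new_graph)
```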
The linked data approach pulls all of that into the same techniques and strategies we know work for the Web. There's a huge opportunity to reuse content-oriented material alongside other kinds of known data. (Sorry if my language is a bit weird - there's a Tony Blair puppet show on the TV in the background.)

Ok, presumably T. Blair has a Wikipedia entry. But how do you get to the place he lives (Dubai is my guess), or to how much he's helped with the Middle East conflict? There is straight data to answer a lot of the direct questions, and it's far more accessible than human language in a (no matter how well) syndicated blog post. Google is an unreliable stopgap.

Back to optimism: links. Get as many URLs in there as possible. I'd like to charm you into RDF, but I don't need to - things linked together work altogether. Stuff like Salmon and PubSubHubbub (you asked Rohit about that..? :) working over HTTP works. Joining it together is the challenge... oh yes, and building compelling applications...

(btw, I live in Atom-land sometimes too - check the contribs in the format spec :)

Cheers,
Danny.

--
http://danny.ayers.name
Received on Saturday, 30 January 2010 00:29:45 UTC