W3C home > Mailing lists > Public > semantic-web@w3.org > April 2010

Re: connections

From: adasal <adam.saltiel@gmail.com>
Date: Mon, 19 Apr 2010 22:37:48 +0100
Message-ID: <r2re8aa138c1004191437sc0e336cew873fb6ae39f40db0@mail.gmail.com>
To: Kingsley Idehen <kidehen@openlinksw.com>
Cc: kkw@mit.edu, Danny Ayers <danny.ayers@gmail.com>, greg masley <roxymuzick@yahoo.com>, Semantic Web <semantic-web@w3.org>, "dbpedia-discussion@lists.sourceforge.net" <dbpedia-discussion@lists.sourceforge.net>
Kingsley,
Thank you very much. I should have the opportunity to follow this up over
this week.

Adam

On 19 April 2010 21:38, Kingsley Idehen <kidehen@openlinksw.com> wrote:

> adasal wrote:
>
>> People are categorising and linking all the time, e.g. Twitter, another
>> recent thread on this list, Delicious, and so on.
>> What do you mean by 'ask for a link'?
>>
> Make a Linked Data SOS call, in some form: a plain English (or other
> language) mail, a tweet, etc., or a URL to nothing (i.e., "I would someday
> like to access Structured Data from here"), whatever. The key thing is making
> a request for the structured data you couldn't find, or the query you would
> like answered (as you did re. tomatoes).
>
>
>  Place what they have done into linked data format and ... ?
>> But there is an obvious problem of the existing data that needs to be
>> scraped and converted, which I think would be the shortest path to linked
>> data on tomato growing.
>>
> The process of discovering, scraping, and transforming is getting more
> automated by the second in a myriad of ways.
>
> Here is an old animation showing how the process of Sponging works, re.
> generation of RDF-based Linked Data from existing Web-accessible resources
> [1].
>
>  For instance, a search for tomato filtered by seed and gardening returns 191
>> results on delicious.
>>
>
> And you can pass that through URIBurner [2][3] and start the process of
> exploring a progressively constructed Linked Data graph via the Descriptor
> Documents it generates from the Del.icio.us links.
>
>
>> How good is the data this returns?
>>
>
> Depends. Data is not only like Electricity; it carries the Subjectivity
> factor of Beauty :-)
>
>  We don't know; maybe it would be better to just use a search engine, which
>> is back to square one.
>>
>
> Of course not; what you need is Search++ (Precision Find across
> progressively assembled structured linked data meshes) [4].
>
>  Supposing that the data is good (that is, well categorised), and that there
>> is some way to manipulate the data through the API to hone it to what is
>> required (tomato seed growing in a particular region of Italy), is the
>> suggestion that this query be saved somewhere as linked data?
>>
> Take a look at my collection of saved query results and queries (the URLs
> are hackable).
>
>  Ideally that query should be able to be run against DBPedia
>> interchangeably.
>>
>
> If it stumbles across the DBpedia Data Space on the way, naturally [5].
>
>  I guess there will be a meeting in the middle.
>>
>
> Always.
>
> Links:
>
> 1.  http://bit.ly/6XZy2Q -- Animation showing how transformation
> middleware will contribute to the burgeoning Web of Linked Data (first
> version of this animation was shown in 2007)
> 2. http://uriburner.com -- a service that generates structured descriptor
> documents for existing Web resources (e.g. Web Pages)
> 3. http://ode.openlinksw.com -- a browser extension or bookmarklet that
> connects you to the public URIBurner instance (which is just a Virtuoso
> Sponger Middleware instance) or your own instance (wherever that may be
> including your private network or desktop etc.)
> 4. http://www.youtube.com/watch?v=YkzghnkuOzA -- example of Precision Find
> that leverages the burgeoning Web of Linked Data (basically query over a LOD
> Cloud Cache instance that grows progressively courtesy of public use of
> items #2 and #3 above + services like PingTheSemanticWeb and Sindice and
> other sources)
> 5. http://bit.ly/aEUdUV -- examples of how the Sponger Middleware is
> incorporated into Virtuoso's SPARQL processor via some tutorials (somewhat
> technical)
> 6. http://www4.wiwiss.fu-berlin.de/bizer/ng4j/semwebclient/ -- Semantic
> Web Client, another example of the kind of crawling I referred to above,
> within the context of a query that delivers "Find" functionality.
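As a concrete illustration of items #2 and #3 above, here is a rough sketch of how a descriptor-document URL for the public URIBurner instance might be constructed. The `/about/{format}/` path pattern is an assumption here, not something confirmed by this thread, so check the service itself before relying on it.

```python
from urllib.parse import quote

def uriburner_descriptor_url(resource_url: str, fmt: str = "html") -> str:
    """Build a URIBurner 'about' URL for an existing Web resource.

    NOTE: the /about/{format}/ path pattern is an assumption about how
    the public URIBurner instance is addressed; verify against the
    service's own documentation.
    """
    # URIBurner proxies the target URL inside its own path, so keep the
    # scheme separator intact and percent-encode everything else that
    # would break the path (spaces, query characters, etc.).
    return f"http://uriburner.com/about/{fmt}/{quote(resource_url, safe=':/')}"

print(uriburner_descriptor_url("http://delicious.com/search?p=tomato+seed"))
```

Feeding each of the 191 Delicious result URLs through such a function is one way to start walking the progressively constructed Linked Data graph mentioned above.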
>
>
> Kingsley
>
>>
>> Adam
>>
>> On 19 April 2010 17:31, Kingsley Idehen <kidehen@openlinksw.com <mailto:
>> kidehen@openlinksw.com>> wrote:
>>
>>
>>    kkw@MIT.EDU <mailto:kkw@MIT.EDU> wrote:
>>
>>        Quoting Kingsley Idehen <kidehen@openlinksw.com
>>        <mailto:kidehen@openlinksw.com>>:
>>
>>
>>            Danny Ayers wrote:
>>
>>                Thanks Kingsley
>>
>>                still not automatic though, is it?
>>
>>            Is it "Automatic or Nothing"?
>>
>>            What's mechanical to Person A might be automatic to Person
>>            B; both are individuals operating with individual context
>>            lenses (world views and skill sets).
>>
>>            What I can say is this: we can innovate around the Outer
>>            Join, i.e., not finding what you seek triggers a quest for
>>            missing data discovery and/or generation. Now, that's
>>            something the Web as a discourse medium can actually
>>            facilitate, once people grok the process of adding
>>            Structured Data to the Web.
>>
>>
>>            Kingsley
>>
>>
>>        Hmmm...Has anyone thought about some sort of LinkIt service where
>>        non-programmers could identify things they're linking manually
>>        and ask for a
>>        link?
>>
>>    We are gradually moving to things like this under the general
>>    banner of Annotations and Data Syncs.
>>
>>    Ironically, it's 2010 and we still don't even have DDE (a 1980s
>>    technology) re. data change notification and subscription.
>>
>>    Anyway, these things are coming: PubSubHubbub applied to linked
>>    data, annotations (simply UIs for 3-Tuple conversations), etc.
>>
>>
>>
>>        Would that open the door for identifying those that could be
>>        auto-generated and those that could build social pressure for
>>        SemWeb
>>        annotations and data owner participation?   -k
>>
>>
>>    I call this Data Spaces and Data Driven Discourse; it's all coming :-)
>>
>>
>>    BTW - Twitter may also help accelerate comprehension and
>>    appreciation of what you seek. Many sources of solutions are
>>    taking shape.
>>
>>    Very good point, by the way!
>>
>>
>>    Kingsley
>>
>>
>>
>>
>>                On 18 April 2010 22:38, Kingsley Idehen
>>                <kidehen@openlinksw.com
>>                <mailto:kidehen@openlinksw.com>> wrote:
>>
>>                    Danny Ayers wrote:
>>
>>                        Kingsley, how do I find out when to plant
>>                        tomatoes here?
>>
>>
>>                    And can you find the answer to that in Wikipedia via
>>                    <http://en.wikipedia.org/wiki/Tomato>? Of course not.
>>
>>                    Re. DBpedia: if you have an Agriculture-oriented
>>                    data space (ontology and
>>                    instance data) that references DBpedia (via a
>>                    linkbase), then you will have a
>>                    better chance of an answer, since we would have
>>                    temporal properties and
>>                    associated values in the Linked Data Space (one
>>                    that we can mesh with
>>                    DBpedia, even via SPARQL).
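A meshing query of the sort described above might look roughly like the following sketch. The `ex:` agriculture vocabulary is invented here purely for illustration; only the DBpedia endpoint, the `SERVICE` clause (standard SPARQL 1.1 federation), and `rdfs:label` are real.

```python
# A sketch of "meshing" a local agriculture data space with DBpedia via
# SPARQL 1.1 federation. The ex: vocabulary is hypothetical; the DBpedia
# endpoint and the rdfs: term are real.
mesh_query = """
PREFIX ex:   <http://example.org/agriculture#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?crop ?label ?month WHERE {
  # Local (hypothetical) agriculture data space: temporal properties.
  ?crop ex:plantingMonth ?month ;
        ex:sameAs ?dbpediaResource .

  # Mesh with DBpedia over the wire for human-readable labels.
  SERVICE <http://dbpedia.org/sparql> {
    ?dbpediaResource rdfs:label ?label .
    FILTER (lang(?label) = "en")
  }
}
"""
print(mesh_query)
```

Running this against a local SPARQL 1.1 processor would pull the labels from DBpedia at query time while keeping the temporal data local.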
>>
>>                    Kingsley
>>
>>                        On 17 April 2010 19:36, Kingsley Idehen
>>                        <kidehen@openlinksw.com
>>                        <mailto:kidehen@openlinksw.com>> wrote:
>>
>>
>>                            Danny Ayers wrote:
>>
>>
>>                                On 16 April 2010 19:29, greg masley
>>                                <roxymuzick@yahoo.com
>>                                <mailto:roxymuzick@yahoo.com>> wrote:
>>
>>
>>
>>                                    What I want to know is: does
>>                                    anybody have a method yet to
>>                                    successfully extract data from
>>                                    Wikipedia using DBpedia? If so,
>>                                    please email the procedure
>>                                    to greg@masleyassociates.com
>>                                    <mailto:greg@masleyassociates.com>
>>
>>
>>
>>
>>                                That is an easy one: the URIs are
>>                                similar, so you can get the pointer
>>                                from DBpedia and get into Wikipedia.
>>                                Then you do your stuff.
>>
>>                                I'll let Kingsley explain.
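The URI correspondence Danny describes can be sketched in a few lines: a DBpedia resource URI and its source Wikipedia article share the same local name, so converting between them is pure string manipulation.

```python
# A DBpedia resource URI and its source Wikipedia article share the same
# local name, e.g. dbpedia.org/resource/Tomato <-> en.wikipedia.org/wiki/Tomato.
DBPEDIA = "http://dbpedia.org/resource/"
WIKIPEDIA = "http://en.wikipedia.org/wiki/"

def dbpedia_to_wikipedia(dbpedia_uri: str) -> str:
    if not dbpedia_uri.startswith(DBPEDIA):
        raise ValueError("not a DBpedia resource URI")
    return WIKIPEDIA + dbpedia_uri[len(DBPEDIA):]

print(dbpedia_to_wikipedia("http://dbpedia.org/resource/Tomato"))
# → http://en.wikipedia.org/wiki/Tomato
```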
>>
>>
>>
>>
>>                            Greg,
>>
>>                            Please add some clarity to your quest.
>>
>>                            The DBpedia project comprises:
>>
>>                            1. Extractors for converting Wikipedia
>>                            content into Structured Data
>>                            represented in a variety of RDF-based data
>>                            representation formats.
>>                            2. A live instance with the extracts from #1
>>                            loaded into a DBMS that
>>                            exposes a
>>                            SPARQL endpoint (which lets you query over
>>                            the wire using the SPARQL query
>>                            language).
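A minimal sketch of item #2, i.e. querying DBpedia's public SPARQL endpoint over the wire. The SPARQL Protocol sends the query text in a `query` parameter; the `format` parameter is a Virtuoso convention for selecting a result serialisation. The query itself is only illustrative (though `dbo:abstract` is a real DBpedia ontology property).

```python
from urllib.parse import urlencode

# An illustrative query: fetch the English abstract for Tomato.
QUERY = """
PREFIX dbo: <http://dbpedia.org/ontology/>

SELECT ?abstract WHERE {
  <http://dbpedia.org/resource/Tomato> dbo:abstract ?abstract .
  FILTER (lang(?abstract) = "en")
}
"""

def sparql_get_url(endpoint: str, query: str) -> str:
    # SPARQL Protocol over HTTP GET: the query goes in the 'query'
    # parameter; 'format' asks Virtuoso for JSON result bindings.
    params = urlencode({
        "query": query,
        "format": "application/sparql-results+json",
    })
    return f"{endpoint}?{params}"

print(sparql_get_url("http://dbpedia.org/sparql", QUERY))
```

Fetching the printed URL with any HTTP client returns the result bindings as JSON.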
>>
>>                            There is a little more, but I need
>>                            additional clarification from you.
>>
>>
>>                            --
>>                            Regards,
>>
>>                            Kingsley Idehen       President & CEO
>>                            OpenLink Software     Web:
>>                            http://www.openlinksw.com
>>                            Weblog:
>>                            http://www.openlinksw.com/blog/~kidehen
>>
>>                            Twitter/Identi.ca: kidehen
>>
>>                    --
>>                    Regards,
>>
>>                    Kingsley Idehen       President & CEO OpenLink
>>                    Software     Web:
>>                    http://www.openlinksw.com
>>                    Weblog: http://www.openlinksw.com/blog/~kidehen
>>
>>                    Twitter/Identi.ca: kidehen
>>
>>            --
>>            Regards,
>>
>>            Kingsley Idehen          President & CEO OpenLink Software
>>                Web: http://www.openlinksw.com
>>            Weblog: http://www.openlinksw.com/blog/~kidehen
>>
>>            Twitter/Identi.ca: kidehen
>>
>>    --
>>    Regards,
>>
>>    Kingsley Idehen       President & CEO OpenLink Software     Web:
>>    http://www.openlinksw.com
>>    Weblog: http://www.openlinksw.com/blog/~kidehen
>>    Twitter/Identi.ca: kidehen
>>
>
> --
>
> Regards,
>
> Kingsley Idehen       President & CEO OpenLink Software     Web:
> http://www.openlinksw.com
> Weblog: http://www.openlinksw.com/blog/~kidehen
> Twitter/Identi.ca: kidehen
>
Received on Monday, 19 April 2010 21:38:55 UTC
