Re: Hackers - Re: Schema.org considered helpful

Hi,
I haven't had time to follow the link.
I expect there is an issue here of how to think about a semantic web.
I can see Google is about ruthlessly exploiting the atomisation of the
Bazaar. Of course, it does so from within the walls of its own
Cathedral. Recall is in inverse proportion to precision.
I think web behaviours influence our own mental behaviours. We respond
to our environment, and hints from that environment are assimilated
very quickly. The web is an absorbing environment (for important
reasons not discussed here).
I rely on Google very happily. It brings back fragments, sometimes
random, often according to rules I can only half guess at. This is how
it handles the recall/precision trade-off.
The SemWeb should be different. It is machine-to-machine. But there is
an ultimate human arbiter of the relevance and quality of data for
human consumption. The SemWeb needs a series of a prioris: the
ontologies.
It seems there are two human-arbiter questions:
1. What data would I like to see? That is, describe a coherent package
of concepts.
2. Describe an ontology as a package of concepts.
In other words, concept packages should be able to function
independently of attachment to any ontology, and there needs to be a
function to translate between the two. Ontology is already too
low-level.
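
To make that concrete, here is a toy sketch in Python (every name in
it is invented for illustration): the package names its concepts
abstractly, and a separate binding table ties them to one concrete
ontology, here schema.org. A second ontology would just mean a second
table, with the package itself untouched.

    # Toy sketch: a concept package that names its concepts
    # abstractly, plus a translation table binding them to one
    # concrete ontology. All identifiers here are invented.
    CONCEPT_PACKAGE = {"person", "employer", "workplace"}

    # One possible binding; another ontology would supply another
    # table, leaving the concept package unchanged.
    SCHEMA_ORG_BINDING = {
        "person": "https://schema.org/Person",
        "employer": "https://schema.org/worksFor",
        "workplace": "https://schema.org/Organization",
    }

    def translate(concept, binding):
        # Resolve an abstract concept to a term in a chosen ontology.
        if concept not in CONCEPT_PACKAGE:
            raise KeyError("%r is not in this concept package" % concept)
        return binding[concept]

    # e.g. translate("person", SCHEMA_ORG_BINDING)
    #      -> "https://schema.org/Person"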
It is impossible to characterise in advance what people may be able to
agree upon as concept packages, i.e. as data aims.
What people agree on depends on all the mixes of any human situation.
Is there a base stratum of factors, a common field? I don't know, but
I'm sure work has been done in the area. At its simplest this is the
relation between beliefs, hopes, and desires, which can never be fully
known, yet which intersect in some group such that an agreed model can
be made. Models aspire to this; groups create rules to facilitate it.
This is the responsibility the SemWeb has:
1. to identify such means of modelling, and
2. to mediate (to show what it takes, what it is like to mediate) the
movement between a model and some norms.
Here I mean behavioural norms, not logical rules, so they need to be
established case by case. (WebID used to keep unfriendly crawlers out
is a good, simple example; a sketch follows below.)
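
As a minimal sketch of that example, assuming WebID-over-TLS and the
Python rdflib library: the server reads the crawler's claimed WebID
URI and public-key modulus out of its client certificate (the TLS
handshake itself is taken as already done), dereferences the profile,
and applies a local behavioural rule. The whitelist and helper names
are invented for illustration.

    # Minimal sketch of WebID-based crawler gatekeeping. Assumes the
    # client's claimed WebID URI and public-key modulus have already
    # been extracted from its TLS certificate.
    from rdflib import Graph, Namespace, URIRef

    CERT = Namespace("http://www.w3.org/ns/auth/cert#")

    # Hypothetical behavioural norm, maintained case by case per site.
    ALLOWED_AGENTS = {
        URIRef("https://example.org/agents/friendly-crawler#me"),
    }

    def webid_allows(claimed_webid, cert_modulus_hex):
        webid = URIRef(claimed_webid)
        if webid not in ALLOWED_AGENTS:
            return False
        # Dereference the claimed WebID profile document.
        profile = Graph()
        profile.parse(claimed_webid)
        # Accept only if a key published in the profile matches the
        # modulus presented in the client certificate.
        for key in profile.objects(webid, CERT.key):
            for modulus in profile.objects(key, CERT.modulus):
                if str(modulus).lower() == cert_modulus_hex.lower():
                    return True
        return False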
It is only with this in mind that anything of interest can be created.
Note: this is not creating something in the Bazaar of random market
forces. And, as with all heavily patterned behaviour, it is very
expensive in effort. It also does without the background data
generation Google gets as we traverse their graph. No gleaning off
users. Radically different.

Best

Adam

On 17/06/2011, Henry Story <henry.story@bblfish.net> wrote:
>
> On 17 Jun 2011, at 19:27, adasal wrote:
>
>> That said the hacker is a various beast,
>
> Indeed, hackers are not angels. But the people on this list should get back
> to hacking or work together with open source projects to get initial minimal
> working pieces embedded there. WebID is one; foaf is another, pingback,
> access control, ...
> Get the really simple pieces working.
>
>> and I wonder if this sort of thing can really be addressed without
>> overarching political/ethical/ideological concerns. It's tough.
>
> It all fits together really nicely. I gave a talk on the philosophy of the
> Social Web if you are interested.
>  http://www.slideshare.net/bblfish/philosophy-and-the-social-web-5583083
>
> Hackers tend to be engineers with a political attitude, so they are more
> receptive to the bigger picture. But solving the big picture problem should
> have an easy entry cost if we want to get it going.
>
> I talked to the BBC but they have limited themselves to what they will do in
> the Social Web space as far as profile hosting goes. Again, I'd start small.
> Facebook started in universities not that long ago.
>
> Henry
>
>
> Social Web Architect
> http://bblfish.net/
>
>

-- 
Sent from my mobile device

Received on Wednesday, 22 June 2011 12:40:33 UTC