Re: Using "Punning" to Answer httpRange-14

On 5/14/12 8:15 AM, Michiel de Jong wrote:
> On Mon, May 14, 2012 at 12:53 PM, Kingsley Idehen
> <kidehen@openlinksw.com>  wrote:
>> On 5/14/12 6:20 AM, Michiel de Jong wrote:
>>> no. if a vocabulary has not already thought about which one of the 4
>>> options a certain property means, then it was broken.
>>
>> The Web is Broken. The Web is Alive. That's why it works. You can "sense" or
>> "perceive" via different "context lenses".
> [...]
>> Yes, via your "context lenses" it closes the HttpRange-14 discussion, what
>> about the "context lenses" of others? Look, The Web has many aspects to it,
>> and the key is to make these aspects manifest unobtrusively.
>>
>> The Web doesn't work because a specific vocabulary has been *knowingly*
>> adopted. It works because the architecture is dexterous and accommodating to
>> different world views.
> okay, so let's see what happens if punning people, 303 people, and
> hash uri rule people all write both information providers and
> information consumers.
>
> abbreviating 'subject-content, object-sense' as 'cs', and likewise for
> cc, sc, and ss.
>
> for the punning people, each vocabulary would choose 1 of 4 types (cc,
> cs, sc, ss) for each link relation. They will build both their servers
> and clients with this assumption.
>
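
To make the punning declaration concrete, here is a minimal sketch (Python;
the "potplants" vocabulary and its relation names are hypothetical, purely
for illustration) of what "choose 1 of 4 types for each link relation"
amounts to:

    # Hypothetical "potplants" vocabulary, typed once by its author under
    # the punning approach. The first letter says whether the subject of
    # the relation denotes the document's content (c) or the thing/sense
    # (s) it is about; the second letter says the same for the object.
    POTPLANTS_RELATION_TYPES = {
        "license":    "cs",  # document content linked to the license itself
        "author":     "cs",  # document content linked to a person
        "stylesheet": "cc",  # both ends are documents
        "sameAs":     "ss",  # both ends are things
    }

Servers and clients for that vocabulary would then be built against this
table, exactly as described above.
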
> For the 303 people, their information servers will always give a 303
> first if you follow a URL that's used in a cs or ss link. And for the
> hash uri rule people, the URIs they put into cs or ss links will
> always have a # in them. Neither of these practices break the other
> two systems, so that's cool, we're all still compatible then.
>
> Now the 303 and hash uri rule people build clients. their clients will
> expect to find 303s, resp. URLs with hashes, for all cs and ss URIs.
> both will know that if they encounter information servers from the
> other 'world views', this will not be true. so they might throw a
> warning saying 'warning: document is not hash-uri-rule compliant', or
> 'warning: expected 303, got document'. but their clients would
> probably just deal with these warnings.
>
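
To make that client behaviour concrete, here is a minimal consumer sketch
(Python standard library only; the function name and labels are mine, plain
http only for brevity) that applies the hash uri rule first, then looks for
a 303, and emits exactly the kind of warning you describe when it gets a
document back instead:

    import http.client
    from urllib.parse import urldefrag, urlsplit

    def classify(uri):
        """Guess whether `uri` names a thing or a document (sketch only)."""
        bare, frag = urldefrag(uri)
        if frag:
            # hash uri rule: the fragment cannot be dereferenced on its own,
            # so the URI is taken to name a thing rather than the document
            return "thing (hash URI)"
        parts = urlsplit(bare)
        conn = http.client.HTTPConnection(parts.netloc, timeout=10)
        conn.request("HEAD", parts.path or "/")
        resp = conn.getresponse()
        if resp.status == 303:
            # 303 See Other: the URI names a thing; the Location header
            # points at a document describing it
            return "thing (described by %s)" % resp.getheader("Location")
        print("warning: expected 303, got document")
        return "document (status %d)" % resp.status

A real consumer would go on to fetch the describing document after the 303;
the point here is only that neither convention breaks the other, as you say.
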
> so i agree totally with you that consumers of data should be aware
> that different providers of data use different systems, and that's how
> the web can be 'broken and alive' as you say.
>
> but so far we assumed that the vocabulary specs were written by
> punning people. what happens when a 303 person writes a vocabulary?
> They'll say for instance:
>
> Where indicated, this vocabulary relies on 303s to determine whether a
> 'c-' relation is cs or cc.

There are two types of 303 people:

1. Those who wander into the realm
2. Those who have opted to support non-# URIs using this pattern.

#1 couldn't care less, since they haven't even bought into the concept of
semantic fidelity via structured content bearing relational property
graphs.

#2 has a subgroup (strangely enough) comprised of RDF folks and RDF-based
Linked Data folks. The RDF folks just use URIs and assume they are
ambiguous. They don't necessarily factor in the Web; strange but true.
As for the RDF-based Linked Data folks, assuming creation and publication
of a vocabulary in this case, they will disambiguate URIs, i.e., actually
give one HTTP-URI-based Name to the Web Resource that delivers the
"sense of" the license via its content, and another that actually
identifies/names the license itself.
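
At the protocol level that disambiguation is just a 303 wired into the
publisher. Here is a minimal sketch (Python's http.server; the
/resource/license and /page/license paths are hypothetical, purely for
illustration) of giving one HTTP URI to the license itself and another to
the Web Resource whose content delivers its "sense":

    from http.server import BaseHTTPRequestHandler, HTTPServer

    LICENSE_THING = "/resource/license"   # names the license itself
    LICENSE_DOC = "/page/license"         # names the document describing it

    class LinkedDataHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == LICENSE_THING:
                # the license itself cannot be transmitted over HTTP, so
                # answer 303 See Other, pointing at the describing document
                self.send_response(303)
                self.send_header("Location", LICENSE_DOC)
                self.end_headers()
            elif self.path == LICENSE_DOC:
                self.send_response(200)
                self.send_header("Content-Type", "text/html; charset=utf-8")
                self.end_headers()
                self.wfile.write(b"<p>Human-readable sense of the license.</p>")
            else:
                self.send_error(404)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), LinkedDataHandler).serve_forever()

The hash URI camp gets the same separation without the extra round trip by
minting something like /page/license#it for the license itself.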

As I've said, this whole HttpRange-14 imbroglio is so back to front it's
untrue. Sadly, it continues to turn what should be a computer science
continuum re. distributed data objects (decoupled from language-specific
methods, i.e., data members and methods are distinct) into artificially
alien territory. Instead of folks developing useful apps, there is always
this darned ticking bomb lurking in the background, injecting friction
and eventual inertia.


>
> And then for 'subject' they would say "Note: unless the URI given
> yields a 303, the default assumption of this vocabulary is that the
> current document is secondary literature about another document."
>
> Likewise for hash-uri-rule people.
>
>
> It gets a bit awkward if vocabulary authors don't state which world
> view they belong to. then we probably end up asking on stackoverflow
> "does the potplants vocab assume 303s?" and then people would look at
> who wrote it, and maybe ask the author and post the answer. i mean,
> there would probably be a way to find out when a vocab author intends
> implementors to rely on 303s for disambiguation of the vocab spec.
>
> so yes, i am aware that there are, and will be people who do not
> accept punning as the solution, and will instead continue to rely on
> 303s and the hash-uri-rule in their information servers, information
> consumers, and vocabulary specs.
>
> so i'm not saying we should stop these people from doing that. we
> should make our clients interoperate with them, surely.

Yes, this is fundamentally about accepting another option besides 303.
The problem is that, in making the case for another option, there remains
an underlying disregard for why the 303 solution exists in the first place.
The WWW has delivered billions of identifiers on a platter. Why not find 
a way to incorporate them into other Web usage dimensions (e.g., Linked 
Data) without placing any burden on those unaware of said dimension?

RDF has introduced too much confusion into the journey from a Web of
Linked Document Objects to a Web of Linked Data Objects. Between both
realms we have this grey area that's all about structured data and a
modicum (if even that) of perceivable relation semantics. Said grey area
(catered for by 303) is a critical foundation for Linked Data and the
eventual high-semantic-fidelity relations of the Semantic Web.
> and they
> should make their clients interoperate with our specs and information
> servers. that way everybody can be happy and the web can be 'alive'.

Making our clients implement the punning solution took approx. one second,
since it boiled down to removing comments from initial code in our Linked
Data server. Yes, we implemented that approach from the get-go; then TimBL
explained the problems re. Web interop, we grokked and appreciated his
point, commented out our code, and got on with our work.

Unobtrusive introduction of Linked Data is the only way it can work.
Forcing folks to use and appreciate vocabularies at the front door will
never work. Look at the Schema.org project from Google: they understand
this fundamental point.

>
>


-- 

Regards,

Kingsley Idehen	
Founder&  CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca handle: @kidehen
Google+ Profile: https://plus.google.com/112399767740508618350/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen

Received on Monday, 14 May 2012 13:12:31 UTC