Re: draft: Requirements for Any Theory of 'Information Resource'

On Wed, Feb 16, 2011 at 9:15 PM, Nathan <nathan@webr3.org> wrote:
> Jonathan Rees wrote:
>>
>> On Wed, Feb 16, 2011 at 3:45 PM, Nathan <nathan@webr3.org> wrote:
>>>
>>> Jonathan Rees wrote:
>>>>
>>>> http://www.w3.org/2001/tag/awwsw/2011/axioms-2011-02.html
>>>>
>>>> This expands on the 'predictive metadata' thing I wrote.
>>>
>>> good write up
>>
>> Thanks for the quick turnaround!
>
> np - sorry this response took a while, been caught up with clients for a few
> hours!
>
>>> "bound to" is very weak imho, I'd swap it to read:
>>>
>>>  (def) An 'information resource' is 'identified by' a URI iff every
>>>  simple IR that is 'relevant to' the URI 'is a reading of' the
>>>  information resource.
>>
>> Well, (1) "identified by" would be a mismatch to web terminology if
>> httpRange-14 were withdrawn, (2) I'm talking about only the narrow
>> situation involving dereferenceable URIs, not mailto: and 303s, and
>> (3) I avoid the term like the plague because I don't know what it
>> means. So I'll stick with "bound to" since it's less familiar in the
>> context, but will consider alternatives.  Earlier I had "accessed via"
>> but that implies a protocol, and that's an unnecessary assumption.
>
> good points
>
>> I've reworked that entire section - no more need for 'carries' or
>> 'relevant to' in the current version.
>
> cool
>
>>> the following axiom appears to be wrong, "for any set of", any set?
>>>
>>>  For any set of 'simple IRs' there exists an IR that has all of the
>>>  simple IRs as readings.
>>
>> Have made this more explicit.
>
> :) cool that clarifies
>
>>> and perhaps it would be worth swapping 'RDF graph' to 'RDF Statement' in
>>> the
>>> final axiom.
>>
>> Umm... we need to talk... the idea that it is graphs, not statements,
>> that have meaning is absolutely key to both RDF and OWL semantics. If
>> you haven't read the RDF semantics rec I recommend you go do so now -
>> several times over.
>
> don't worry I've read them (and most of the docs related to what we're
> doing, many times over!), I simply got ahead of myself and was mentally
> thinking about how one could weed out false statements that made the
> interpretation of the graph wrong when applying these axioms (when merging
> graphs from different sources etc) - do ignore.
>
>>> finally, and apologies for this, but the set of axioms you've got there
>>> seems to perfectly fit FTP (they've actually helped me considerably to
>>> see that this view, the IR and httpRange-14 view of the web, sees it as
>>> a web of files/documents, corresponding exactly to the way it was
>>> predominantly used back when it was all static docs that were ftp'd to
>>> servers - makes sense).
>>
>> Exactly what I'm looking for - degenerate and pathological models. If
>> they perfectly fit FTP that says I need some additional axioms.
>
> glad to hear that's what you're looking for - but wasn't HTTP created by
> removing axioms from the FTP model, not adding them? essentially, AIUI,
> HTTP removed the axioms you're adding / outlining at the minute..

Please be more specific - which of my axioms would hold for FTP but not HTTP?
FTP is certainly part of the web, so it needs to be handled.

>> E.g., there exists a simple IR that has a content-type; and there
>> exists an IR that has at least two distinct readings.
>>
>> I guess the axiom I need is that the URI for some really modern,
>> exotic, pathological web page is bound to an IR.  Maybe
>> http://google.com/ ?
>
> I'm unsure tbh, in many respects something which you can GET can always be
> classed as an "information resource" (or source of information), but in many
> cases people don't use that URI to mean the information resource, they use
> it to mean the "Search Engine" or in another case "The Film" or "The Song"
> and so forth.

If you can tell me why those things mustn't be information resources
according to the axioms, or can't be bound to a URI, I'm all ears.
They would make perfect examples.
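
For concreteness, here is the operational content of the httpRange-14
resolution as I read it, sketched as a status-code rule - my paraphrase,
not normative text:

```python
# Paraphrase of the httpRange-14 GET rule (my reading, not normative):
# a 2xx response to a hashless URI licenses the conclusion that the URI
# identifies an information resource; a 303 redirect licenses no
# conclusion about the kind of resource, only points to a description.

def httprange14_verdict(status: int) -> str:
    """Classify a hashless URI given the status code of a GET on it."""
    if 200 <= status < 300:
        return "information resource"
    if status == 303:
        return "unspecified; see redirect target for a description"
    return "no verdict"
```

So on this reading, anything that 200-responds - search engine front page
or not - gets classed as bound to some information resource; the question
is whether the axioms force that classification to be wrong.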

If you have some theory of "information resource" please explain it
and see whether the axioms can be translated into your theory. If some
axiom isn't really needed we need to know that.

> By all means add extra axioms, and I'm sure we can get some
> proofs in the model that they are true - but that'll just be one "view" of
> the world and may well not match reality, or be technically optimized to
> deploy.

The whole point of an axiom is that you can't prove it!  If you have a
model, then you can prove that the axioms hold in terms of that model,
but by themselves they are by definition unprovable. (Except when
redundant, and I'm not aware of any redundancy.)
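
To illustrate the axiom/model distinction with a deliberately toy model -
my construction, not the one in the draft: take a simple IR to be a byte
string, an IR to be a set of simple IRs, and 'is a reading of' to be set
membership. In that model the existential axiom ("for any set of simple
IRs there exists an IR that has all of them as readings") is provable,
trivially:

```python
# Toy model (my construction, not the draft's): simple IR = bytes,
# IR = frozenset of simple IRs, 'is a reading of' = membership.
# The existential axiom is witnessed by the set itself.

def witness_ir(simple_irs):
    """Return an IR having every given simple IR as a reading."""
    return frozenset(simple_irs)

def axiom_holds(simple_irs) -> bool:
    ir = witness_ir(simple_irs)
    return all(s in ir for s in simple_irs)
```

That proves the axiom relative to the toy model; the axiom itself remains
an unproved assumption about the real web.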

> I feel like I need to remind you that I did vigorously defend and back up
> the httpRange-14 decision and IR theory for a very long time, and very
> publicly - but now I have to confess that my view has changed, and that I
> now feel that names are used to refer to things, and whatever most people
> agree a thing names is what it names - web arch and RDF simply have to
> accept that and make it work, not constrain it.

Ah - this is what I was trying to get you to fess up to - that you
reject the standards process as a way to coordinate implementors.  In
that case why on earth are you spending a single minute on W3C
business?

> However, I also respect you, and those that made the IR decision in the
> first place, so happy to go along with proving or disproving it - my own
> theory is that we can both simultaneously prove and disprove it, depending
> on which way we're looking at the situation, or what we're trying to achieve,
> or what use-case we are considering.

That is absolute gibberish. We are talking about interoperability
engineering.  The community decides what agreements it wants to hold
to. If coordination is too expensive, or the people who matter can't
be engaged, no one bothers, and you just suck it up and live with
incompatibility. If interoperability is an issue and there is the will
to get it, it can be obtained.

What I'm saying is that if some people do this one way, and some do it
another, then the value of these URIs plummets and some alternative
means has to be used to defend against inconsistency. When linguistic
territory gets invaded you have to either defend or abandon it. There
is a big investment in the architecture I've described, in
specification preparation, software, and documents. Changing things
means pulling the rug out from those who have invested in it. This has
to be done with respect. We can ditch it, but I want everyone to know
the cost: all generic metadata tools break.

A couple of people have asked what breaks if we ignore the
architecture. Of course nothing breaks, just as having your own
private incompatible version of HTML served up under text/html doesn't
break anything - as long as you don't care about interoperability with
the deployed one, which obviously those who want to withdraw
httpRange-14 don't.

Having different parties prove contradictory statements is the
definition of incompatibility.

What do I have to do to convince you of this?

I'm not trying to defend the quality of the architecture; in many ways
I think it's pretty silly. But I just don't see the point of this much
upheaval and I hate the attitude that you can just ignore a
principled, organized design that lots of people have worked hard to
build and exploit. If # is too hard to write, or a new set of names
defined by anarchy is needed, then let's deploy a new URI scheme or
something - it will be easier than rewriting every spec that talks
about URIs.

Jonathan

Received on Thursday, 17 February 2011 03:04:08 UTC