RE: [metadataInURI-31] New editors draft for Metadata In URIs Finding

Then you have measures of reasonableness for a game, and the measures
applied are related to the game, its risk, and its reward.  I note the
introduction of "known source of" in your reply.  For a "known source of
good weather forecasts", one assumes a priori knowledge of the
play/access.  In that case, the measure of reasonableness is much less
expensive.  The representation doesn't convey that; the association
does.  I think we agree here.  The trust is within the context of
associations to other sources.  That's fine.

Trust but verify.  The web is caveat emptor because any claims-based
system rests on a blind play: there is no context of associations
(which are usually episodic and pragmatic).  While it is a reasonable
strategy to play 'tit for tat' and do as done unto, without massive
retaliation for defection, it is nonetheless a test-based commitment.
That one can also use other sources of information to make that first
commitment easier doesn't change the nature of the system architecture.
You make the commitment relative to the risk: if it is a zero-sum game,
you assess the loss.  If it is not a zero-sum game, you assess the loss
relative to a series of plays.  If the loss or win does not change the
rules, you are in a Nash equilibrium and you assess the value of the
game itself.
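(For readers who want the tit-for-tat point made concrete, here is a
minimal sketch of an iterated prisoner's dilemma.  The payoff values
are the conventional textbook ones, not anything from this thread; the
strategy and function names are my own illustration.)

```python
# Illustrative sketch (not from the original message): tit for tat
# ("do as done unto") in an iterated prisoner's dilemma, versus an
# unconditional defector.  Conventional payoff values are assumed.

PAYOFF = {  # (my move, their move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate on the first play, then mirror the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Run a series of plays and return the cumulative (score_a, score_b)."""
    score_a = score_b = 0
    seen_by_a, seen_by_b = [], []  # each side's record of the other's moves
    for _ in range(rounds):
        move_a = strategy_a(seen_by_a)
        move_b = strategy_b(seen_by_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (30, 30)
print(play(tit_for_tat, always_defect))  # one blind-play loss, then matched defection: (9, 14)
```

The first access is the blind play: tit for tat risks one round's loss
to test the other party, and the outcome of that test governs the rest
of the series.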

len


From: noah_mendelsohn@us.ibm.com [mailto:noah_mendelsohn@us.ibm.com] 

Len Bullard writes:

> It is reasonable to assume nothing until using the service.  First
> access is blind.  What the claims establish are the conditions to test
> by access (is this a weather report) and, in some claims, repeated
> access (is this the best weather report).  The proof is in the using.
> 
> The metadata presents claims to be verified.  The URI is agnostic to
> the metadata claims.  It is the user that has to be reasonable through
> observation (use and memory of use).

I'm not sure I see things quite this way.  Consider the weather report
example.  While it's true that a user or even software can often
determine that data is erroneous, e.g. because what came back was a
stock quote instead of a weather report, it's atypical to be able to
prove that the data is correct.  I may, for example, be satisfied that
the information retrieved >appears to be< a weather forecast for the
intended city, but my trust that it is a correct, current and reliable
forecast is likely to be based in part on just the sorts of external
factors mentioned in the draft finding.  So, if I see the URI listed on
a seemingly current billboard for a known source of good weather
forecasts, that contributes to my belief that the forecast retrieved is
in fact a good one.  If I get a similar web page by trying random URIs,
I may note that it looks like a weather forecast, but I will trust it a
lot less.

So, I agree that to some extent the metadata is suspect, but I don't
agree with the implication that the verification will come entirely
from inspection of the retrieved representation.  It's certainly
desirable for representations on the Web to be self-describing, but I
think that some of the trust one has in the information retrieved can
be based on other representations made by the resource authority.
These may be in the form of normative specifications for its URI
assignment policies, or may be provided less formally (and probably
less reliably) in advertisements and the like.


> AFAIK, there is no architectural solution to a priori trust of
> information resources.  The web is a caveat emptor system by design.
> Any claims-based system is.

Yes, but the sources of confirming information are not in all cases
limited to what is retrieved from a GET; they may come from other
specifications or statements that can be, with reasonable reliability,
traced to the assignment authority.

--------------------------------------
Noah Mendelsohn 
IBM Corporation
One Rogers Street
Cambridge, MA 02142
1-617-693-4036
--------------------------------------

Received on Wednesday, 31 May 2006 20:20:30 UTC