- From: Stephen D. Williams <sdw@lig.net>
- Date: Tue, 20 Jun 1995 19:34:06 -0400 (EDT)
- To: hardie@merlot.arc.nasa.gov (Ted Hardie)
- Cc: brian@organic.com, peterd@bunyip.com, rating@junction.net, www-talk@www10.w3.org, uri@bunyip.com
>
...
> obstructionism. We are, at the base, discussing how the content of
> the Web will be understood and how we will interact with it in the
> future; these are concerns as important as they are immediate.
Exactly. We have an opportunity to add value to existing and future
data via metadata.
> For those who believe that there is an immediate need to
> establish a reasonable, voluntary labelling scheme, Martijn Koster and
> Ronald Daniel have made very cogent arguments about the need to keep
> access control, labelling, and subject description apart. Ronald
I agree, although I don't think the mechanism should be separate.
Only the values of the metadata and their interpretation should differ.
> Daniel's discussion of URC indicates a working group which is dealing
> with this issue in a very cogent and complete manner. For those who
> need something more immediately, Martijn's work with Robot exclusion
> may provide a workable, ready-to-hand solution. According to the
> Robot exclusion standard, robots check for a file called "robots.txt"
> at any site they traverse; it lists user agent, then the partial URL
My take is that this could immediately take the form of .meta files:
/.meta for site-wide metadata, /.../.meta for subtrees, and
/.../filename.meta for file-specific metadata, with the additional
capability of embedding the metadata as tags in formats amenable to
it (i.e. HTML). The .meta files can be retrieved as simple files or
automatically added to MIME headers, etc. The use of the metadata can
be multifaceted. Only general arenas and syntaxes of metadata need to
be created now, and if done carefully they will be extensible enough
to accommodate all the current and future projects relating to
metadata (URC, OCLC, WWW.org, etc.).
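As a rough illustration of the naming convention above (not any agreed
standard), here is a minimal sketch in Python of how a client or proxy
might enumerate the candidate .meta locations for a document, most
specific first; the function name and example URL are hypothetical:

from urllib.parse import urlsplit, urlunsplit

def candidate_meta_urls(doc_url):
    # Hypothetical lookup order: file-specific .meta, then each enclosing
    # subtree's /.meta, then the site-wide /.meta.
    scheme, host, path, _, _ = urlsplit(doc_url)
    candidates = []
    if path and not path.endswith("/"):
        candidates.append(urlunsplit((scheme, host, path + ".meta", "", "")))
    parts = path.split("/")[:-1]          # drop the filename component
    while len(parts) > 1:                 # stop before the site root (added below)
        candidates.append(urlunsplit((scheme, host, "/".join(parts) + "/.meta", "", "")))
        parts.pop()
    candidates.append(urlunsplit((scheme, host, "/.meta", "", "")))
    return candidates

# candidate_meta_urls("http://example.org/docs/paper.html") ->
#   ['http://example.org/docs/paper.html.meta',
#    'http://example.org/docs/.meta',
#    'http://example.org/.meta']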
...
> Ultimately, of course, it's a hack, and it would need to be
> replaced. Several interesting methods for replacing it have been
> discussed; I believe that the URC method with SOAPs covers the ground
> very well. I have also requested to beta test the "Silk" URA, and I
> encourage others interested in this issue to do the same; if it can
Likewise...
I think that the important points are defining the metadata format
(meta-metadata: <meta AUTHORITY FORMAT ...>), immediate methods of
access to metadata if it can be done in an upward-compatible way
(naming conventions, proxy servers, trivially modified clients), and
pursuit of very complete models of metadata access, etc.
(URN/URC/URA).
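To make the "<meta AUTHORITY FORMAT ...>" idea concrete, here is a
small Python sketch; the attribute names and the sample value are
purely hypothetical, not a proposed standard. It shows what such an
embedded record might look like and how a client could pull the
fields back out of an HTML document:

from html.parser import HTMLParser

# Hypothetical embedded record: AUTHORITY says who asserts it, FORMAT names
# the attribute standard that the CONTENT field follows.
SAMPLE = '<META AUTHORITY="author" FORMAT="KidCode" CONTENT="12.NoViolence">'

class MetaMetaParser(HTMLParser):
    """Collect (authority, format, content) triples from META tags."""
    def __init__(self):
        super().__init__()
        self.records = []

    def handle_starttag(self, tag, attrs):
        if tag.lower() == "meta":
            d = dict(attrs)               # attribute names arrive lowercased
            if "authority" in d and "format" in d:
                self.records.append((d["authority"], d["format"], d.get("content")))

parser = MetaMetaParser()
parser.feed(SAMPLE)
print(parser.records)                     # [('author', 'KidCode', '12.NoViolence')]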
...
> which don't interest them. We should not be choosing for them what
> might interest them or what they should ignore, and interim solutions
> we may have that do that should make clear that even the author's
> opinion of a work should not be its sole reference for appropriate
> audience.
>
> Regards,
> Ted Hardie
> NASA NAIC
I don't believe that many have seen my earlier proposals, so I'll repost
here:
Return-Path: <sdw>
Subject: Re: New Internet Draft on protecting children AND free speech
To: droelke@spirit.aud.alcatel.com (Daniel R. Oelke)
Date: Fri, 9 Jun 1995 01:03:39 -0400 (EDT)
Cc: caci@media.mit.edu, www-talk@www10.w3.org, rating@junction.net,
dnew@sgf.fv.com, nsb@nsb.fv.com
> First all - I *have* read, digested, and thought about the KidCode
> proposal. Here are some points to ponder, both pro and con.
Likewise...
> 1) This is not a "bad" idea per se, but there are some
> realities that need to be applied to the idea of adding
> "KidCode" into the name of everything.
...
> 2) The rating authority section just touches on the idea.
...
> 3) It might get the U.S. government off the idea of needing
> censorship laws, but I doubt it. These are politicians who
...
Let me try to clarify the current proposed solutions, IMHO, so that we
can talk about pros and cons more clearly:
Index: Goals - Meta Info - Retrieval Method - Misc - One Design
Goals:
Prevent censorship (i.e. via government laws, innuendo, etc.)
Lower risk to providers of access and information
Increase quality of information available
Ensure longevity of openness of Internet communication
Make Internet access more 'safe' and controllable for minors
Meta Information to be added:
Meta objects that contain implicitly or explicitly:
Authority, Viewpoint, indexes/content/attributes (ICA), ICA type
ICA info can be divided pretty broadly into factual info
and opinion/ratings with a fuzzy intersection.
(Librarians believe, for instance, that they can index
and categorize in an objective manner.)
Authority = Author, Other (commercial, group, gov's, schools)
Viewpoint = Author, US, Iran, High School, etc. (estimated
from the expertise of the Authority; i.e. I would be
guessing about suitability for Iran with my current
hearsay knowledge. One of the attributes could
indicate veracity.)
Attributes (aka: content, indexes, ratings, etc.) = any kind
of system of meta information. Objective, factual
attributes being clearly defined measures (body count,
number of erogenous zones, mention of electron
microscopes). Subjective, opinion information (i.e.
ratings) being suitability for ages, status,
societies, etc.
ICA Type = Attribute format/standard indicator. Could be
KidCode, URC's, LC, CC, etc.
There are lots of possible policy, usage, filtering,
deployment, etc. issues that are relavent.
Retrieval Method:
Technical details by which meta information and filtering
specs are fed to a meta-info comparator, which decides
what action a browser (or browser proxy) should take. (A rough
sketch of such a comparator follows the list of designs below.)
Designs on the table (that I noticed):
Document Tags (html) (META or CONTENT tag, URC's, etc.)
Header Tags (rfc822/news version of Document Tags)
MIME Tags (either as headers or as part of a multipart document)
Transaction (Radius?, html) server for retrieval clearing.
Note that this could include hierarchies a la DNS,
'feeds' a la Netnews (Usenet), subscription a la
mailing lists, etc. Could be positive and/or
negative.
Server subtree wide meta retrieval
Caching proxy with cached 'ratings' using any other method.
(Note that this allows implementation now without
changing the browser. Also, for ppp, slip, etc. it means
that restrictions couldn't be bypassed easily.)
URL encoding (KidCode.1)
URL Forms encoding (KidCode.2)
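For the meta-info comparator mentioned at the top of this section,
here is a minimal sketch, assuming the hypothetical MetaObject above
and an invented shape for the filtering spec; nothing here is a
proposed wire format:

def decide(meta_objects, filtering_spec):
    # filtering_spec is a hypothetical dict of
    #   {(authority, attribute): maximum acceptable value}.
    # Any assertion that exceeds a limit blocks the document; documents
    # with no matching assertions are reported as unrated so the
    # browser/proxy can apply its own default policy.
    rated = False
    for m in meta_objects:
        for (authority, attr), limit in filtering_spec.items():
            if m.authority == authority and attr in m.attributes:
                rated = True
                if m.attributes[attr] > limit:
                    return "block"
    return "allow" if rated else "unrated"

# decide([violent_page], {("author", "suitable_age"): 13}) -> "block"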
Misc:
Multiple methods and attribute standards will need to be
supported. Multiple simultaneous meta-information sources
need to be supported by the browser/browser proxy.
Existing and emerging meta-information standards should
be adopted asap. (I have a growing list now.)
Mapping needs to be provided so that factual information can
be converted to ratings and so that minimal filtering
selections can be matched against multiple authorities and
attribute typings. (A rough sketch of such a mapping follows
the News example below.)
This effort should also be used to enable better ways to find
info, along with filtering out unwanted info.
All types of communication should be covered: all IP,
telnet, etc. (This is mostly covered by speaking in terms of URL's.)
Only the framework needs to be standardized, although it
would be beneficial to have at least one sanctioned
attribute typing standard to start using for immediate
purposes.
News, as an example, could be handled with headers set by the author,
group messages (similar to a control message in my original scheme),
ratings servers run by third parties, and group-wide defaults set by
charter organizers.
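The mapping mentioned above, converting factual attributes into a
coarse rating so that one filtering selection can cover several
attribute typings, could look roughly like the following; the
thresholds are invented for illustration, and a real mapping would be
published per authority and attribute standard:

def factual_to_rating(attributes):
    # Map hypothetical factual attributes to a minimum suitable age.
    age = 0
    if attributes.get("body_count", 0) > 0:
        age = max(age, 13)
    if attributes.get("body_count", 0) > 10:
        age = max(age, 17)
    if attributes.get("erogenous_zones", 0) > 0:
        age = max(age, 17)
    return age

# factual_to_rating({"body_count": 3}) -> 13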
My favorite deployment path and meta standard is:
Establish simple meta URL/www/html conventions (*.meta, plus
directory .meta and system //.meta data) for retrieval of meta
information pertaining to a document; something easily cacheable.
Assemble current meta-info standards and either extend or create
only what's missing.
Create one or more ratings server methods (http and radius possibly).
Modify www proxy/caching servers (cern_httpd, etc.). The advantages
to this are that a) modified clients aren't required, b) it covers
multiple protocols, and c) when used as the only method to get through
a firewall it is effective in preventing client-side workarounds like
reinstalling. (A rough proxy-side sketch follows this list.)
Classification of the type of account/filtering profile could be done
based on both static information about the account, IP address, etc.
and also dynamically, based on typing a password into a
current-session adult-access form.
Regarding c): I would be very upset if this were used to censor adults;
I am proposing this only for the case where an ISP is selling
ppp/slip access to a minor or to a family that wishes to ensure
restrictions on their account.
Then modify clients to allow personal filtering.
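Here is a rough sketch of the proxy-side check in this deployment
path, reusing the hypothetical decide() comparator above; the data
structures (profiles keyed by IP address, a set of session-unlocked
IPs, a meta cache keyed by URL) are illustrative, not a design for
cern_httpd:

def proxy_allows(client_ip, doc_url, profiles, session_unlocked, meta_cache):
    # profiles maps a client IP to a filtering spec (see decide() above);
    # session_unlocked is the set of IPs that have entered the
    # current-session adult-access password.
    if client_ip in session_unlocked:
        return True                        # adult session: no filtering
    spec = profiles.get(client_ip)
    if spec is None:
        return True                        # unrestricted account
    meta_objects = meta_cache.get(doc_url, [])
    return decide(meta_objects, spec) != "block"

Clients behind such a proxy need no modification at all; personal
filtering in the client (the last step above) can then layer on top
of whatever the proxy enforces.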
Comments Requested.
sdw
--
Stephen D. Williams 25Feb1965 VW,OH (FBI ID) sdw@lig.net http://www.lig.net/sdw
Consultant, Vienna,VA Mar95- 703-918-1491W 43392 Wayside Cir.,Ashburn, VA 22011
OO/Unix/Comm/NN ICBM/GPS: 39 02 37N, 77 29 16W home, 38 54 04N, 77 15 56W
Pres.: Concinnous Consulting,Inc.;SDW Systems;Local Internet Gateway Co.;28May95