Re: RDF/XML/Internet Collisions, Process (was Moving on)

-----Original Message-----
From: Simon St.Laurent <simonstl@simonstl.com>
To: Tim Berners-Lee <timbl@w3.org>; xml-uri@w3.org <xml-uri@w3.org>
Date: Tuesday, May 30, 2000 10:38 AM
Subject: RDF/XML/Internet Collisions, Process (was Moving on)


>[This is a set of side issues TimBL brought up in the Moving on thread.
>They're important, but I fear that they could muddy that discussion in
>communal violence, so I've separated them.]


(A good idea ...whenever the actual topic changes. Other list users take
note.)

>>What do we do, for example, when the RDF group has a commitment from the
>>XML community, and then in a public list does not feel any responsibility
>>to uphold that?
>
>This sounds like you're feeling hurt.  Could you explain what that
>'commitment from the XML community' means - I don't think the XML community
>was ever asked per se to commit to RDF, and I don't quite understand your
>complaint.


Yes, I suppose I do feel hurt when I and a team of people who are paid
to do it work hard to bring people together to form plans for the future
which all can agree on, and somewhere down the road our attempts to
hold ongoing activity together by pointing out the ties which bind it
are referred to as "random"!  The W3C staff are often in a very difficult
position when groups seem to be very happy to work independently
and produce something which in the end doesn't fit together.  Then
the team's job is to point these things out.

Let me give you a filtered history to illustrate the specific point.
The RDF group were asked to use XML as a serialization language
to promote consistency and re-use. They did, though many would have
preferred S-expressions.
The RDF design needed its building blocks to be first class objects, which
then translated into a need for element types to be first class objects.
This feature was seen by XML folks as an important general tool
and so was - like atomic datatypes - something the RDF people
were happy to have generalized into the XML layer. So the XML folks
did it, and the result was Namespaces.  It was weird in parts, but it was
accepted as it seemed to do the trick, even though some say RDF's use
of it looks weird.  Now, when later there is a suggestion that actually
namespaces (and therefore XML elements, and therefore RDF
properties) are not first class objects, the connection is under threat.
Within the W3C system there are, in principle, coordination groups and
charters and inter-group dependencies which you can fall back on. These
sometimes aren't done very well, but in principle they show that there is
a commitment from all involved to work together.  It is still difficult
even within the consortium structure, which was set up specifically by
people who wanted to solve this problem.

When someone on an open list attacks the ties which bind the system
together, it may be a time for feedback, and it may be a time for
education. It may be a time for more complex process. But it is a time
to respect other parts of the system.

[[[[
>I've attempted to address this issue repeatedly through discussions of
>layering, but you seem to find that inadequate for reasons I can't fathom.


I think we are probably in violent agreement.  So long as one layer's
notion of identity is a subset of the other's then they can communicate,
and layering makes sense.
The only problem, which some people fail to see but of which there are
countless examples, is the case in which there is formally *no*
relationship between identity in one layer and another ("foo" and "./foo"
in one document, or "foo" in documents with different base URIs). This
breaks.
(The problem is, it doesn't break in most cases now, because the breakage
cases are not things people do in these early stages of the use of XML.)
If you don't believe it I can go over it again, or you could try building
an application on top of a DOM which uses a different form of identity
from the application.
This problem only arises with relative URIs, which I think we will have to
discourage for Namespaces until XML 2.0.  If you don't agree that there
is a problem with allowing relative URIs but comparing them either as
strings or as absolutized, depending on your "level", then please continue
this under a different subject line!
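The divergence is easy to demonstrate in a few lines of Python (a sketch
only; the base URIs and the "foo" namespace names are hypothetical, and
`urllib.parse.urljoin` stands in for whatever absolutization a given layer
might perform):

```python
from urllib.parse import urljoin

# Hypothetical base URIs of two documents, each declaring a namespace.
base_a = "http://example.org/dir/doc1.xml"
base_b = "http://example.org/dir/sub/doc2.xml"

ns1, ns2 = "foo", "./foo"   # two relative namespace names in document A

# A layer that compares namespace names as literal strings:
print(ns1 == ns2)                      # False - "foo" != "./foo"

# A layer that absolutizes against the base URI before comparing:
abs1 = urljoin(base_a, ns1)            # http://example.org/dir/foo
abs2 = urljoin(base_a, ns2)            # http://example.org/dir/foo
print(abs1 == abs2)                    # True - same resource

# And "foo" in a document with a different base URI:
abs3 = urljoin(base_b, "foo")          # http://example.org/dir/sub/foo
print(abs1 == abs3)                    # False - different resource
```

So the two layers disagree in both directions: string comparison splits
identifiers the absolutizing layer unifies, and unifies identifiers the
absolutizing layer splits.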

]]]]

>In general, the W3C might do well to 'sell' RDF more strongly, rather than
>hoping the larger XML and Web communities will develop interest on their
>own.  That might mean reconsidering RDF and making it more approachable,
>among other possibilities.


I agree RDF needs better explanations and materials. As always it is
really difficult to know how to spend limited resources.
I am not sure we *need* to sell everyone on RDF.  Many projects are
now getting on board. There is, true, a danger that new projects will
miss RDF and end up with a messy data model as a result, but it is quite
easy to use XSL to suck RDF out of XML documents which have a good
underlying model. Most people involved in RDF are developing it, and
it takes time to evangelize something.  I think an RDF primer would be
useful.
Also, as with XML, there are calls for a simplification down to the bare
minimum for a core - but a lot of people are just building code on top of
what is there.
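The "suck RDF out of XML" point can be sketched without XSL at all; in
Python, for instance, a document with a clean underlying model maps
straight onto RDF-style triples (the `<book>` document and its element
names here are hypothetical, and a real mapping would use XSLT or an RDF
toolkit rather than this toy traversal):

```python
import xml.etree.ElementTree as ET

# A hypothetical XML document with a good underlying model:
# each child element of <book> reads naturally as a property of the book.
doc = """
<book id="http://example.org/book/42">
  <title>Weaving the Web</title>
  <author>Tim Berners-Lee</author>
</book>
"""

root = ET.fromstring(doc)
subject = root.get("id")

# Extract each child element as an RDF-style (subject, predicate, object)
# triple - the element name becomes the property, its text the value.
triples = [(subject, child.tag, child.text) for child in root]

for s, p, o in triples:
    print(s, p, o)
```

The same one-pass extraction would fail on a document whose markup does
not reflect a coherent model - which is exactly the risk for projects that
skip thinking about the data model.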

>RDF's core community is still quite small, and while it overlaps with the
>XML community, there are many members of the wider XML implementation
>community who have never even heard of RDF, much less attempted to read the
>specifications or develop software.


Is that a long term problem? Maybe many people should wait until there
are more tools, and applications with RDF support built in.  The
mainstream long term data and metadata storage folks, such as the
libraries, are on board - but then they have been for a long time.

One neat thing about using URIs as the flexibility point is that the XML
community does *not* have to commit to or understand RDF. Once Namespaces
are first class objects, the URI concept is the main thing connecting the
two, so apart from sharing the "first class object" idea XML and RDF can
be developed in parallel.
This is much more practical and scalable than requiring them to be in
lockstep.


>>In the IETF, there was (and still may be) very strong peer pressure not to
>>break other systems, and very strong peer pressure to stick to the
>>principles of design on which the Internet had been
>>built.   The web now spans two very strong cultures: the Internet culture,
>>and the SGML/XML culture, each with their own set of technical mores,
>>vocabularies, etc. Also, the web has added
>>a few more maxims to its own culture.
>
>I'm not sure that the SGML/XML community and 'Internet culture' are that
>different, though they bring different sets of tools to the table.  I've
>suggested repeatedly that both communities need to accept the likelihood of
>change as these technologies cross, and that the changes may reach the
>core.  Resistance to change appears to be a similarity across both of these
>groups, though the XML community strikes me as a bit more fluid, perhaps
>because it's already an intersection.


XML was created after URIs and SGML, to build on the best of everything
we knew.  You would expect it to be an intersection.  You would expect it
to negotiate departures from existing architecture before it started,
maybe. ;-)
The resistance to change is clearly a subjective judgement.
The notion of the URI space as a universal namespace is pretty basic - the
most basic tenet of the web.  It is like decentralization to the
Internet, or modularity to software.  What are the equivalents from the
markup community?  (This is a real question - new thread please, or
private mail, but I am interested.)  The separation of form and content
is one. Name two more.

The flexibility of the structure appears different depending on
whether you grab a decorative attachment or a fundamental foundational
member.
To say that the recursive self-describing definition of a document
is going to use any mechanism other than URIs is, to me, like RDF using
XML without balancing the angle brackets.

>Namespaces seem to have felt like the Internet culture inflicting its
>toolset on the SGML/XML community (and a partial replacement for
>Architectural Forms),

What do you mean by "toolset"?
URIs?   The concept of URIs is very fundamental, and
yes each scheme is a toolset if you like, but from a very diverse range
of backgrounds.

Architectural forms are something which has never been sold to
a wider community, as far as I can tell.

>at least to a certain part of the community, while
>the IETF-XML-MIME types list has battled repeatedly over how well or poorly
>XML fits into the existing MIME infrastructure.  These are very difficult
>issues to resolve peaceably, though some kind of resolution is necessary.


Agreed.

>>With authority comes responsibility.  One cannot give public
>>accountability of a form which allows a random veto by those only
>>interested in part of the system, and "allergic" to other parts (as you
>>put it).
>
>A good issue to address, though the way you phrase it displays your own
>biases vividly:
>>One cannot give public accountability
>>of a form which allows a random veto by those only interested in part of
>>the system, and "allergic" to other parts (as you put it).
>
>If this were to reflect my own perspective, it might have read:
>|One cannot give public accountability of a form which allows a
>|random veto by those individuals and corporations who happen
>|to be in positions of control, at the expense of communities
>|that have spent considerable effort developing best practices.


Ok, I can compare those two paragraphs carefully.  Under the process,
the individuals and corporations and other organizations which
are involved in W3C work within a process, so that their actions are
not random.  For example, when the staff analyse the working group
response to last call comments, there is a commitment that the document
should only go through if they have been addressed appropriately.
So the result of the review cannot be random.  There is no veto.


By contrast, on a public list there is no commitment to any process,
so people are free to criticize for their own purposes.
I am not criticizing people here, I am comparing processes.
I am interested in, and have spent a lot of time in, both.

You say "in positions of control".  What is control? Who has control?
Many people on this list write code, and that controls what actually is.
That is control, maybe.  The appearance of power which the consortium
has stems from the fact that people in it consort - they agree to work
together.
They participate and commit things. Time - they will review other
people's specifications. Money - to provide an infrastructure and a
full-time staff to help with the very challenging business of
collaboration and consensus across cultures.
They also make a commitment that they are working toward an interoperable
solution across the board.  The result has produced some specs.
No one has to use them, but because a lot of people put a lot of time
in, and because the way they fit together has been the matter of some
thought, they are generally considered useful.


>Reconciling those two will be difficult, of course.  Balancing top-down
>directed development with organically unmanaged bottom-up development is
>rarely a fun job.


Yes.  Some weblike mixture is what works in practice.  It is a matter of
getting the balance right, and of each person realizing where they fit in.

>>I think some form
>>of public issues list with incoming issues automatically logged, and
>>disposition including hypertext links to the resolution, may be a good
>>interface between a working group and a wider public interest group.
>
>Sounds like a good idea, one I'd like to see the W3C put in practice across
>all of its activities.  That'd help, certainly.


We have had the plan on the table for a long time.  <sigh/> We wanted a
moderated state-tracking system for architecture discussions. But in fact
the tools are not things you can get off the shelf, and we don't have
unlimited resources to spend on creating them.

>Simon St.Laurent


Tim Berners-Lee

Received on Wednesday, 31 May 2000 13:02:47 UTC