- From: Sam Hunting <sam_hunting@yahoo.com>
- Date: Sat, 3 Jun 2000 14:00:15 -0700 (PDT)
- To: Tim Berners-Lee <timbl@w3.org>, Dan Connolly <connolly@w3.org>
- Cc: xml-uri@w3.org, "Simon St.Laurent" <simonstl@simonstl.com>
[Simon St.Laurent writes]
> From the outside of the black box, there appears to be an enormous
> amount of randomness inside the black box. The view on the inside may
> well be different. We simply have no way of knowing, and being told
> that documents published as NOTEs have 'axiomatic' status makes life
> even more confusing.

[Tim Berners-Lee responds]
> (Something can be axiomatic in the design without being published at
> all!)

> > [Sam Hunting wrote]
> > Debater's points aside, the picture of a vendor consortium leading
> > the Web to its "full potential" (TBL's personal architecture
> > document) on the basis of secret (or at least unpublished) "axioms"
> > gives me the chills.

[Tim Berners-Lee writes]
> Sam, give me a break. ;-)

Where would you like it? ;-)

[Tim Berners-Lee writes]
> I was arguing by extreme example that publication
> status of a document and the logical status of the contents in the
> design are not necessarily matched.

Recognizing that the example was extreme, I classified it as "a debater's point". Nevertheless, limit cases are often revealing. If the logical status of the contents of a document is a (secret) superset of the (public) contents of the document, then those readers of the document who rely only on the public contents are bound to go astray in their interpretation and their implementation.

> In this case, the architecture document was always totally public.
> The http://DesignIssues notes (such as I could get time to write them

It's very hard, for an outsider/newcomer to the process, to know how to weigh documents that are to be found on the web. (To give them some sort of semantic multi-dimensional "precedence", if you will.)

Suppose I have terabytes of content that I wish to archive in some reasonably standard, portable way, that I also wish to access or display with cheap or free tools, and that I don't want to "churn" my data a lot -- i.e., I wish to be able to assume that my documents do not break with every revision.
XML springs at once to mind. (This could be labelled the "Desperate Content Provider" problem, as opposed to the "Desperate Perl Hacker" problem ;-)

But how to use XML, exactly? Naively, I look to a published W3C Recommendation, assuming that it would always "trump" a Working Draft, and assuming also that it would "trump" a personal document (no matter that the personal document was written by someone very distinguished). But as it turns out, that's not sufficient. As a Desperate Content Provider, I not only have to understand a published specification; I also have to read a personal document, understand what is "axiomatic" in notes, and who knows what else? If this is so -- if the logical design of the specifications can differ so much from their actual content -- then why were these documents called Recommendations in the first place? Perhaps the "axioms" were not "secret" -- but they were certainly hidden in plain sight!

Since I must now leave this site, I cannot respond to your very fair-minded comments on Internet history and top-down design, to my great regret. To summarize, I'm just not sure that the process by which internet protocols were developed will meet the "moral content" I believe that documents have. I don't have an answer for a better process. I'm glad that a more open process is being contemplated.

S.

=====
<? "To imagine a language is to imagine a form of life."
   -- Ludwig Wittgenstein, Philosophical Investigations ?>
Received on Saturday, 3 June 2000 17:00:50 UTC