- From: Tim Bray <tbray@textuality.com>
- Date: Fri, 24 Oct 2003 16:25:35 -0700
- To: Michael Champion <mc@xegesis.org>
- Cc: www-tag@w3.org
On Friday, October 24, 2003, at 08:11 AM, Michael Champion wrote:

> But I still find the proposed draft text --
>
>   "The general success of Web software is evidence that
>   interoperability in networked information systems is best achieved
>   by specifying interfaces at the level of concrete syntax rather
>   than abstract data models or APIs."
>
> -- quite overstated. Maybe it would help to be more concrete about
> what I fear it recommends and rejects by implication.

I wonder if you're paying close enough attention to the phrase "in
networked information systems". When I'm talking from Antarctica's
Visual Net server to Apache's memory-management code, or from the
Ongoing typesetting system to an image-manipulation facility, I want
APIs and data models. When I'm talking from Antarctica's Visual Net
server to a smart client, or from Ongoing's editing system to its
publication system (which are often on two different computers), I
want to interoperate based on XML syntax.

> Looking back, it suggests that one should look mainly at the syntax,
> and not at the implied data model and processing model of HTTP and
> HTML, in accounting for the success of the Web.

I do so suggest. I appeal to Occam's razor.

> In my understanding, the "textuality" (grin) of HTML was indeed a
> necessary condition for the Web's success, but it also required a
> shared conceptual model of what a Web page "is" to succeed in the
> fashion it actually did. That is, to the best of my knowledge (having
> never actually looked at Web browser code), the way a real browser
> makes sense of "tag soup" is to try to fit it into an abstract or
> concrete data model of a Web page.

Sure, but the abstract model used by a robot harvesting links is
entirely different. The abstract model used by the search indexer
behind the robot, doing natural-language processing and token
extraction, is different again. (I know; I've written these things.)
All this just works because the interchange is defined at the level
of syntax. The notion that you could define one data model that would
meet the needs of browsers and robots and indexers, and of all the
other yet-to-be-invented Web software, is silly on the face of it.
(There's a small sketch of this point at the end of this note.)

> Looking ahead, I fear that this implies that XQuery will not be seen
> by the TAG as a viable platform for interoperability over the Web,
> and that IMHO is the whole POINT of XPath and XQuery in many
> real-world situations.

I agree with everything you say about XQuery, but I think XQuery would
be tremendously useful on a standalone computer with no network
connection and a lot of XML on its disks. XQuery is being developed at
the W3C as an accident of history, "because that's where XML stuff
gets done". So I just don't think that being careful to bless XQuery
and friends is really a design goal for the Webarch document -- unless
we want to drop the "networked information system" verbiage in the
introduction and define Web Architecture as "what the membership of
the W3C is working on in A.D. 2003."

> XQuery takes this further and explicitly builds on a reference data
> model that is sufficiently abstract to describe data that has never
> been wrapped in an angle bracket, e.g. an RDBMS table. I see this as
> profoundly important to the Web, because it allows concrete syntax
> in XML files, XML information in XML databases, and non-XML data in
> object-relational databases to be processed and integrated within a
> common framework over the Web.

Take out the phrases "to the Web" and "on the Web" and I'm with you.
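To make the robot-versus-indexer point concrete, here's a minimal
sketch -- a toy document and consumers of my own invention, in Python,
not code from any real crawler or indexer. Two programs read the same
serialized XML and build completely different in-memory models from
it; the serialized bytes are the only contract between producer and
consumers.

```python
# A minimal sketch: two consumers of the same serialized XML, each
# building its own abstract model. Names (DOC, harvest_links,
# extract_tokens) are illustrative only.
import xml.etree.ElementTree as ET

DOC = b"""<page>
  <title>ongoing</title>
  <p>Syntax is the contract; see <a href="http://www.w3.org/">the
  W3C</a> and <a href="http://example.org/">an example</a>.</p>
</page>"""

def harvest_links(xml_bytes):
    # A robot's model of the page: nothing but the outbound links.
    root = ET.fromstring(xml_bytes)
    return [a.get("href") for a in root.iter("a")]

def extract_tokens(xml_bytes):
    # An indexer's model: a bag of text tokens, markup discarded.
    root = ET.fromstring(xml_bytes)
    return "".join(root.itertext()).split()

# Same bytes on the wire; two unrelated models on arrival.
print(harvest_links(DOC))   # ['http://www.w3.org/', 'http://example.org/']
print(extract_tokens(DOC))  # ['ongoing', 'Syntax', 'is', 'the', ...]
```

Neither consumer needs the other's model, and neither model needs to
anticipate the next consumer somebody invents; that's the property the
draft text is getting at.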
Coming back to XQuery: let's assume all of what you say is true, but
what is Web-specific about XQuery & friends aside from the fact that
it's being done at the W3C? I'm not asking rhetorically; my perception
of XQuery may be incorrect.

Cheers, Tim Bray
http://www.tbray.org/ongoing/