
Re: Web Architecture

From: Melvin Carvalho <melvincarvalho@gmail.com>
Date: Mon, 27 Aug 2012 16:36:55 +0200
Message-ID: <CAKaEYhKLs2CKV8i89xtzFDJ5SnU3jD2usiPK9YDxaRu7pnz59Q@mail.gmail.com>
To: BillClare3@aol.com
Cc: nrm@arcanedomain.com, www-tag@w3.org, timbl@w3.org, ashok.malhotra@oracle.com, ossi@w3.org, ossi.nykanen@tut.fi, ht@inf.ed.ac.uk, masinter@adobe.com, ylafon@w3.org, jeni@jenitennison.com, robin@berjon.com, jeff@w3.org
On 27 August 2012 16:17, <BillClare3@aol.com> wrote:

>
> Noah,
>
>
> Your comments raise interesting questions at the heart of my proposal.  An
> architecture for the Web needs to be based on a clear notion of just what
> the Web is – and that may not be as easy to state as it sounds.  For
> instance, Wikipedia defines the Web as “a system <http://en.wikipedia.org/wiki/Information_system> of interlinked
> hypertext <http://en.wikipedia.org/wiki/Hypertext> documents accessed via
> the Internet <http://en.wikipedia.org/wiki/Internet>”.  More on this at
> the end of the note.
>
>
> In a message dated 8/26/2012 8:21:56 P.M. Eastern Daylight Time,
> nrm@arcanedomain.com writes:
>
> Bill,
>
> Thank you so much for your contribution. Let's see whether some of the
> TAG's members have comments on this. I have only had time for a quick
> skim:
> based on that, there are at least a few high level concerns that occur to
> me:
>
> * The Web is, obviously, a system that is already deployed on a massive
> scale. The AWWW document written in 2004 is not an attempt to craft an
> ideal architecture for a global information system in the abstract,
> although it does make some effort to identify general principles. Rather
> it attempts to document the architecture of the Web as it is now, and as
> it may evolve incrementally from the base we have. I don't see in your
> draft a lot of connection to the existing core mechanisms of the Web,
> such as URIs, HTTP, etc.
>
>
>
> Although HTTP and URI are indeed core mechanisms, the question here is, at
> the risk of great heresy, whether they are essential to a Web architecture.  Integration
> of data and services from many sources is a useful goal and I’m not sure
> that a Web architecture wants to be exclusive.  As examples, XML is not
> just a markup language anymore and JSON is often a useful alternative.
>
>
> * Two key concerns in the design of the Web are scalability and
> discoverability, i.e. the ability for users and software to dynamically
> explore the Web by following links. As described in the Self-Describing
> Web [1], the system is architected to ensure that, with knowledge of the
> URI specification and the specifications to which it transitively refers,
> clients can correctly interact with resources and correctly interpret the
> responses they receive (or else discover reliably that they are not
> prepared for such correct interpretation.) It's not immediately obvious to
> me how your proposal addresses such concerns.
>
>  Scalability is a concern that can be addressed in many ways.  I suppose
> an architecture might preclude it, but I’m not sure how a general
> architecture would explicitly provide for it.  A particular architectural
> approach such as “map/reduce” or “shared nothing” can foster it, where
> applicable.
>
> Discovery is also fundamental, and the approach is to integrate existing
> capabilities such as RDF and WSDL, and even to generalize them where
> feasible.  For instance, WSDL is largely a particular instance of the
> concept of a communications protocol, which is an instance of an interface.
> The concept of an interface might also define preconditions, exception
> handling, and other properties.
>
>
>
> Thank you again for offering this proposal.
>
>      So to the original question – is the Web to be defined in terms of a
> set of protocols from which it has evolved, or are there useful
> abstractions of these protocols that can be exploited?  That of course
> leaves an onus for an alternative definition, which the proposed framework
> suggests; i.e. a structured set of interfaces for data, services and
> applications that communicate with each other, with a firm foundation in
> capabilities from XML and related standards.  And this is a large part of
> the alternative value of such an approach – and, yes, of its challenge.
>

Thanks for sharing your document.  I enjoyed reading it, and it poses some
questions that I've thought about for some time.

Regarding abstractions, the best that I know of are "Design Issues of the
World Wide Web":

http://www.w3.org/DesignIssues/

And the print book, "Weaving The Web".

In particular the "Principle of Least Power" may be relevant here?

http://www.w3.org/DesignIssues/Principles.html
Principle of Least Power

In choosing computer languages, there are classes of program which range
from the plainly descriptive (such as Dublin Core metadata, or the content
of most databases, or HTML) through logical languages of limited power (such
as access control lists, or *conneg* content negotiation) which include
limited propositional logic, through declarative languages which verge on
the Turing Complete (Postscript is, but PDF isn't, I am told) through those
which are in fact Turing Complete though one is led not to use them that
way (XSLT, SQL) to those which are unashamedly procedural (Java, C).

The choice of language is a common design choice. The low power end of the
scale is typically simpler to design, implement and use, but the high power
end of the scale has all the attraction of being an open-ended hook into
which anything can be placed: a door to uses bounded only by the
imagination of the programmer.

Computer Science in the 1960s to 80s spent a lot of effort making languages
which were as powerful as possible. Nowadays we have to appreciate the
reasons for picking not the most powerful solution but the least powerful.
The reason for this is that the less powerful the language, the more you
can do with the data stored in that language. If you write it in a simple
declarative form, anyone can write a program to analyze it in many ways.
The Semantic Web is an attempt, largely, to map large quantities of
existing data onto a common language so that the data can be analyzed in
ways never dreamed of by its creators. If, for example, a web page with
weather data has RDF describing that data, a user can retrieve it as a
table, perhaps average it, plot it, deduce things from it in combination
with other information. At the other end of the scale is the weather
information portrayed by the cunning Java applet. While this might allow a
very cool user interface, it cannot be analyzed at all. The search engine
finding the page will have no idea of what the data is or what it is about.
Thus the only way to find out what a Java applet means is to set it running
in front of a person.
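The contrast drawn here can be sketched in a few lines of Python: weather readings stored as plain (subject, predicate, object) triples, which any program can then tabulate, filter, or average. This is only a toy illustration; the "ex:" names and the readings are invented for this sketch, and a real application would use an RDF library and a published vocabulary.

```python
# Toy sketch of the Least Power idea: weather data kept in a simple
# declarative form (RDF-style triples). The ex: vocabulary and the
# readings below are invented for illustration only.
triples = [
    ("ex:reading1", "ex:city", "London"),
    ("ex:reading1", "ex:temperature", 14.0),
    ("ex:reading2", "ex:city", "London"),
    ("ex:reading2", "ex:temperature", 18.0),
    ("ex:reading3", "ex:city", "Paris"),
    ("ex:reading3", "ex:temperature", 21.0),
]

def objects(predicate):
    """All object values for a given predicate, across the whole graph."""
    return [o for (_, p, o) in triples if p == predicate]

def temperatures_in(city):
    """Join two predicates: the temperatures of readings taken in `city`."""
    readings = {s for (s, p, o) in triples if p == "ex:city" and o == city}
    return [o for (s, p, o) in triples
            if p == "ex:temperature" and s in readings]

temps = temperatures_in("London")
average = sum(temps) / len(temps)  # tabulate, average, plot, combine...
```

The point is that none of these analyses had to be anticipated by whoever published the data; the declarative form itself makes them possible, where a compiled applet would not.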

I hope that is a good enough explanation of this principle. There are
millions of examples of the choice. I chose HTML not to be a programming
language because I wanted different programs to do different things with
it: present it differently, extract tables of contents, index it, and so
on.
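As a small illustration of that last point, here is one of the many possible programs that each do something different with the same declarative HTML: this one only extracts a table of contents from the headings, using Python's standard-library parser. The sample page is invented for the sketch; a renderer or an indexer could consume the identical markup without either needing the other.

```python
from html.parser import HTMLParser

class TocExtractor(HTMLParser):
    """Collect (tag, text) pairs for h1-h3 headings; ignore everything else."""

    def __init__(self):
        super().__init__()
        self.toc = []
        self._capture = None  # heading tag currently open, if any

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3"):
            self._capture = tag

    def handle_data(self, data):
        if self._capture and data.strip():
            self.toc.append((self._capture, data.strip()))

    def handle_endtag(self, tag):
        if tag == self._capture:
            self._capture = None

page = "<h1>Weather</h1><p>Intro.</p><h2>London</h2><p>14C</p><h2>Paris</h2>"
parser = TocExtractor()
parser.feed(page)
# parser.toc == [('h1', 'Weather'), ('h2', 'London'), ('h2', 'Paris')]
```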



>
>
>
> Noah Mendelsohn
> Chair: W3C Technical Architecture Group
>
> On 8/26/2012 6:59 PM, BillClare3@aol.com wrote:
> > W3C TAG members,
> >
> >    It seems typical that over time architecture groups start with broad
> > visions and then tend to become focused on more and more narrow issues.  So
> > with all the accelerating innovations spawned by the Web, is it time for a
> > revisiting of a broad vision?
> >
> >     I suspect many might wish to answer yes to that question, but would
> > question how to proceed.  Attached is a short paper that was written for a
> > slightly different purpose, but which tries to address the question from
> > perspectives of XML language and standards, from that of models and
> > frameworks, from that of integration of resources, data, services and
> > uses, and ultimately from the perspective of applications.  In particular,
> > the paper focuses directly on basic architecture principles, identified by
> > the W3C, as orthogonality, extensibility, error handling and
> > interoperability.  In addition it attempts a basis for completeness as a
> > framework for applications and their development. It is much simpler in
> > content and at a higher level than the W3C recommendation on Web
> > architecture from 2004 – and perhaps that is a good thing.
> >
> >     So is it time for new foundations?  And is this feasible?
> >
> >     Perhaps these notes can be useful for stimulating discussion within the
> > group and might be useful for soliciting formal sponsorship.
> >
> > Thanks for your consideration.
> >
> >
> >          Bill Clare
> >
>
Received on Monday, 27 August 2012 14:37:26 UTC
