Re: Unified Approaches

Larry certainly hit the nail on the head in questioning feasibility, so I 
will try to respond to that point.  I know that it is easy, for one who is 
more an observer than an active participant in these processes, to 
overgeneralize and to oversimplify, so I will try to be somewhat specific in 
a relatively short note.  And there may be some advantage to an “outsider’s view”. 
Noah’s response prompted me to check his website and to find a paper he 
and Tim Berners-Lee wrote on “power” for computer languages.  The report 
emphasizes that so-called “powerful” constructs for specific purposes are not 
as useful to computer languages as fundamentals.  This is basic to any 
approach to an enhanced XML. 
One example is the document heading (“h1”–“h6” in HTML) specification.  
This appears simple, practical, and easy to use.  However, it implicitly 
entails concepts for the specification of strings, paragraphs, nested 
structures, numbering, nested numbering, some aspects of layout, etc., rather 
than explicitly building from these and letting them all be used 
independently.  For instance, since this is a specific capability rather than 
a general one, there are only six levels of heading defined – not that I 
would expect much use for an “h7”.  In particular, though, dependent aspects 
such as nested structuring and impact on layout are not clearly defined or 
specifiable. 
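To sketch the contrast (a rough illustration only; the `section`/`title` element names here are invented for the example and come from no standard), the fixed h1–h6 scheme bakes depth into six distinct tag names, whereas a general nested structure lets heading level fall out of the nesting itself, to any depth:

```python
import xml.etree.ElementTree as ET

# Fixed scheme: heading depth is baked into six distinct tag names.
fixed = ET.fromstring("<body><h1>A</h1><h2>B</h2></body>")

# General scheme: depth is just nesting, so an "h7" needs no new construct.
def section(title, *children):
    el = ET.Element("section")
    t = ET.SubElement(el, "title")
    t.text = title
    el.extend(children)
    return el

doc = section("A", section("B", section("C")))

def headings(el, level=1):
    """Heading level is derived from structural depth, at any depth."""
    yield el.find("title").text, level
    for child in el.findall("section"):
        yield from headings(child, level + 1)

print(list(headings(doc)))  # [('A', 1), ('B', 2), ('C', 3)]
```

Here numbering, nesting, and level are consequences of one general construct rather than six special-purpose ones.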
On top of this are separate capabilities for style sheets, since the base 
language does not provide adequate facilities for parameterization.  On 
top of that, transforms are added, apparently because the base language 
lacks reference capabilities.  Thus capabilities are added on top, rather 
than being directly available from foundations.  On top of this again are 
added tools, to express constructs which are hard to write, and especially 
to read, in the base syntax.  Such tools again provide useful capabilities 
for end users, but they introduce their own conflicts and, by their very 
nature, limit the base-language functionality available.  On top of this, 
to deal with a profusion of tools, there is a profusion of tool sets. 
Secondly, I found XSD greatly confusing, partly because I was looking for 
schema definitions for data types.  Now, XSD can be extended for metadata – 
and XBRL even introduces concepts of taxonomies to capture report-generation 
parameters.  However, XSD provides for valid specification of the 
equivalent of constructors for data objects, but not directly for their type 
specifications.  Thus XML, which could embrace worldwide data, does not have 
an adequate type-definition capability.  (Infosets do provide for some type 
capabilities, but, here again, these are implicit rather than explicit.) 
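A minimal sketch of the point, using Python’s standard `xml.etree` (the `price` element is invented for illustration): an XML instance by itself carries no type information, so a numeric value is recovered only as character data unless some separate schema layer interprets it.

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring("<price>19.95</price>")

# Without a schema layer, the parsed infoset exposes only character data:
value = doc.text
print(type(value).__name__)  # str, not float

# Typing is applied after the fact, by application code or a separate
# schema processor, rather than being part of the language itself.
typed = float(value)
print(typed)  # 19.95
```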
With types come properties and methods.  One advantage is that, by 
introducing methods, which can be provided in separate libraries, new XML 
capabilities can be introduced without requiring changes to, or cooperation 
from, agents and browsers.  Another advantage is to realize that, in an 
interpretive environment, base specifications can be modified and extended.  
For instance, a data structure (either as a type or an object) can be 
extended, in a particular context, with default presentation attributes. 
Angle-bracket syntax is appropriate for a data type characterized as text 
markup.  However, its use for statements is somewhat stretched.  So JSON, 
for instance, uses a statement-oriented syntax and simple data structures to 
immediately provide something clear and readable from what was previously 
difficult to scan and understand. 
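The readability point can be seen in a few lines (a sketch; the record is invented): what you write in JSON is directly the data structure the program sees, with no markup layer in between.

```python
import json

record = {"name": "Ada", "roles": ["author", "editor"]}

# Statement-oriented, data-structure syntax: the serialized form
# mirrors the in-memory structure and round-trips losslessly.
text = json.dumps(record, indent=2)
print(text)
assert json.loads(text) == record
```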
Namespaces were intended to solve ambiguities from name clashes from 
different contexts, based on the context of the name definition.  However, it 
was realized that, to create unique names, it was sufficient simply to have 
a unique prefix for a name.  Hence the opportunity to develop a concept 
of a “space of names” was lost.  More importantly, the opportunity for 
developing a concept of module was lost. 
Compatibility of course is fundamental.  Compatibility, though, follows 
from a broader concept: that all Web or other data that can be located 
should also be accessible (with deference to the NSA), usable, and available 
for integration with applications. 
These few examples suggest enhanced language fundamentals to deal with 
parameters, references and relationships, tool needs, data types, syntax, 
modules, and data access – and there are a few more.  The suggested approach 
is to start with fundamental language facilities and then to systematically 
extend them to encompass current capabilities. 
Also, the simplification sought here is quite different from the “micro” 
efforts, which limit rather than expand capability.  However, the micro 
efforts are useful in that they elucidate the fundamentals. 
So the feasibility question can be rephrased as: through a process of 
abstraction, generalization, and systematic integration of facilities, can a 
language which has evolved from documents, to interactive presentation, to a 
data language, further evolve into a highly productive computer-application 
modeling language? 
The ancients put it well with regard to power.  It may be impossible to 
untie the Gordian knot, but a swift blow can sever it.  Or maybe we will have 
a Tower of Babel. 
To return to feasibility – technical, cost, and development: 
Technical feasibility is in two parts: building foundations and then 
extending them.  The foundations are not new concepts; they are well 
understood if not precisely defined, and they are largely independent.  
Specifications can be pursued in parallel with some architectural guidance; 
implementation requires integration with current browsers, largely at the 
level of infoset construction.  Extension and simplification can follow 
where needed. 
From the perspective of the rapidly decelerating pace for the acceptance 
of new capabilities, at some point a serious look at language fundamentals 
becomes not only useful but needed, and then imperative.  So if not now, 
when? 
Cost feasibility depends on cost benefit.  Cost is significant, but 
benefits are large.  Since builders of browsers are also interested in 
supporting application-development capabilities, some of these benefits can 
accrue directly to them. 
Development feasibility is tougher and requires both technical and 
business sponsorship.  Therein lies a need for advocacy. 


In a message dated 7/14/2011 9:31:26 P.M. Eastern Daylight Time, writes:

Sounds overly ambitious and unlikely to be feasible. 

From:  [] On Behalf Of
Sent: Thursday, July 14, 2011 3:16 PM
Subject: Unified Approaches

There was a recent note about “a unified approach to structured data on the 
Web  . . .  reduce confusion in the marketplace”.  This note is a query as 
to any interest in a much broader approach to reducing confusion.  Some of 
this has been discussed on “xml-dev”, but with little serious interest. 
Clearly there is a profusion of overlapping and confusing Web standards 
and candidates that could indeed use a unified approach.  Not only would this 
enhance support and use of new standards in itself, but it could also 
become the basis for a significant improvement in technology for application 
development. 
What is proposed in a related paper is an approach toward use of model-
driven declarative specifications that systematically continue the 
development of XML from a document-specification language to full-scale 
support for application development.  Among several other goals, this is 
intended to simplify development by significantly reducing the need for 
procedural code in broad classes of applications. 
Models here are application models, enhanced with generic system models 
for interactive presentation (à la HTML) along with similar models for data 
access and structures, for communications, and for control frameworks. 
So, is there interest in the TAG for such a bold endeavor?  If so, a 
detailed concept paper is available.


Received on Sunday, 17 July 2011 15:32:02 UTC