Re: The Rule of Least Power (UNCLASSIFIED)

Classification: UNCLASSIFIED
I would say that, at best, the rule is misguided.  It is more beneficial to all audiences to always use the way of closest intention: if a tool exists that most closely aligns with the given intention of a data facet or communication instance, then that tool should be used regardless of whether it is more or less powerful than the alternatives.

I say this for two reasons:

1. "Least powerful" is not a well-defined term, and the ambiguity surrounding it has failed the web on many occasions.
2. The end result of a given task is what matters most, but achieving that result is coupled with cost constraints on how it is attained.

In a given economic system costs are fixed.  They can be transferred from one agent in the system to another, but they are not diminished for the system as a whole.  When I use the term "least powerful", to whom should it apply?  Does it mean least powerful for writing the instance, for parsing the instance, for writing the parser, or even for natural-language understandability?  In my experience software tasks generally have a fixed cost.  Making a task less demanding for the author of a given instance generally requires a more powerful parser to take up the slack of the author's sloppiness.  In other words there is a balance to the system, and when one party is favored the costs are transferred disproportionately to the remaining parties.  This does not hide the costs, but it does make them less obvious, which creates additional and often unnecessary challenges.

Web technologies are notoriously sloppy.  XML was created to supply a more limited, confined, and terse syntax than SGML allows, precisely to eliminate much of this sloppiness.  In my opinion this is fantastic and certainly the way to go.  Instance authors have to try a little harder to avoid errors, which means parsers don't have to be nearly as complex as HTML parsers.  Costs are more evenly balanced between the parties, and as a result bugs become easier to detect.  HTML tolerates far more sloppiness, which means HTML parsers are more complex and obscure bugs may never be detected.
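
To make that cost balance concrete, here is a small sketch using Python's standard-library parsers as stand-ins for real-world implementations (the markup is a made-up example): a strict XML parser pushes the cost back onto the author by rejecting a malformed instance, while an HTML parser absorbs the cost by accepting whatever it is given.

    # Sketch: strict XML parsing vs. tolerant HTML parsing (Python stdlib).
    # The markup below is a hypothetical example.
    import xml.etree.ElementTree as ET
    from html.parser import HTMLParser

    sloppy = "<p>unclosed paragraph<p>another <b>bold"

    # XML: the cost falls on the author; the parser refuses malformed input.
    try:
        ET.fromstring(sloppy)
    except ET.ParseError as err:
        print("XML parser rejects it:", err)

    # HTML: the parser takes up the slack by accepting the sloppy input.
    class TagLogger(HTMLParser):
        def handle_starttag(self, tag, attrs):
            print("HTML parser accepts start tag:", tag)

    TagLogger().feed(sloppy)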

Out of the box XML does nothing for accessibility, because it is merely a syntax without a schema, while HTML hardly does any better because of its high tolerance for sloppiness.  Achieving accessibility is an additional requirement, which means an additional cost factor.  Accessibility is extremely expensive to achieve on the web, with very expensive penalties, even though the requirements are commonly known and clearly stated.  This is because the associated costs are extraordinarily out of balance.  As stated earlier, HTML tolerates a great deal of sloppiness, which is one disruptive factor.  The other disruptive factor is that, in the standardized language, semantics and accessibility are permitted but not required by the design of the language.  This means there is little or no motivation for a document author to care.  A more powerful parser will not address the problem.  The costs are transferred to an enforcing party and then transferred back to the document author in the form of government fines, lawsuits, and boycotts.  The question of power quantity, or even of power distribution, completely misses the point and provides no solution.
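
As a small illustration of that point (again a sketch using Python's standard html.parser; the markup is a made-up example), an image with no alternative text is just as acceptable to the parser as one with it, so nothing in the language itself pushes the author toward the accessible form:

    # Sketch: the parser accepts an image with or without alt text equally.
    from html.parser import HTMLParser

    class AltChecker(HTMLParser):
        def handle_starttag(self, tag, attrs):
            if tag == "img":
                has_alt = any(name == "alt" for name, _ in attrs)
                print("img parsed; alt text present:", has_alt)

    # Both instances parse without complaint; only an external checker
    # (or a fine, lawsuit, or boycott) distinguishes them.
    AltChecker().feed('<img src="chart.png" alt="Quarterly sales chart">')
    AltChecker().feed('<img src="chart.png">')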

I have plenty of other examples of how sloppiness in web technologies is disruptive.  The principle of "least powerful" has never addressed this disruption, and in my opinion it is partially to blame for the problem existing in the first place.

Austin


On 06/27/12, Michael Kay  <mike@saxonica.com> wrote:

> On 21/06/2012 17:44, Costello, Roger L. wrote:
> >Hi Folks,
> >
> >Below is a discussion of the rule of least power and how it applies to XML Schema design. The rule of least power is very cool. Comments welcome.  /Roger
> >
> >
> >The rule of least power says that given a choice of suitable ways to implement something, choose the least powerful way.
> >
> While I can see the arguments, I have to say I am very uncomfortable with this as an architectural principle. A great deal of software design is concerned with building systems that have potential for change, and that means choosing technologies and designs that provide enough headroom to cope with future requirements as well as current requirements. I think this "rule" could be used to justify some really poor design decisions, for example using a text file for data interchange instead of using XML.
> 
> Michael Kay
> Saxonica
Classification: UNCLASSIFIED

Received on Thursday, 28 June 2012 17:29:51 UTC