Re: Principle of Least Power

Having given the new document a quick read (which I like!), it
appears to be doing two things:

1) Analyzing languages by their level in the "Chomsky Hierarchy"
leading up to Turing completeness, not by worst-case time-complexity
results. But obviously this note goes beyond saying "Where possible,
use languages whose formal power is lower in the Chomsky Hierarchy" -
note the references to the analysability of languages, e.g. the
reference to Haskell etc.
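To make the hierarchy point concrete, here is a small toy sketch of my own (not from the draft): a formally regular pattern versus a language that needs at least context-free power.

```python
import re

# Illustrative only: where two toy languages sit in the Chomsky Hierarchy.
# A (formal) regular expression - the hierarchy's lowest practical rung -
# can recognize "any run of a's followed by any run of b's":
flat = re.compile(r"^a*b*$")
assert flat.match("aaabb") is not None

# ...but no formal regular expression can recognize a^n b^n (equal counts);
# that language needs at least context-free power. A hand-written
# recognizer with unbounded memory handles it easily:
def equal_counts(s: str) -> bool:
    half, rem = divmod(len(s), 2)
    return rem == 0 and s == "a" * half + "b" * half

assert equal_counts("aaabbb")
assert not equal_counts("aaabb")
```

The point being that the lower a language sits in the hierarchy, the more questions about it (emptiness, equivalence, etc.) are decidable.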

2) It is actually focusing on something beyond Turing-completeness - the
ability of a program or data expressed in a particular language to be
"analysed" a priori, without running it. This is more of a generalization
of the above and does (I'd guess) cover worst-case time-complexity results.

 This could be stated explicitly somewhere, e.g. by saying, "By power we
mean the ability of data or a program expressed in that language to be
analysed so as to guarantee certain behavior." I would be tempted to add
"Since we can know whether a program in a non-Turing-complete language
will halt or not, such a program can guarantee certain behavior, while a
program in a Turing-complete language cannot."

So, somewhat paradoxically, less powerful languages in theory give you
*more information* about themselves than more powerful languages do.
Obviously metadata would tie in here, and this ties into the
Self-describing Web note. Worst-case time complexity and
Turing-completeness would then be among the sorts of information that
you could know about a program or data a priori - i.e. before running it.
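The a-priori point can be made concrete with a toy sketch (the mini-language and all names here are invented for illustration): because the language below can only loop a constant number of times, every program is guaranteed to halt, and a worst-case step count can be computed before running anything.

```python
# Toy "least power" demo: programs in a mini-language whose only loop is
# REPEAT <constant literal>. Because unbounded iteration cannot even be
# expressed, halting is guaranteed, and a worst-case step bound is
# computable a priori - i.e. without executing the program.

def static_step_bound(program):
    """Return an upper bound on execution steps, computed statically."""
    bound = 0
    for op, arg in program:
        if op == "REPEAT":       # arg = (constant count, sub-program)
            count, body = arg
            bound += count * static_step_bound(body)
        else:                    # any straight-line instruction costs 1
            bound += 1
    return bound

prog = [("SET", ("x", 0)),
        ("REPEAT", (3, [("ADD", ("x", 1)),
                        ("REPEAT", (2, [("ADD", ("x", 1))]))]))]

# 1 + 3 * (1 + 2 * 1) = 10 steps, known before the program ever runs.
print(static_step_bound(prog))
```

Nothing comparable is possible in general for a Turing-complete language, where the analogous question is the halting problem.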

Then, if people wanted to be more particular, somewhere in section 1 one
could state "In particular, in this section we are going to look at
power as expressed by Turing-completeness, not at other aspects of power
such as worst-case time-complexity results."

And then, to address Len's concerns, one could state something along the
lines of "By power we do not mean the subjective feeling of power that
particular programming languages or data formats give their users, but
the amount of information a language tells us about the programs or data
expressed in it. By constraining what can be expressed and so making
themselves amenable to analysis, certain languages - while less powerful
in terms of Turing-completeness or other measures of power - actually
give the Web more information about themselves, since they can guarantee
certain behavior."

And everybody wants more information about everybody else :)


Henry S. Thompson wrote:

>'Expressive' and 'power' are loaded words, what I guess I was asking
>for was more precision in their use.
>It's important to keep in mind, for example, that not-Turing-complete
>does _not_ mean 'safe' or "can't be used for a DoS attack" -- XML DTDs
>are nowhere near Turing-complete, but, as explained by me [1], the
>general DTD satisfiability problem is NP-hard none-the-less.  The
>same holds for
>   * RELAX NG with W3C XML Schema Datatypes
>   * Schematron
>   * W3C XML Schema
>   * Namespace Routing Language
>as explained by Robert Lyons [2].
>Another way to put this is that _either_ we should restrict what we
>say to "avoid Turing-complete languages if you don't need them" _or_
>we should be clear about how you measure "power" so you can _tell_
>whether language A is or is not less powerful than language B.
>Worst-case time complexity of the recognition problem would be my
>first candidate for an appropriate measurement, but I'm not sure
>whether that applies in all the cases we'd like PLP to cover, or
>whether it's even _known_ for all the languages we'd like to cover.
>-- 
> Henry S. Thompson, HCRC Language Technology Group, University of Edinburgh
>                     Half-time member of W3C Team
>    2 Buccleuch Place, Edinburgh EH8 9LW, SCOTLAND -- (44) 131 650-4440
>            Fax: (44) 131 650-4587, e-mail:
>                   URL:
>[mail really from me _always_ has this .sig -- mail without it is forged spam]

Received on Wednesday, 8 February 2006 18:13:59 UTC