RE: Principle of Least Power

That confuses me, Harry.  Are you saying that XML Schemas being more powerful 
and more expressive than DTDs (they are) also provide more information?

Wouldn't that contradict the principle?

I get the halting example.  The language can't be used to determine 
whether an answer will return.  In that sense of information (the 
question of whether it halts), it is undecidable.   An analog to this 
discussion occurred recently on the CG list concerning the 
"reality or intuition" of infinities.  Practical applications 
don't care but schools of mathematics bifurcate around that debate 
(platonism vs intuitionism vs constructivism and so on).  All 
computer systems are finite if they work; they may use concepts 
of infinities, but these are functional (e.g. limits, or the empty 
set is a member of all sets).

Let me try another example:

If a language automatically casts data types, thus hiding from the 
user what it is doing, it exposes less information in the syntax 
but has more power in the implementation.  So in the sense that it 
hides that under the covers, it is more *powerful*.  In what it 
documents in the syntax of the program, it is hiding information. 
One of the original principles used to sell object-orientation 
was 'information hiding'.
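A minimal sketch of that trade-off (in Python, as an illustration not 
drawn from the original posts): implicit numeric coercion is more 
powerful in the implementation, but the syntax carries less information 
about what actually happens.

```python
# Implicit coercion: the language silently promotes the int to a float.
# Convenient, but the syntax no longer documents the conversion.
a = 1        # int
b = 2.5      # float
c = a + b    # 'a' is cast under the covers; c is a float

# The explicit version exposes the same operation in the syntax,
# trading convenience for information the reader can see.
c_explicit = float(a) + b

assert c == c_explicit == 3.5
assert isinstance(c, float)
```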

I'm looking for an example I can explain to the pointy-haired guy 
without him rolling his eyes.  "Trust me" isn't good enough.  If 
we have to explain the halting problem, he will say "you are making 
my head hurt".  That is not a good thing.

len


From: Harry Halpin [mailto:hhalpin@ibiblio.org]

   Point was that it seems to me the "power" in this note isn't only
Turing-completeness, but that often less powerful languages give
you *more information* than more powerful ones. So I'm not sure if
ranking a bunch of things according to Turing-completeness is really all
that useful, although it helps!

So an XML Schema gives you more information (i.e. it has more types,
substitution groups, numeric ranges, etc.) than a DTD, and you should use
XML Schemas instead of DTDs even if both can be implemented as regular
languages.  (Now the RELAX NG question is a whole other post...)  Same
with programming in Haskell versus C - although both languages are
Turing complete, Haskell would give you more information about itself
via its type system and pure functional architecture, and so is more
amenable to analysis without looking at the code or running the
program. I think this way of thinking about it helps connect sections 2
and 3 to each other.
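As an illustration (not taken from the original note), compare how the
two schema languages would declare the same hypothetical "age" element.
The DTD can only say it is character data, while the XML Schema version
documents a numeric type and a value range:

```xml
<!-- DTD: "age" is just parsed character data; nothing says it is a number -->
<!ELEMENT age (#PCDATA)>

<!-- XML Schema: the same element carries a numeric type and a range -->
<xs:element name="age">
  <xs:simpleType>
    <xs:restriction base="xs:integer">
      <xs:minInclusive value="0"/>
      <xs:maxInclusive value="150"/>
    </xs:restriction>
  </xs:simpleType>
</xs:element>
```

A validator can reject age="abc" or age="-5" against the schema, while
the DTD gives it no grounds to object - the extra information is in the
formalism itself.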

One example of this idea of information is Turing-completeness - if you
know a language is *not* Turing-complete, then you can know whether a
program halts or not, while for Turing-complete languages "you don't
and can't know" - which translates into *less information* even if the
formalism is *more powerful.*
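A concrete way to put the contrast (a Python sketch, not from the
original note): a sub-Turing formalism such as a classic regular
expression is guaranteed to halt on any input, while a program in a
Turing-complete language carries no such guarantee.

```python
import re

# A regular expression is not Turing-complete, so matching is guaranteed
# to terminate on any finite input - more information about its behavior.
assert re.fullmatch(r"a+b", "aaab") is not None

# A Turing-complete language gives no such guarantee: whether this loop
# terminates for every n is the (still open) Collatz conjecture.
def collatz_steps(n):
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(6))  # 8 steps: 6 -> 3 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1
```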

Ditto for traditional complexity theory re Henry - if I tell
you a problem is in class L (solvable in logarithmic space), that tells
you more than if I tell you it's solvable in P (polynomial time), and
even more than if I told you it was solvable in NP (non-deterministic
polynomial time), since we don't know if P=NP, but we do have a pretty
good idea what L is :)

I don't think this requires any major amendments to said document; maybe
a sentence or two about this, as suggested earlier, might help clarify
Henry's issues.  They confused me as well when I first read it, as I
thought it was talking only about Turing-completeness - and so the
Haskell bit seemed a bit weird, but in retrospect it makes sense.

Received on Wednesday, 8 February 2006 22:28:29 UTC