- From: Harry Halpin <hhalpin@ibiblio.org>
- Date: Sun, 12 Feb 2006 10:43:46 -1000
- To: "Bullard, Claude L (Len)" <len.bullard@intergraph.com>
- Cc: www-tag@w3.org
More power does not always equal less information. For example, both
Haskell and C++ are Turing-complete, but you can argue pretty well
that Haskell, via its type system, monads, etc., gives you *more
information* even though the two sit at the same level of the Chomsky
hierarchy. (The Chomsky hierarchy ranks languages from the regular
languages up through the context-free and context-sensitive languages
to the recursively enumerable languages recognized by Turing
machines.) It gets more confusing if you have something of a *lower
rank* in the Chomsky hierarchy (DTDs?) that you want to argue provides
*less information* than something of a *higher rank* (XML Schemas?).

What I am saying is that knowing whether a language is Turing-complete
gives you some information - namely, whether the halting problem for
its programs is decidable - but the space of all possible information
may not be objectively measurable, although I do think Kolmogorov
complexity and information theory have something to say about that.
However, what we could argue is that knowing some technology's place
in the Chomsky hierarchy gives you only some information; it is far
from the only metric. We can argue that XML Schemas give more
information because their typing information and annotations (not
present in DTDs) let them express more, even though they may be
higher in the Chomsky hierarchy. (I have put a few small sketches at
the end of this message to make these points concrete.)

Bullard, Claude L (Len) wrote:
> That confuses me, Harry. Are you saying that XML Schemas, being more
> powerful and more expressive than DTDs (they are), also provide more
> information?
>
> Wouldn't that contradict the principle?
>
> I get the halting example. The language can't be used to determine
> if an answer will return. In that sense of information (the
> probability of halting), it is undecidable. An analog to this
> discussion occurred recently on the CG list concerning the "reality
> or intuition" of infinities. Practical applications don't care, but
> schools of mathematics bifurcate around that debate (platonism vs
> intuitionism vs constructivism and so on). All computer systems are
> finite if they work; they may use concepts of infinities, but these
> are functional (e.g. limits, or the empty set is a member of all
> sets).
>
> Let me try another example:
>
> If a language automatically casts data types, thus hiding from the
> user what it is doing, it exposes less information in the syntax but
> has more power in the implementation. So in the sense that it hides
> that under the covers, it is more *powerful*. In what it documents
> in the syntax of the program, it is hiding information. One of the
> original principles used to sell object-orientation was 'information
> hiding'.
>
> I'm looking for an example I can explain to the pointy-haired guy
> without him rolling his eyes. "Trust me" isn't good enough. If we
> have to explain the halting problem, he will say "you are making my
> head hurt". That is not a good thing.
>
> len
>
>
> From: Harry Halpin [mailto:hhalpin@ibiblio.org]
>
> Point was that it seems to me the "power" in this note isn't
> Turing-completeness only, but that often less powerful languages
> give you *more information* than more powerful ones. So I'm not sure
> if ranking a bunch of things according to Turing-completeness is
> really all that useful, although it helps!
>
> So an XML Schema gives you more information (i.e. it has more types,
> substitution groups, numeric ranges, etc.) than a DTD, and you
> should use XML Schemas instead of DTDs even if both can be
> implemented as regular languages. (Now the RELAX NG question is a
> whole other post...)
> Same with programming in Haskell versus C - although both languages
> are Turing-complete, Haskell would give you more information about
> itself via its typing system and pure functional architecture, and
> so is more amenable to analysis without looking at the code or
> running the program. I think this way of thinking about it helps
> connect sections 2 and 3 to each other.
>
> One example of this idea of information is Turing-completeness - if
> you know a language is *not* Turing-complete, then you can know
> whether its programs halt or not, while for Turing-complete
> languages "you don't and can't know" - which translates into *less
> information* even if the formalism is *more powerful*.
>
> Ditto for traditional complexity theory re Henry - I give you more
> information if I tell you a problem is of class L (solvable in
> logarithmic space) than if I tell you it's solvable in P (polynomial
> time), and even more than if I told you it was solvable in NP
> (non-deterministic polynomial time), since we don't know if P=NP,
> but we do have a pretty good idea what L is :)
>
> I don't think this requires any major amendments to said document;
> maybe a sentence or two about this, as suggested earlier, might help
> clarify Henry's issues, which confused me as well when I first read
> it, as I thought it was talking only about Turing-completeness - and
> so the Haskell bit seemed a bit weird, but in retrospect it makes
> sense.
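To make the type-system point concrete, here is a minimal Haskell
sketch (a toy example of my own, not anything taken from the draft
finding). The types alone tell you which function can have effects,
before you read a single line of the bodies:

    -- Pure: the type Int -> Int rules out I/O, mutable state, and
    -- anything else beyond computing a result from the argument.
    double :: Int -> Int
    double x = x * 2

    -- The IO in the type advertises that this can affect the world.
    greet :: String -> IO ()
    greet name = putStrLn ("Hello, " ++ name)

In C or C++, a signature like int f(int) promises nothing of the
sort - f could be writing files behind your back - so the Haskell
type carries strictly more information about the program.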
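The halting point can be put the same way (again, just an
illustrative sketch). Restrict yourself to structural recursion and
termination is evident from the shape of the program; allow general
recursion and nothing in the language tells you:

    -- Structural recursion on a finite list: every call shrinks the
    -- input, so this visibly halts on all inputs.
    listLen :: [a] -> Int
    listLen []     = 0
    listLen (_:xs) = 1 + listLen xs

    -- General recursion: whether this halts for every positive n is
    -- the Collatz conjecture, which nobody has settled.
    collatz :: Integer -> Integer
    collatz 1 = 0
    collatz n
      | even n    = 1 + collatz (n `div` 2)
      | otherwise = 1 + collatz (3 * n + 1)

The first definition lives in a weaker-than-Turing fragment and hands
you a termination argument for free; the second has full power and
hands you nothing.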
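And the DTD versus XML Schema point, with Haskell types standing in
for the two validation languages (the <quantity> element and the
1-100 range are made up for the example). A DTD can only say "this
content is character data"; a schema can pin down a typed,
range-restricted value:

    -- Roughly what a DTD can say about <quantity>: it is a string.
    newtype DtdQuantity = DtdQuantity String

    -- Roughly what an XML Schema can say: an integer in [1, 100].
    newtype XsdQuantity = XsdQuantity Int

    mkXsdQuantity :: Int -> Maybe XsdQuantity
    mkXsdQuantity n
      | n >= 1 && n <= 100 = Just (XsdQuantity n)
      | otherwise          = Nothing

The second declaration carries strictly more information about what
counts as a valid document, independently of where either language
sits in the Chomsky hierarchy.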
Received on Sunday, 12 February 2006 20:43:55 UTC