- From: David Woolley <david@djwhome.demon.co.uk>
- Date: Sun, 6 Jul 2003 20:21:26 +0100 (BST)
- To: www-style@w3.org
> Who are the people who can't distinguish between HTML, CSS, etc,

Most of the people who ask (off-topic) "how to" questions on the www-html mailing list (HTML is the popular code word for anything that can be done with IE). You don't get that sort of question here, because people who post here have tended to have to understand the distinction to realise that this list might exist.

> .... why can't they distinguish between

Probably because the market is immature and you can sell yourself as a web designer without any real knowledge of the subject.

> those technologies, and why is it bad? When an end user can't tell
> .... that multiple technologies are

Because it normally means that the designer doesn't understand the medium they are using. (One intended characteristic of that medium was that there shouldn't really be people who were only producers or only consumers; the original design of HTML was simple, and deliberately rejected features like colours (colour was rejected explicitly), to make it easy for anyone to create documents. Although commercial producers have largely re-asserted their position, users still have more power than with most media.)

When a designer doesn't understand the difference between CSS and HTML, they start putting important information into the CSS, where it cannot be processed by some users or by automata (a short sketch of this failure mode follows at the end of this message).

> What standards have, in the past, "degenerated" because they were made
> .... able to do more things? What

I can't be sure, but I suspect you are in your twenties. Whilst computing hardware makes real advances, computing software tends to go round in circles, with every step promoted as progress and with people often failing to realise that the new is just a re-hash of the old. (The changes in hardware can, of course, make some things possible, or possible for a wider audience.) If you are quite young, you may not have seen this process, but....

HTML was initially radical because it rejected presentational characteristics, but it is being turned back into something equivalent to the desktop publishing languages that actually pre-dated it. (Those languages may have been only machine-readable, but HTML is heading that way.)

LDAP stands for *lightweight* directory access protocol, yet even before it was released it started to become more complex than the X.500 protocol from which it was cut down (X.500 itself being considered to have grown too complex).

PL/I was designed as a universal programming language, and was used for some time, but eventually died out. CPL was another such language, and never got fully implemented; as a tool, it produced a very simple untyped language called BCPL, which did get some use and was an ancestor of C. C got more and more strongly typed, and then became C++, which was more strongly typed still. Whilst CPL didn't have the full object concept, I believe it had a lot of the data abstraction that is in C++; other historic languages of that vintage had it. People then started using simple untyped languages again (untyped in a different, but not particularly new, way), like ECMAScript and VBScript.

XML was intended to be a simplified SGML, but is now becoming more complex, and is also adopting characteristics of ASN.1, which would likewise have been considered more complex.

At one stage people hand-coded PostScript, but now they just use it as a printer interface language, even though it is a very powerful scripted graphics language.
(There is a big fashion element here as well; HTML partly succeeded because PDF, a sort of static PostScript, was considered old-fashioned.) The real technological advances of the nineteenth and early twentieth centuries got the public into a state where you could sell any change as "progress".
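To make the CSS point above concrete, here is a minimal sketch of the failure mode; the class name, rule, and wording are invented for illustration:

    <!-- Anti-pattern: the important text exists only in the style
         sheet (CSS2 generated content), so a text browser, screen
         reader, or indexing robot that ignores CSS sees an empty
         paragraph. -->
    <style type="text/css">
      .notice:before { content: "Warning: offer expires 1 July."; }
    </style>
    <p class="notice"></p>

    <!-- Safer: the information is document content, and the CSS
         only controls its presentation. -->
    <style type="text/css">
      .notice { color: red; font-weight: bold; }
    </style>
    <p class="notice">Warning: offer expires 1 July.</p>

The rule of thumb is that the style sheet should be able to disappear without any information being lost.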
Received on Sunday, 6 July 2003 15:45:57 UTC