- From: Dave J Woolley <DJW@bts.co.uk>
- Date: Tue, 15 Aug 2000 12:43:53 +0100
- To: www-html@w3.org
> From: Cavre [SMTP:cavre@mindspring.com]
>
> For example a HTML parser would ignore anything in between
> <smgl></smgl> or <xml></xml> but then again it might not and

[DJW:] HTML parsers are explicitly required to do the exact opposite. This makes sense if you consider HTML as a true markup language, as the underlying plain text should still make some sense. The problem arises from ignoring the origins of HTML and trying to add information as text which should not be text.

Consider, for example, the (deprecated) font element. If the rule for unrecognized elements weren't to ignore the markup and render the contents, most recent web pages would be blank in modern browsers, whereas, in some cases, they are more readable because the markup was inappropriate.

Incidentally, HTML is SGML!

> attempt to display these as standard text.

Depends on the parser.

> I will agree that backwards compatibility might be a strong reason
> why you don't want a validator/parser to simply ignore markup for
> another vocabulary.

[DJW:]

--
--------------------------- DISCLAIMER ---------------------------------
Any views expressed in this message are those of the individual sender,
except where the sender specifically states them to be the views of BTS.
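A minimal sketch of the rule discussed above, using Python's html.parser purely as a stand-in for a browser's parser (the class and names below are illustrative, not from the original message): the tags of an unrecognized element such as <smgl> are dropped, while its character data is kept and still rendered.

from html.parser import HTMLParser

class TextCollector(HTMLParser):
    """Collects character data; unrecognized start and end tags are skipped."""

    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        # Called for text content regardless of whether the enclosing
        # element is known, so the underlying plain text survives.
        self.chunks.append(data)

parser = TextCollector()
parser.feed("before <smgl>text inside an unknown element</smgl> after")
print("".join(parser.chunks))
# -> "before text inside an unknown element after"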
Received on Tuesday, 15 August 2000 07:44:11 UTC