- From: Frank Ellermann <nobody@xyzzy.claranet.de>
- Date: Fri, 21 Sep 2007 16:04:06 +0200
- To: www-validator@w3.org
olivier Thereaux wrote:

> I guess that the implementation considers that, when using text/xml
> without charset, the us-ascii charset is set and therefore the fbc
> option (which triggers fallback only if there is no charset given at
> all) does not apply. I think this should be relaxed.

If RFC 2616 and RFC 3023 are really incompatible for text/xml without
charset, you could for the moment "fix" this fbc oddity by documenting
it; documented "bugs" are "features" :-)  (A rough sketch of this
charset logic is at the end of this mail.)

>> But what I'd really prefer would be a mode to validate documents
>> "as is" independent of any transport oddities.  Ideally this mode
>> would be the new default, and enabling to check a document in the
>> context of its transport to the validator could be an option.

> I don't think I can agree with you here.  MIME and HTTP may be causing
> issues for a few ill-configured or ill-coded servers and browsers,
> but making it a default to ignore the very important encoding and
> media type info they convey, and using sniffing instead, sounds like
> a bad idea to me.

Browsers and servers typically aren't under the control of document
authors, and fixing the numerous errata in RFC 2616 will take years.
Your typical user is a document author, not a browser or standards
developer.

The issue isn't reporting a warning for conflicting document and
"meta" charset info; the issue is treating this as a _fatal_ error
when that's unnecessary and typically unrelated to what users wish
to validate and can fix, i.e. their document.

The Web services of the validator should be a simple emulation of a
local installation, where you'd look at file://localhost/ URIs for
validated documents.  And file: URIs have no http: headers.

It's a usability question: don't put the blame on ordinary users if
the IETF, W3C, Web hosters, or browser developers screw up.

 Frank
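For illustration, a minimal sketch of the charset resolution quoted at
the top, the way I read it: an explicit charset parameter wins,
otherwise the media-type default applies (us-ascii for text/xml per
RFC 3023, iso-8859-1 for other text/* per RFC 2616), and the fbc
fallback only matters when no charset can be derived at all.  This is
Python pseudo-logic, not the validator's actual code, and the function
name is made up:

    # Sketch only, not the validator's code; names are invented.
    def effective_charset(media_type, charset_param, fallback=None):
        if charset_param:                    # explicit charset=... in Content-Type
            return charset_param.lower()
        if media_type == "text/xml":         # RFC 3023: default is us-ascii,
            return "us-ascii"                #   so the fbc fallback never applies
        if media_type.startswith("text/"):   # RFC 2616: default for other text/*
            return "iso-8859-1"
        return fallback                      # only here would fbc matter

    # text/xml without an explicit charset: the fallback is ignored
    print(effective_charset("text/xml", None, fallback="utf-8"))  # us-ascii

That last line is the case under discussion: the fallback charset is
never consulted, because a default has already been assumed.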
Received on Friday, 21 September 2007 14:04:45 UTC