- From: Boris Zbarsky <bzbarsky@MIT.EDU>
- Date: Fri, 19 Sep 2008 20:58:22 -0400
- To: Anne van Kesteren <annevk@opera.com>
- CC: Julian Reschke <julian.reschke@gmx.de>, public-webapps <public-webapps@w3.org>
Anne van Kesteren wrote:
> Well 1) quite a few browsers don't add charset automatically yet like
> Firefox

True, but the reason we started adding it is that it was non-obvious
otherwise which charset the string was encoded in (UTF-8? The page
encoding?), and this was causing real problems.

> and 2) these would always be encoded as UTF-8 because you can
> only get these as DOMString.

If you know you're getting an XMLHttpRequest, sure. But you might not
know that (on the server), especially as we start allowing cross-site
XHR.

> UTF-8 can easily be detected server side

Really? Reliably? Without guessing? I'm not so convinced, to be honest
(especially if a UTF-8 BOM is not included).

> and the author could do application/json;charset=UTF-8 to be sure.

Such constraints only work in the same-server case, really.

> (Also, once JSON becomes something like a native data type we can add
> dedicated support for it.)

That just sidesteps the issue. I'm not sure we want to create a table of
types in XHR that we have to maintain as things come in and out of
vogue. It's simpler to have a one-size-fits-all solution and require the
very few places whose HTTP server implementation is broken to the point
where nothing reasonable works with them to explicitly opt out.

-Boris
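[Editor's note, not part of the original thread: the "can UTF-8 be detected server side" point can be illustrated with a minimal Python sketch. Strict UTF-8 decoding only *validates* a byte sequence; it cannot prove the author's intent, because the same bytes can also be perfectly valid in a legacy encoding such as ISO-8859-1.]

```python
# Sketch: why validating bytes as UTF-8 is not the same as detecting UTF-8.
# The two-byte sequence below is a valid encoding in BOTH charsets, so a
# server that "detects UTF-8" by trying to decode is really just guessing.

payload = "é".encode("utf-8")          # b'\xc3\xa9'

as_utf8 = payload.decode("utf-8")      # decodes cleanly: 'é'
as_latin1 = payload.decode("latin-1")  # also decodes cleanly: 'Ã©'
                                       # (every byte is valid ISO-8859-1)

# Same bytes, two different plausible interpretations — without an
# explicit charset parameter the server cannot tell which was meant.
assert as_utf8 != as_latin1
```

This is why an explicit `charset` parameter on the Content-Type (or a BOM) carries information that the bytes alone do not.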
Received on Saturday, 20 September 2008 00:59:26 UTC