- From: Boris Zbarsky <bzbarsky@MIT.EDU>
- Date: Thu, 28 Oct 2010 23:37:54 -0400
- To: James Robinson <jamesr@google.com>
- CC: Chris Rogers <crogers@google.com>, Maciej Stachowiak <mjs@apple.com>, Geoffrey Garen <ggaren@apple.com>, Darin Fisher <darin@chromium.org>, Web Applications Working Group WG <public-webapps@w3.org>, Anne van Kesteren <annevk@opera.com>, Eric Uhrhane <ericu@google.com>, michaeln@google.com, Alexey Proskuryakov <ap@webkit.org>, Chris Marrin <cmarrin@apple.com>, jorlow@google.com
On 10/28/10 9:11 PM, James Robinson wrote:

> I think a good rule for any web API is that the user's needs come before
> the author's needs.

And the author's before the implementor's, right? OK, let's take that as given.

> In this case there is a very large amount of content out there today
> that uses XMLHttpRequest to download data, sometimes significant amounts
> of data, and that uses .responseText exclusively to access that data.

Agreed.

> Adding a new feature to the API that causes this use case to be worse
> for the user (by requiring it to use twice as much memory)

In a particular simplistic implementation, right?

> seems like a clear non-starter to me - that would be putting authors
> before users.

More precisely, putting authors before implementors, it seems to me...

> Would you accept a new DOM feature that required each node to use twice
> as much memory?

That _required_? Probably not. But responseArrayBuffer doesn't require twice as much memory if you're willing to make other tradeoffs (e.g. synchronously reading the bytes back in from non-RAM storage) in some situations.

> The memory use and heap pressure caused by XHR has been an issue for
> Chrome in the past, and our current implementation is pretty carefully
> tuned to not preserve extra copies of any data, not perform redundant
> text decoding operations, and to interact well with the JavaScript
> engine.

I understand that.

> It's true that it might be a convenient API for authors to provide the
> response data in all formats at all times.

OK, we agree on that.

> However this would not benefit any content deployed on the web right now
> that uses responseText exclusively and would make the user experience
> unambiguously worse.

It seems to me that if all you care about is the user experience being no worse for content that only uses responseText, you can just dump the raw bytes to disk and not worry about the slowness of reading them back...
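As a concrete illustration of the "other tradeoffs" point above, here is a minimal sketch of a single-copy strategy: keep only the raw response bytes and derive the text lazily on first access, rather than eagerly holding both the bytes and the decoded string. This is a mock object with illustrative names, not a real XMLHttpRequest implementation.

```javascript
// Sketch (hypothetical names, not a real XHR): retain one copy of the
// response -- the raw bytes -- and decode .responseText only on demand.
class MockXHRBody {
  constructor(bytes) {
    this.rawBytes = bytes;  // the single retained copy (a Uint8Array)
    this.decoded = null;    // decoded text, produced only when asked for
  }
  get responseArrayBuffer() {
    return this.rawBytes.buffer; // cheap: the bytes are already here
  }
  get responseText() {
    // Lazy decode: content that never touches responseText never pays
    // for the string copy. (A text-only consumer could likewise spill
    // rawBytes to disk and re-read them only if the buffer is requested.)
    if (this.decoded === null) {
      this.decoded = new TextDecoder("utf-8").decode(this.rawBytes);
    }
    return this.decoded;
  }
}

const body = new MockXHRBody(new TextEncoder().encode("payload"));
console.log(body.responseText);                    // "payload"
console.log(body.responseArrayBuffer.byteLength);  // 7
```

The point of the sketch is only that "supports both accessors" does not have to mean "permanently holds two copies"; the second representation can be materialized on demand.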
You could also have a way for authors to hint to you NOT to dump them to disk (e.g. a boolean they set before send(), which makes you hold on to the bytes in memory instead, but doesn't cause any weird exception-throwing behavior). Is there any benefit in pursuing that line of thought, or do you consider it a non-starter? If the latter, why?

> Instead we need to find a way to provide new capabilities in a way that
> does not negatively impact what is already out there on the web.

Ideally, yes. In practice, new capabilities are provided by various specs all the time that negatively impact performance, sometimes even when carefully optimized around. Such is life.

> Within this space I'm sure there are several good solutions.

OK, would those be the ones listed near the beginning of this thread?

> As another general note, I think it's rather unfortunate how many
> different responsibilities are currently handled by XMLHttpRequest.

Sure, we all agree on that. We're somewhat stuck with it, sadly.

> I'm not convinced that we need to worry overly much about legacy
> libraries mishandling .responseArrayBuffer. Any code that tries to
> handle .responseArrayBuffer will by definition be new code and will have
> to deal with the API, whatever that ends up being.

So what you're saying is that code that wants to use .responseArrayBuffer can't be using jQuery. That seems like a somewhat high adoption bar for .responseArrayBuffer, no?

> Code that wants to use .responseText can continue to do so, but it won't
> be able to use .responseArrayBuffer as well. Seems like a pretty simple
> situation as such things go.

I really have the sense I'm not getting through here. You seem to be assuming that a single entity is responsible for all the code that runs on the page. That may be the case at Google. It's commonly NOT the case elsewhere.
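To make the multiple-entities concern concrete, here is a sketch of how a mutually-exclusive design can break when two independent pieces of code share one request: page code opts in to raw bytes, and a library's generic handler that innocently reads the text then throws. This is a hypothetical mock (all names illustrative), not any proposed or real XHR API.

```javascript
// Hypothetical mutually-exclusive design: once raw bytes are requested,
// .responseText throws. Mock object; names are illustrative only.
class MockResponse {
  constructor(bytes) {
    this.bytes = bytes;  // raw response bytes (a Uint8Array)
    this.mode = "text";  // default: text access allowed
  }
  requestArrayBuffer() {
    this.mode = "arraybuffer"; // new code opts in to raw bytes...
    return this.bytes.buffer;
  }
  get responseText() {
    // ...and under this design, any later text access now throws.
    if (this.mode !== "text") {
      throw new Error("responseText unavailable in arraybuffer mode");
    }
    return new TextDecoder().decode(this.bytes);
  }
}

// Entity 1 (page code) opts in to the raw bytes:
const res = new MockResponse(new TextEncoder().encode("hello"));
res.requestArrayBuffer();

// Entity 2 (e.g. a generic library success handler) reads the text,
// which was an entirely reasonable thing to do before entity 1 acted:
let libraryBroke = false;
try {
  res.responseText;
} catch (e) {
  libraryBroke = true;
}
console.log(libraryBroke); // true: the two consumers conflict
```

Neither entity did anything wrong in isolation; the breakage only appears when their code runs against the same object, which is exactly the situation on pages assembled from multiple sources.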
So breakage in one piece of code, caused by something another piece of code did that seemed entirely reasonable, is something we should be trying not to introduce if we can avoid it.

I'm happy to try to find a better solution here if you think there are insurmountable implementation difficulties in supporting the simple and author-intuitive API. I'm happy to complicate the API somewhat if that makes it more implementable. I'm not so happy to make it fragile, though.

-Boris
Received on Friday, 29 October 2010 03:38:31 UTC