- From: Garrett Smith <dhtmlkitchen@gmail.com>
- Date: Mon, 11 Aug 2008 23:21:14 -0700
- To: "Oliver Hunt" <oliver@apple.com>
- Cc: "Web Applications Working Group WG" <public-webapps@w3.org>
On Mon, Aug 11, 2008 at 9:04 PM, Oliver Hunt <oliver@apple.com> wrote:
>>> Sorry, the "and browser" at the end was a typo. I meant to say, "in the
>>> browser". The reason synchronous access to the disk is a bad idea is
>>> that if the operation takes too long, a big file, a slow network home
>>> directory,
>>
>> Then:
>>
>> function readFile(file) {
>>     // 1. Check the fileSize property.
>>     if (file.fileSize > 100000) {
>>         generateFileWarning(file);
>>         return;
>>     }
>>     // 2. Read file asynchronously.
>>     setTimeout(readFile, 1);
>> }
>>
>> seems to completely address the problems you mentioned in only a few
>> lines of code.
>
> Alas, this does not resolve the problem, as you are making the implicit
> assumption that because a 100k file access may be fine on your local
> system, or even on a network drive on your local network, it will be
> fine everywhere; it may not be for someone using a network drive over
> the net or on an otherwise slow connection (this isn't necessarily an
> uncommon scenario: a person using a VPN on a 56k modem could hit this,
> or a person with a poor wifi connection, etc.). The file limit being
> 100k is a magic number that could be replaced by any arbitrary value,
> but the simplest way to break any size assumption is to consider what
> would happen if the file you were attempting to load were on a network
> drive from a server that just fell over.

I missed the point with the server falling over.

> In general, APIs that require the developer to make this sort of
> decision are poor, as they tend to result in code that always uses the
> synchronous API (because it's "easy" and works fine on the developer's
> system), even though it may not be even remotely usable for many people.
> The other scenario is that the developer does do some testing, but
> doesn't test every possible configuration, once again resulting in some
> people being unable to use the site/application.

The other alternative is to have the file sent to the server. The
downside here is when the user does not know that the jpg file he
uploads is over 1 MB. There's a round trip, and it makes for a slow user
experience. That's where alert("Your file is too big.") provides faster
feedback than a round trip to the server (with a very large file on the
outbound leg).

> The other problem is that setTimeout does not result in async javascript
> execution; it merely delays the synchronous execution of a script.

I've just tried to upload a 1.1 MB log file from my hard drive and had no
issue reading it, using Firefox on an older Mac. Reading a 46 MB file was
slow; I somewhat expected that. Uploading large files will take time.

> In your example you are returning from readFile (to prevent your code
> from blocking the UI), and then the moment the timer is triggered a
> synchronous read of a definitely large file will occur, resulting in
> the UI being blocked.

That is not what the code was intended to do. I realize that I had a
recursive call in the function readFile. The intention was:

    setTimeout(getFileContents, 1);

A useful error might be: "Error: file size over allowed limit." Or, in
another context: "Warning: this file may take time to upload."
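For clarity, here is a minimal sketch of what the corrected function would
look like, assuming generateFileWarning and getFileContents are defined
elsewhere (getFileContents being the hypothetical function that performs
the actual read):

    function readFile(file) {
        // 1. Check the fileSize property and give early feedback.
        //    100000 is still an arbitrary magic number, as noted above.
        if (file.fileSize > 100000) {
            generateFileWarning(file); // e.g. "Error: file size over allowed limit."
            return;
        }
        // 2. Defer the read so readFile itself returns immediately.
        //    Note: this does not make the read asynchronous; it only
        //    delays a synchronous read until the timer fires.
        setTimeout(function() {
            getFileContents(file);
        }, 1);
    }

Even with the recursion removed, the read that getFileContents performs
would still block the UI once the timer fires, which I take to be your
point about needing a real async API.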
> The only way to prevent such UI blocking is to have an async API
> regardless of whether you have a synchronous one, meaning that the
> synchronous API will only exist either to increase complexity (as
> developers will need to implement logic to fall back to, and implement,
> the async I/O in addition to the synchronous I/O logic, as your above
> example would need to) or to produce sites that fail to account for
> non-fast I/O (which thus destroys the end user experience).

Sounds like async reads would avoid the problem of locking the UI. Why
don't you post up your ideas?

Garrett

> --Oliver
Received on Tuesday, 12 August 2008 06:21:49 UTC