- From: Andrew Sutherland <notifications@github.com>
- Date: Mon, 16 Apr 2018 11:40:33 -0700
- To: w3c/IndexedDB <IndexedDB@noreply.github.com>
- Cc: Subscribed <subscribed@noreply.github.com>
- Message-ID: <w3c/IndexedDB/issues/234/381707058@github.com>
`commit()` as described at https://gist.github.com/inexorabletash/d55a6669a040e92e47c6 seems more appealing and versatile. Being unable to benefit from the optimization when your transaction first needs to assert something about the state of the database with a read limits its utility. And a mode in which events that would normally fire no longer fire, and calls that would never normally throw suddenly do, doesn't help from a complexity perspective for an API many already view as overly complex.

It would of course be useful to know that a `commit()` is coming before the requests start being issued. Perhaps `commit()` could be paired with something like `beginBatch()`, hinting to the implementation that it may buffer all subsequent request responses until the eventual `commit()` (or an optional `flush()`) is processed; see the sketch below. At least in Gecko's implementation, this would let us optimize heavily against jank as long as no listeners were added to the requests.

Obviously, if 100 requests are dispatched and each has a listener, dispatching all 100 in a single go just before the complete/abort event fires is potentially much worse than the status quo. That said, `beginBatch()` should still allow implementations latitude to spread the 100 responses across multiple logical tasks/events to avoid dominating the event loop.
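A minimal sketch of how the pairing might look. This assumes an open `IDBDatabase` named `db` with an object store `"store"` keyed on `id`; `beginBatch()` and `flush()` are the hypothetical hints discussed above (not part of IndexedDB), and `commit()` is the gist's proposed explicit commit:

```js
const tx = db.transaction("store", "readwrite");
const store = tx.objectStore("store");

// First, assert something about database state with a read; an
// auto-committing "fast mode" could not accommodate this step.
const check = store.get("schema-version");
check.onsuccess = () => {
  if (check.result !== 2) {
    tx.abort(); // precondition failed; no writes happen
    return;
  }

  // Hypothetical: hint that a commit() is coming, so the implementation
  // may buffer all subsequent request responses instead of dispatching
  // a success event per request.
  tx.beginBatch();
  for (let i = 0; i < 100; i++) {
    store.put({ id: i, payload: "..." }); // no listeners attached
  }
  // Responses buffered since beginBatch() can now be resolved lazily;
  // only the transaction's complete event needs to fire promptly.
  tx.commit();
};
tx.oncomplete = () => console.log("batch committed");
```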
Received on Monday, 16 April 2018 18:40:56 UTC