[whatwg] Offscreen canvas (or canvas for web workers).

On Sun, Mar 14, 2010 at 1:43 AM, Maciej Stachowiak <mjs at apple.com> wrote:
>
> On Mar 13, 2010, at 12:30 PM, Jonas Sicking wrote:
>
>> On Sat, Mar 13, 2010 at 12:09 PM, Oliver Hunt <oliver at apple.com> wrote:
>>>
>>> On Mar 13, 2010, at 9:10 AM, Jonas Sicking wrote:
>>>>
>>>> There is a use case, which I suspect is quite common, for using
>>>> <canvas> to manipulate files on the users file system. For example
>>>> when creating a photo uploader which does client side scaling before
>>>> uploading the images, or for creating a web based GIMP like
>>>> application.
>>>>
>>>> In this case we'll start out with a File object that needs to be read
>>>> in to a <canvas>. One solution could be to read the File into memory
>>>> in a ByteArray (or similar) and add a synchronous
>>>> canvas2dcontext.fromByteArray function. This has the advantage of
>>>> being more generic, but the downside of forcing both the encoded and
>>>> decoded image to be read into memory.
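
(For concreteness, a rough sketch of how that synchronous approach might
be used -- readFileIntoByteArray and fromByteArray are hypothetical
placeholder names for the pieces described above, not existing APIs:)

  // Hypothetical sketch only; neither function below is specified anywhere.
  var bytes = readFileIntoByteArray(someFileObject);  // encoded file in memory
  var canvas = document.createElement('canvas');
  var ctx = canvas.getContext('2d');
  ctx.fromByteArray(bytes);  // decode synchronously into the canvas
  // At this point both the encoded bytes and the decoded pixels are held
  // in memory, which is the downside noted above.
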
>>>
>>> Honestly I think a nice and consistent way for this to work would simply
>>> be to support
>>> someImage.src = someFileObject
>>>
>>> Which would be asynchronous, and support all the image formats the
>>> browser already supports.
>>
>> That is already possible:
>>
>> someImage.src = someFileObject.urn;
>>
>> However this brings us back to the very long list of steps I listed
>> earlier in this thread.
>
> I think it is cleaner to have an asynchronous image load operation (as shown
> above) and then a synchronous image paint operation, rather than to
> introduce an asynchronous paint operation directly on the 2D context.
>
> I don't think there is any sane way to add an asynchronous draw command to
> the 2D context, given that all the existing drawing commands are
> synchronous. What happens if you do an async paint of a File, followed by
> synchronous painting operations? It seems like the only options are to force
> synchronous I/O, give unpredictable results, or break the invariants on
> current drawing operations (i.e. the guarantee that they are complete by the
> time you return to the event loop and thus canvas updates are atomic).
>
> Separating the async I/O from drawing allows the 2D context to remain 100%
> synchronous and thus to have sane semantics.
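
(For illustration, that split looks roughly like the following; the .urn
property is the one from the quoted snippet above, the rest is the
existing img/canvas API:)

  var img = new Image();
  img.onload = function () {
    var canvas = document.createElement('canvas');
    canvas.width = img.width;
    canvas.height = img.height;
    // Synchronous paint; no asynchronous operation ever touches the
    // 2D context itself.
    canvas.getContext('2d').drawImage(img, 0, 0);
  };
  img.src = someFileObject.urn;  // asynchronous image load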

One way to do it would be to have a function somewhere, not
necessarily on the 2D context, which, given a Blob, returns an
ImageData object. However this still results in the image being loaded
into memory twice (once encoded, once decoded), so it would only really
help if you want to operate on an ImageData object directly.
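
Roughly, usage of such a function might look like this
(getImageDataFromBlob is just a placeholder name):

  getImageDataFromBlob(someBlob, function (imageData) {
    // Operate on the raw pixels directly...
    var pixels = imageData.data;
    for (var i = 0; i < pixels.length; i += 4) {
      pixels[i] = 255 - pixels[i];  // e.g. invert the red channel
    }
    // ...and paint the result synchronously if needed.
    var canvas = document.createElement('canvas');
    canvas.width = imageData.width;
    canvas.height = imageData.height;
    canvas.getContext('2d').putImageData(imageData, 0, 0);
  });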

> I think the number of steps is not the primary concern here. The issue
> driving the proposal for offscreen canvas is responsiveness - i.e. not
> blocking the main thread for a long time. It seems to me that the number of
> steps is not the main issue for responsiveness, but rather whether there are
> operations that take a lot of CPU and are done synchronously, and therefore,
> whether it is worthwhile to farm some of that work out to a Worker. I/O is
> not really a major consideration because we already have ways to do
> asynchronous I/O.

I agree that the number of steps is not important for responsiveness
or performance (though it is for complexity). However, several of those
steps seemed to involve a non-trivial amount of CPU usage, which was the
concern expressed in my initial mail.

At the very least I think we have a skewed proposal. The main use
cases that have been brought up are scaling and rotating images.
However, the proposal is far from optimal for fulfilling those use cases.
For scaling, it's fairly complex and uses more CPU cycles, both on the
main thread and in total, than would be needed with an API more
optimized for that use case. For rotating, it doesn't help at all.
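
(Purely as a strawman -- scaleImage and uploadToServer are made-up names,
not proposals -- an API aimed directly at the scaling case could let a
single call decode, scale and re-encode entirely off the main thread:)

  scaleImage(someFileObject, { maxWidth: 1024, maxHeight: 1024 },
             function (scaledBlob) {
               // Upload the already-scaled result without ever holding
               // full-size pixel data on the main thread.
               uploadToServer(scaledBlob);
             });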

/ Jonas

Received on Sunday, 14 March 2010 18:22:59 UTC