
Re: Reading image bytes to a PNG in a typed array

From: Florian Bösch <pyalot@gmail.com>
Date: Sun, 27 Jan 2013 11:23:20 +0100
Message-ID: <CAOK8ODi33d1W74TNoB_Y1SUHp3FMyWtPdDKkhzWBiFcPmOJ3vg@mail.gmail.com>
To: Gregg Tavares <gman@google.com>
Cc: Kyle Huey <me@kylehuey.com>, Glenn Maynard <glenn@zewt.org>, Webapps WG <public-webapps@w3.org>
If we had a context usable in a worker, on which we could set the
framebuffer, bind the texture to it, and readPixels back into bytes, which
we could then stick into a surfaceless 2D canvas to kick off the encoding
to PNG (or just implement it in JS after implementing zlib), then yes, it
could be done in workers.
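The chain above can be sketched as follows. This is only a sketch under the assumption that a WebGL context is available in the worker at all; `gl` and `tex` are placeholder names passed in by the caller, not anything the current API provides there.

```javascript
// Hypothetical: attach a texture to a framebuffer and read its pixels
// back into a Uint8Array. Assumes the caller supplies a working WebGL
// context `gl` and a renderable texture `tex`.
function readTexturePixels(gl, tex, width, height) {
  const fb = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                          gl.TEXTURE_2D, tex, 0);
  const pixels = new Uint8Array(width * height * 4); // RGBA, one byte each
  gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  gl.deleteFramebuffer(fb);
  return pixels;
}
```

From there the bytes would still need a PNG encoder, which is the part that currently forces a round-trip through a canvas.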

In any case, none of these things are available yet. I've meanwhile
implemented the required functionality via a long chain of crutches that
takes six 4092x4092 textures (albedo, normal, specular, specularity,
occlusion, height), packs them into a tarfile, and offers it as a blob
URL to the user. It takes about 30 seconds to pack up, during which the
page is pretty much frozen since it's 100% synchronous. The output is
about 50 MB. But hey, at least it works, after a fashion.
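For reference, the tarfile bookkeeping itself is simple; the cost is all in the synchronous encoding. A minimal sketch of the USTAR size math (each entry is a 512-byte header plus data zero-padded to a 512-byte boundary, and the archive ends with two zero blocks):

```javascript
// Compute the total size of a USTAR archive holding files of the given
// byte lengths. Purely illustrative of the format's block layout.
function tarSize(fileSizes) {
  const BLOCK = 512;
  let total = 0;
  for (const n of fileSizes) {
    total += BLOCK;                         // per-file header block
    total += Math.ceil(n / BLOCK) * BLOCK;  // file data, zero-padded
  }
  return total + 2 * BLOCK;                 // two trailing zero blocks
}
```

The padding overhead is negligible next to the payload, so the archive size is essentially the sum of the encoded textures.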

On Sat, Jan 26, 2013 at 7:18 PM, Gregg Tavares <gman@google.com> wrote:

> Could this be solved in workers?
> x) Create canvas, set to desired size
> x) Create 2D context
> x) Create imageData object
> x) Create a WebGL framebuffer object
> x) Attach texture as color target to framebuffer
> x) read back pixels into canvas2d's imageData.data member
> x) ctx.putImageData into the canvas
> 1) Set CanvasProxy (or whatever it's called) to the size you want
> 2) Draw Texture
> 3) call CanvasProxy's toDataURL('image/png')
> 4) Set the CanvasProxy back to the original size
> 5) snip off the mime/encoding header
> 6) implement base64 decode in JS and decode to Uint8Array
> Fewer steps, and it's now async as well.
> On Wed, Jan 16, 2013 at 8:02 AM, Florian Bösch <pyalot@gmail.com> wrote:
>> Whatever the eventual solution to this problem, it should be the user of
>> the API driving the decision how to get the data.
>> On Wed, Jan 16, 2013 at 4:56 PM, Kyle Huey <me@kylehuey.com> wrote:
>>> On Wed, Jan 16, 2013 at 7:50 AM, Glenn Maynard <glenn@zewt.org> wrote:
>>>> On Wed, Jan 16, 2013 at 9:40 AM, Florian Bösch <pyalot@gmail.com> wrote:
>>>>> Perhaps we should think of a better scheme for exporting data than
>>>>> toFoo(). Maybe toData('url'), toData('arraybuffer'), toData('blob'), or
>>>>> perhaps toData(URL), toData(ArrayBuffer), or toData(Blob). I tend to think
>>>>> that if you're starting to write toA, toB, toC, toX methods on an object,
>>>>> you haven't really thought through what's a parameter and what's a
>>>>> method.
>>>> We should be avoiding the need to return data in a bunch of different
>>>> interfaces in the first place.  If the data is large, or takes a long or
>>>> nondeterministic amount of time to create (eg. something that would be
>>>> async in the UI thread), return a Blob; otherwise return an ArrayBuffer.
>>>>  The user can convert from there as needed.
>>> Well, the problem is that we fundamentally screwed up when we specced
>>> Blob.  It has a synchronous size getter which negates many of the
>>> advantages of FileReader extracting data asynchronously.  For something like
>>> image encoding (that involves compression), where you have to perform the
>>> operation to know the size, Blob and ArrayBuffer are effectively
>>> interchangeable from the implementation perspective, since both require you
>>> to perform the operation up front.
>>> - Kyle
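Step 6 of the quoted proposal (base64 decode to a Uint8Array) is the only part that needs hand-rolling today, and only barely, since atob is exposed in worker scope. A sketch, with the function name being my own:

```javascript
// Hypothetical helper for steps 5-6 above: snip the data-URL header
// ("data:image/png;base64,") and decode the base64 payload into bytes.
// Assumes atob is available (it is in both window and worker scopes).
function dataUrlToBytes(dataUrl) {
  const b64 = dataUrl.slice(dataUrl.indexOf(',') + 1); // drop mime/encoding header
  const bin = atob(b64);
  const bytes = new Uint8Array(bin.length);
  for (let i = 0; i < bin.length; i++) bytes[i] = bin.charCodeAt(i);
  return bytes;
}
```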
Received on Sunday, 27 January 2013 10:23:48 UTC
