Re: [whatwg] Adding features needed for WebGL to ImageBitmap

On Fri, Jul 12, 2013 at 7:18 AM, Justin Novosad <junov@google.com> wrote:

> Thanks Ken, that makes it much clearer to me.
>
> The main concern I have with all this is the potential for OOM crashes.
>  I'm happy as long as the spec remains vague about what "undue latency"
> means, so we still have the possibility of gracefully degrading performance
> in low memory conditions by evicting prepared/decoded buffers when
> necessary, to only hang on to compressed copies in the in-memory resource
> cache.  As far as the decode cache is concerned, the idea I have is to give
> priority to pixel buffers that are held by ImageBitmap objects, and only
> evict them as a last resort.
>

So, you are OK with storing the image data twice?
This will make Canvas 2D a lot slower: it never had to care about these
options before, so an ImageBitmap didn't need to keep the original data
around. I'd rather see an exception thrown when the options are not
Canvas 2D-compatible than double the memory and introduce extra processing.
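
To make the alternative concrete, here's a rough sketch of what I mean.
(The option names and the promise-returning createImageBitmap are
illustrative assumptions only, not anything in the current spec.)

    var image = document.querySelector("img");
    var glCanvas = document.createElement("canvas");
    var canvas2d = document.createElement("canvas");

    createImageBitmap(image, { premultiplyAlpha: false, flipY: true })
      .then(function (bitmap) {
        // A WebGL upload can honor the requested treatments directly.
        var gl = glCanvas.getContext("webgl");
        gl.bindTexture(gl.TEXTURE_2D, gl.createTexture());
        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE,
                      bitmap);

        // Rather than silently keeping a second, premultiplied copy of the
        // pixels around, drawing a non-premultiplied bitmap to a 2D context
        // could simply throw.
        var ctx = canvas2d.getContext("2d");
        ctx.drawImage(bitmap, 0, 0);
      });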



>
> On Thu, Jul 11, 2013 at 4:24 PM, Kenneth Russell <kbr@google.com> wrote:
>
>> On Thu, Jul 11, 2013 at 8:29 AM, Justin Novosad <junov@google.com> wrote:
>> >
>> >
>> > On Wed, Jul 10, 2013 at 9:37 PM, Rik Cabanier <cabanier@gmail.com> wrote:
>> >>
>> >> On Wed, Jul 10, 2013 at 5:07 PM, Ian Hickson <ian@hixie.ch> wrote:
>> >>
>> >> > On Wed, 10 Jul 2013, Kenneth Russell wrote:
>> >> > >
>> >> > > ImageBitmap can cleanly address all of the desired use cases simply
>> >> > > by adding an optional dictionary of options.
>> >> >
>> >> > I don't think that's true. The options only make sense for WebGL --
>> >> > flipping which pixel is the first pixel, for example, doesn't do
>> >> > anything to 2D canvas, which works at a higher level.
>> >> >
>> >> > (The other two options don't make much sense to me even for GL. If you
>> >> > don't want a color space, don't set one. If you don't want an alpha
>> >> > channel, don't set one. You control the image, after all.)
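
(For context: the three treatments being discussed already exist in WebGL
as per-upload pixel-store flags, and each upload picks its own combination
of them. Roughly, assuming 'gl' is a WebGL context and 'image' a loaded
image element:

    gl.bindTexture(gl.TEXTURE_2D, gl.createTexture());
    // Flip which row of pixels is treated as the first row.
    gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, true);
    // Keep the alpha channel non-premultiplied.
    gl.pixelStorei(gl.UNPACK_PREMULTIPLY_ALPHA_WEBGL, false);
    // Skip the browser's color space conversion.
    gl.pixelStorei(gl.UNPACK_COLORSPACE_CONVERSION_WEBGL, gl.NONE);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);

Three independent settings, which is where the 2^3 combinations mentioned
further down come from.)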
>> >> >
>> >> >
>> >> > > I suspect that in the future some options will be desired even for
>> >> > > the 2D canvas use case, and having the dictionary already specified
>> >> > > will make that easier. There is no need to invent a new primitive
>> >> > > and means of loading it.
>> >> >
>> >> > If options make sense for 2D canvas, then having ImageBitmap options
>> >> > would make sense, sure.
>> >> >
>> >> >
>> >> yeah, these options seem a bit puzzling.
>> >> From the spec:
>> >>
>> >> An ImageBitmap object represents a bitmap image that can be painted to
>> >> a canvas without undue latency.
>> >>
>> >> note: The exact judgement of what is undue latency of this is left up
>> >> to the implementer, but in general if making use of the bitmap requires
>> >> network I/O, or even local disk I/O, then the latency is probably
>> >> undue; whereas if it only requires a blocking read from a GPU or system
>> >> RAM, the latency is probably acceptable.
>> >>
>> >> It seems that people see the ImageBitmap as something that doesn't just
>> >> represent in-memory pixels but that those pixels are also preprocessed
>> >> so they can be drawn quickly. The latter is not in the spec.
>> >>
>> >> I think authors will be very confused by these options. What would it
>> >> mean to pass a non-premultiplied ImageBitmap to a canvas object? Would
>> >> the browser have to add code to support it, or is it illegal?
>> >> Maybe it's easier to add an optional parameter to createImageBitmap to
>> >> signal whether the ImageBitmap is for WebGL or for Canvas, and disallow
>> >> a Canvas ImageBitmap in WebGL and vice versa.
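
(A sketch of that idea -- the "usage" option name and the promise-returning
createImageBitmap are made up purely to illustrate; 'gl' and 'ctx' are
assumed WebGL and 2D contexts:

    createImageBitmap(image, { usage: "webgl" }).then(function (bitmap) {
      // Uploading to WebGL would be allowed (assumes a texture is bound)...
      gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE,
                    bitmap);
      // ...but drawing a "webgl" bitmap to a 2D context would be rejected.
      ctx.drawImage(bitmap, 0, 0);
    });
)
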
>> >
>> >
>> > You are implying a pretty heavy imposition as to what constitutes undue
>> > latency.
>> > I think the spec should stay away from forcing implementations to pin
>> > decoded image buffers in RAM (or on the GPU), so that the browser may
>> > have some latitude in preventing out-of-memory exceptions. In its
>> > current form, the spec implies that it would be acceptable for an
>> > implementation to discard the decoded buffer and only retain the
>> > resource in encoded form in RAM. Do we really need to make further
>> > optimizations explicit? For example, an implementation could prepare
>> > the image data for use with WebGL the first time it is drawn to WebGL,
>> > and keep it cached in that state. If the same ImageBitmap is
>> > subsequently drawn to a 2D canvas, then it would use the non-WebGLified
>> > copy, which may be cached, or may require re-decoding the image. No big
>> > deal.
>>
>> The step of preparing the image for use, either with WebGL or 2D
>> canvas, is expensive. Today, this step is necessarily done
>> synchronously when an HTMLImageElement is uploaded to WebGL. The
>> current ImageBitmap proposal would still require this synchronous
>> step, so for WebGL at least, it provides no improvement over the
>> current HTML5 APIs. A major goal of ImageBitmap was to allow Web
>> Workers to load them, and even this ability currently provides no
>> advantage over HTMLImageElement.
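
(Concretely, the synchronous path looks roughly like this today, assuming
'gl' is a WebGL context and that the image element or bitmap is accepted as
a texImage2D source:

    var tex = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, tex);
    // Decoding, flipping, premultiplication and color space conversion all
    // happen inside this call, blocking the calling thread.
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE,
                  imageOrBitmap);

Creating the ImageBitmap on a worker doesn't help if this conversion work
still has to happen at upload time on the main thread.)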
>
>
>> > Fundamental question: Do we really need the caller to be able to specify
>> > what treatments need to be applied to prepare an image for WebGL, or is
>> > it always possible to figure that out automatically?
>>
>> It is never possible to figure out automatically how the image needs
>> to be treated when preparing it for use with WebGL. I'm not sure where
>> that idea came from. On the contrary, there are eight possibilities
>> (2^3), and different applications require different combinations.
>>
>> -Ken
>>
>
>

Received on Sunday, 14 July 2013 23:34:31 UTC