[whatwg] <canvas> feedback

(Note: I started responding to this feedback last week, so this is missing 
responses to feedback sent in the last few days. Sorry about that. I'll 
get to that feedback in due course as well!)

On Mon, 3 Mar 2014, Justin Novosad wrote:
> 
> Say you create a new document using 
> document.implementation.createHTMLDocument(), you get a document without 
> a browsing context. This means that style and layout will never be 
> calculated on the document.  Some of those calculations are context 
> dependent, so they can't even be resolved.  Now, what about canvas 
> elements? If JS code draws to a canvas that is in a document with no 
> browsing context, what should happen?

It should draw. In theory, anywhere in the canvas API where it depends on 
computed styles, it has prose saying what should happen if the computed 
style cannot be used. This is needed for display:none canvases, for 2D 
contexts in workers, and for the case you describe.


> For example, there is no locale for font family resolution

I'm not clear on what you mean by "locale" here. What is the locale that a 
displayed <canvas> in a Document in a browsing context has, that a 
non-displayed <canvas> outside a Document and without a browsing context 
does not have?


> and it is not possible to resolve font sizes in physical length units 
> unless the document is associated with a view.

Why not? The canvas has a pixel density (currently always 1:1), no?


> My 2 cents: specifying fallback behaviors for all use cases that are 
> context dependent could be tedious and I have yet to see a real-world 
> use case that requires being able to paint a canvas in a frame-less 
> document. Therefore, I think the spec should clearly state <canvas> 
> elements that are in a document without a browsing context are unusable.  
> Not sure what the exact behavior should be though.  Should an exception 
> be thrown upon trying to use the rendering context? Perhaps canvas draws 
> should fail silently, and using the canvas as an image source should 
> give transparent black pixels?

As far as I can tell, this is all already specified, and it just gets 
treated like a normal canvas.


On Wed, 5 Mar 2014, Rik Cabanier wrote:
> 
> Testing all browsers (except IE, since 
> document.implementation.createHTMLDocument() doesn't work there), they 
> seem to handle canvas contexts with no browsing context except when you 
> use text: Chrome crashes, Firefox throws an exception, and Safari draws 
> the text at a very small scale.

I don't really understand why this is problematic in practice. What does a 
browsing context provide that is needed for rendering text that a user 
agent couldn't fake for itself in other contexts? We're definitely going 
to need text in worker canvases.


On Thu, 6 Mar 2014, Justin Novosad wrote:
> 
> Thanks for checking.  The reason I started this thread is that I just 
> recently solved the crash in Chrome, and I wasn't satisfied with my 
> resolution.  I just added an early exit, so Chrome 35 will fail silently 
> on calls that depend on style resolution when the canvas has no browsing 
> context.  So now we have three different behaviors. Yay!
> 
> I don't think the Safari behavior is the right thing to do because it 
> will never match the developer's intent.

I agree. The developer's intent is that text be drawn as specified in the 
API. Why would we do anything else?


On Wed, 12 Mar 2014, Rik Cabanier wrote:
> On Wed, Mar 12, 2014 at 3:44 PM, Ian Hickson wrote:
> > On Thu, 28 Nov 2013, Rik Cabanier wrote:
> > > On Thu, Nov 28, 2013 at 8:30 AM, Jürg Lehni wrote:
> > > >
> > > > I meant to say that I think it would make more sense if the 
> > > > path was in the current transformation matrix, so it would 
> > > > represent the same coordinate values in which it was drawn, and 
> > > > could be used in the same 'context' of transformations applied to 
> > > > the drawing context later on.
> > >
> > > No worries, it *is* confusing. For instance, if you emit coordinates 
> > > and then scale the matrix by 2, those coordinates from 
> > > getCurrentPath will have a scale of .5 applied.
> >
> > That's rather confusing, and a pretty good reason not to have a way to 
> > go from the current default path to an explicit Path, IMHO.
> >
> > Transformations affect the building of the current default path at 
> > each step of the way, which is really a very confusing API. The Path 
> > API on the other hand doesn't have this problem -- it has no 
> > transformation matrix. It's only when you use Path objects that they 
> > get transformed.
> 
> This happens transparently to the author so it's not confusing.

I've been confused by it multiple times over the years, and I wrote the 
spec. I am confident in calling it confusing.


> For instance:
> 
> ctx.rect(0,0,10,10);
> ctx.scale(2,2); <- should not affect geometry of the previous rect
> ctx.stroke(); <- lineWidth is scaled by 2, but rect is still 10x10

It's confusing because it's not at all clear why this doesn't result in 
two rectangles of different sizes:

 ctx.rect(0,0,10,10);
 ctx.scale(2,2);
 ctx.stroke();
 ctx.scale(2,2);
 ctx.stroke();

...while this does:

 ctx.rect(0,0,10,10);
 ctx.scale(2,2);
 ctx.stroke();
 ctx.beginPath();
 ctx.rect(0,0,10,10);
 ctx.scale(2,2);
 ctx.stroke();

It appears to be the same path in both cases, after all.
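The difference can be modelled in a few lines of plain JavaScript (a sketch of the semantics, not the real canvas API): the current default path bakes the CTM into each coordinate as it is added, so the same lineTo() call means different things after a scale(), whereas a Path2D-style object stores coordinates untransformed and only gets transformed when it is used.

```javascript
// Minimal model of the "current default path" behaviour discussed above.
// makeCtx() is a hypothetical stand-in, with a single scale factor in
// place of the full transformation matrix.
function makeCtx() {
  return {
    scaleFactor: 1,
    defaultPath: [],          // points stored in *device* space
    scale(s) { this.scaleFactor *= s; },
    lineTo(x, y) {            // CTM applied at path-building time
      this.defaultPath.push([x * this.scaleFactor, y * this.scaleFactor]);
    },
  };
}

const ctx = makeCtx();
ctx.lineTo(10, 10);   // recorded as (10, 10)
ctx.scale(2);
ctx.lineTo(10, 10);   // recorded as (20, 20) -- same call, different point

// A Path2D-style object has no CTM of its own: the transform is applied
// once, when the path is filled or stroked, so identical building calls
// always describe the same geometry.
```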


> > > > So this is not how most implementations currently have it defined.
> > >
> > > I'm unsure what you mean. Browser implementations? If so, they 
> > > definitely do store the path in user coordinates. The spec currently 
> > > says otherwise [1] though.
> >
> > I'm not sure what you're referring to here.
> 
> All graphics backends for canvas that I can inspect, don't apply the CTM 
> to the current path when you call a painting operator. Instead, the path 
> is passed as segments in the current CTM and the graphics library will 
> apply the transform to the segments.

Right. That's what the spec says too, for the current default path. This 
is the confusing behaviour to which I was referring. The "Path" API (or 
Path2D or whatever we call it) doesn't have this problem.


> > > Another use case is to allow authors to quickly migrate to hit regions.
> > >
> > > ctx.beginPath(); ctx.lineTo(...); ...; ctx.fill();
> > > ... // lots of complex drawing operation for a control
> > > ctx.beginPath(); ctx.lineTo(...); ...; ctx.stroke();
> > >
> > >
> > > To migrate that to a region (with my proposed shape interface [1]):
> > >
> > > var s = new Shape();
> > >
> > > ctx.beginPath(); ctx.lineTo(...); ...; ctx.fill();
> > > s.add(new Shape(ctx.currentPath));
> > > ...
> > > ctx.beginPath(); ctx.lineTo(...); ...; ctx.stroke();
> > > s.add(new Shape(ctx.currentPath, ctx.currentDrawingStyle));
> > >
> > > ctx.addHitRegion({shape: s, id: "control"});
> >
> > Why not just add ctx.addHitRegion() calls after the fill and stroke calls?
> 
> That does not work as the second addHitRegion will remove the control and
> id from the first one.
> The 'add' operation is needed to get a union of the region shapes.

Just use two different IDs with two different addHitRegion() calls. That's 
a lot less complicated than having a whole new API.


> > On Fri, 6 Dec 2013, Jürg Lehni wrote:
> > >
> > > Instead of using getCurrentPath and setCurrentPath methods as a 
> > > solution, this could perhaps be solved by returning the internal 
> > > path instead of a copy, but with a flag that would prevent further 
> > > alterations on it.
> > >
> > > The setter of the currentPath accessor / data member could then make 
> > > the copy instead when a new path is to be set.
> > >
> > > This would also make sense from a caching point of view, where 
> > > storing the currentPath for caching might not actually mean that it 
> > > will be used again in the future (e.g. because the path's geometry 
> > > changes completely on each frame of an animation), so copying only 
> > > when setting would postpone the actual work of having to make the 
> > > copy, and would help memory consumption and performance.
> >
> > I don't really understand the use case here.
> 
> Jurg was just talking about an optimization (so you don't have to make 
> an internal copy)

Sure, but that doesn't answer the question of what the use case is.


On Wed, 12 Mar 2014, Rik Cabanier wrote:
> > > >
> > > > You can do unions and so forth with just paths, no need for 
> > > > regions.
> > >
> > > How would you do a union with paths? If you mean that you can just 
> > > aggregate the segments, sure but that doesn't seem very useful.
> >
> > You say, here are some paths, here are some fill rules, here are some 
> > operations you should perform, now give me back a path that describes 
> > the result given a particular fill rule.
> 
> I think you're collapsing a couple of different concepts here:
> 
> path + fillrule -> shape
> union of shapes -> shape
> shape can be converted to a path

I'm saying "shape" is an unnecessary primitive. You can do it all with 
paths.

   union of (path + fillrule)s -> path


> > A shape is just a path with a fill rule, essentially.
> 
> So, a path can now have a fillrule? Sorry, that makes no sense.

I'm saying a shape is just the combination of a fill rule and a path. The 
path is just a path, the fill rule is just a fill rule.


> > Anything you can do
> > with one you can do with the other.
> 
> You can't add segments from one shape to another as shapes represent
> regions.
> Likewise, you can't union, intersect or xor path segments.

But you can union, intersect, or xor lists of pairs of paths and 
fillrules.
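As a sketch of why (path, fillrule) pairs are a sufficient primitive, here is a plain-JavaScript point classifier for a polygon under both fill rules, plus a union that just ORs insideness. The names `crossings` and `insideUnion` are hypothetical, and paths are simplified to flat polygons (arrays of [x, y] vertices); this is an illustration of the idea, not the spec's algorithm.

```javascript
// Cast a ray from (x, y) towards x = +infinity and classify the point
// under both the non-zero winding rule and the even-odd rule.
function crossings(poly, x, y) {
  let winding = 0, parity = 0;
  for (let i = 0; i < poly.length; i++) {
    const [x1, y1] = poly[i], [x2, y2] = poly[(i + 1) % poly.length];
    if ((y1 <= y) !== (y2 <= y)) {        // edge crosses the ray's y level
      const xAt = x1 + (y - y1) / (y2 - y1) * (x2 - x1);
      if (xAt > x) { parity ^= 1; winding += y2 > y1 ? 1 : -1; }
    }
  }
  return { nonzero: winding !== 0, evenodd: parity === 1 };
}

// A "shape" is just {poly, rule}; the union of shapes is the OR of
// each shape's own insideness test.
function insideUnion(shapes, x, y) {
  return shapes.some(s => crossings(s.poly, x, y)[s.rule]);
}
```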


> > > > > The path object should represent the path in the graphics state. 
> > > > > You can't add a stroked path or text outline to the graphics 
> > > > > state and then fill/stroke it.
> > > >
> > > > Why not?
> > >
> > > As designed today, you could fill it, as long as you use non-zero 
> > > winding. If you use even-odd, the results will be very wrong. (ie 
> > > where joins and line segments meet, there will be white regions)
> >
> > I think "wrong" here implies a value judgement that's unwarranted.
> 
> "Wrong" meaning:
> if the author has a bunch of geometry and wants to put it in 1 path object
> so he can just execute 1 fill operation, he might be under the impression
> that "adding" the geometry will just work.

Well, sure, an author might be under any number of false impressions.

The API has a way for a bunch of paths to be merged with a single fillrule 
to generate a new path with no crossing subpaths (which is also fillrule 
agnostic), essentially giving you the union of the shapes represented by 
those paths interpreted with that fillrule.


> There are very few use cases where you want to add partial path segments
> together but I agree that there are some cases that it's useful to have.

I disagree that there are few such cases. Pretty much any time you create a 
path, you are adding partial path segments together. Whether you do so 
using one Path object all at once or multiple Path objects that you later 
add together is just a matter of programming style.


> > > Stroking will be completely wrong too, because joins and end caps 
> > > are drawn separately, so they would be stroked as separate paths. 
> > > This will not give you the effect of a double-stroked path.
> >
> > I don't understand why you think joins and end caps are drawn 
> > separately. That is not what the spec requires.
> 
> Sure it does, for instance from 
> http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#trace-a-path 
> :
> 
> The round value means that a filled arc connecting the two 
> aforementioned corners of the join, abutting (and not overlapping) the 
> aforementioned triangle, with the diameter equal to the line width and 
> the origin at the point of the join, must be added at joins.
> 
> If you mean, "drawn with a separate fill call", yes that is true.
> What I meant was that they are drawn as a separate closed path that will
> interact with other paths as soon as there are different winding rules or
> "holes".

The word "filled" is a bit misleading here (I've removed it), but I don't 
see why that led you to the conclusion you reached. The step in question 
begins with "Create a new path that describes the edge of the areas that 
would be covered if a straight line of length equal to the styles 
lineWidth was swept along each path in path while being kept at an angle 
such that the line is orthogonal to the path being swept, replacing each 
point with the end cap necessary to satisfy the styles lineCap attribute 
as described previously and elaborated below, and replacing each join with 
the join necessary to satisfy the styles lineJoin type, as defined below", 
which seems pretty unambiguous.
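For a single straight segment (before caps and joins are attached), the sweep described in that step reduces to offsetting the segment by ±lineWidth/2 along its normal. A small sketch, with `strokeQuad` a hypothetical helper name:

```javascript
// Outline of the area covered by sweeping a line of length lineWidth,
// kept orthogonal to the path, along one straight segment: a rectangle
// offset by half the line width on either side of the segment.
function strokeQuad(x1, y1, x2, y2, lineWidth) {
  const dx = x2 - x1, dy = y2 - y1, len = Math.hypot(dx, dy);
  const nx = -dy / len * lineWidth / 2;   // unit normal scaled by lineWidth/2
  const ny =  dx / len * lineWidth / 2;
  return [[x1 + nx, y1 + ny], [x2 + nx, y2 + ny],
          [x2 - nx, y2 - ny], [x1 - nx, y1 - ny]];
}
```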


> > > > We seem to be going around in circles. We're in agreement that 
> > > > eventually we should add APIs for combining paths such that we get 
> > > > the equivalent of the union of their fill regions. I agree that 
> > > > converting text into paths is non-trivial (lots of stuff browsers 
> > > > do is non-trivial, that's kind of the point -- if it was trivial, 
> > > > we could leave it for authors). But I don't see how we get from 
> > > > there to you wanting the existing APIs removed.
> > >
> > > I want them removed because they will most likely not behave in the 
> > > way that an author expects. When he "adds" 2 paths, he wouldn't 
> > > expect that there is 'interference' between them.
> >
> > I don't see why not. It's exactly what happens today if you were to 
> > just add the same path creation statements together into the current 
> > default path and fill or stroke that.
> 
> Sure but who does that?

It's how all paths are built, as far as I can tell. I don't see how else 
you could build a path that consists of more than one line.

addPath() is useful for shifting a path according to a transform.
addPathByStrokingPath() is for creating a stroked path.
addText() is for writing text.

I don't see how removing any of them is a win.


> > > > On Mon, 4 Nov 2013, Rik Cabanier wrote:
> > > > >
> > > > > However, for your example, I'm unsure what the right solution 
> > > > > is. The canvas specification is silent on what the behavior is 
> > > > > for non-invertible matrices.
> > > >
> > > > What question do you think the spec doesn't answer?
> > > >
> > > > > I think setting scale(0,0) or another matrix operation that is 
> > > > > not reversible, should remove drawing operations from the state 
> > > > > because: - how would you stroke with such a matrix?
> > > >
> > > > You'd get a point.
> > >
> > > How would you get a point? The width is scaled to 0.
> >
> > That's how you get a point -- scale(0,0) essentially reverts 
> > everything to a zero dimensional point.
> 
> OK, but the width of the point is also transformed to 0 so you get 
> nothing.

Points are always zero-width, by definition.


> We've gone over this several times now.
> The APIs that you define have use cases, and I agree with them.
> However, the way you defined those APIs does not make sense and will not
> give the result that authors want.

The way to make this point would be to start from the use case, describe 
the desired effect, show the "obvious" way to achieve this using the API, 
and then demonstrate how it doesn't match the desired effect.


> What you specified there is called "planarization". This is when you
> calculate the intersections within and between closed shapes and remove the
> line segments that are filled on both sides.
> By specifying this:
> 
> The subpaths in merged path must be oriented such that for any point, the
> number of times a half-infinite straight line drawn from that point crosses
> a subpath is even if and only if the number of times a half-infinite
> straight line drawn from that same point crosses a subpath going in one
> direction is equal to the number of times it crosses a subpath going in the
> other direction.
> 
> and relying on segment removal, you also get the same fill behavior for
> even-odd. (Meaning that the end result can be used with either winding rule)
> This is not something that is needed for just text but also when you do a
> union of shapes.
> 
> The bad news is that this algorithm is very expensive and there are few
> libraries that do a decent job (I only know of 1).
> So, it's not realistic to add this to the Path2D object.

I don't really see why it's unrealistic. In most cases, the user agent 
doesn't actually have to do any work -- e.g. if all that you're doing is 
merging two paths so that you can fill them simultaneously later, the UA 
can just keep the two paths as is and, when necessary, fill them.

For cases where you really want to have this effect -- e.g. when you want 
to get the outline of the dashed outline of text -- then I don't really 
see any way to work around it.


> The reason for that is that even though a UA could emulate the union by 
> doing multiple fill operations, Path2D allows you to stroke another path 
> object. At that point, you really have to do planarization. By defining 
> a Shape2D object and not allowing it to be stroked, we can work around 
> this.

Sure, by limiting the feature set dramatically we can avoid the cases 
where you have to do the hard work, but we also lose a bunch of features.


> > I don't think the arguments for removing these are compelling. The 
> > problems with the APIs have been addressed (e.g. there's no ambiguity 
> > in the case of overlapping text), the use cases are clear (e.g. 
> > drawing text around an arc or drawing a label along a line graph's 
> > line), and the API now supports the constructs to do unions of fill 
> > regions.
> 
> Where is the union of fill regions specified? All I see is segments 
> aggregation.

One of the Path constructors takes an array of paths and a fill rule.


> > > No one has implemented them and they are confusing the browser 
> > > vendors.
> >
> > I don't think they're confusing anyone.
> 
> The Blink people were looking at adding this until they thought it 
> through and realized that it wouldn't work.

Realised what wouldn't work? As far as I'm aware, there's nothing that 
wouldn't work.


> > > Until we have support for shapes, the output of these methods won't 
> > > be stable.
> >
> > These methods have been very stable. They have barely changed since 
> > they were added, except for some minor tweaks to fix bugs.
> 
> How can you make that statement? No one has implemented them yet.

What do you mean by "stable"?

I assumed you meant "hasn't been changing a lot". The spec hasn't been 
changing a lot, so it seems pretty stable.


On Fri, 14 Mar 2014, Justin Novosad wrote:
> On Fri, Mar 14, 2014 at 2:29 PM, Ian Hickson <ian@hixie.ch> wrote:
> >
> > If the bug is that Chrome resamples the image in an ugly way, then 
> > that's a bug with Chrome. As the bug says, browsers are allowed to 
> > pick whatever algorithm they want -- it's a quality-of-implementation 
> > issue. But if the result is ugly, that's a low quality implementation.
> 
> Yes, and if we fixed it to make it prettier, people would complain about 
> a performance regression. It is impossible to make everyone happy right 
> now. Would be nice to have some kind of speed versus quality hint.

The problem with a hint is that it will be set incorrectly, and so instead 
of having something that's mostly pretty and mostly fast for everyone, 
you'd end up with something that's slow on sites that need things to be 
fast, and ugly on sites that need things to be pretty.

In general I think it is very unwise for us to design APIs with hints that 
have subtle effects on developer machines but that can cripple performance 
on low-end devices.

Instead, we should use adaptive algorithms, for example always using the 
prettiest algorithms unless we find that frame rate is suffering, and then 
stepping down to faster algorithms.
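The adaptive approach can be sketched in a few lines: start at the prettiest filter and step down a level whenever the measured frame time blows the budget, recovering when frames become cheap again. The quality names, the 16 ms budget, and the recovery threshold are all illustrative assumptions, not anything from the spec.

```javascript
// Hypothetical adaptive quality controller: the renderer reports each
// frame's duration, and reads back which filtering quality to use next.
const LEVELS = ['high', 'medium', 'low'];

function makeQualityController(budgetMs = 16) {
  let level = 0;                           // start with the prettiest filter
  return {
    get quality() { return LEVELS[level]; },
    frameDone(frameMs) {
      if (frameMs > budgetMs && level < LEVELS.length - 1) {
        level++;                           // too slow: step down in quality
      } else if (frameMs < budgetMs / 2 && level > 0) {
        level--;                           // plenty of headroom: step back up
      }
    },
  };
}
```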


On Wed, 26 Mar 2014, K. Gadd wrote:
>
> As I mentioned to Ryosuke off-list, I think the 
> interpolateEndpointsCleanly attribute is a (relatively) simple solution 
> to the problem I have with the current spec, and it doesn't 
> overcomplicate things or make it hard to improve filtering in the 
> future. It's also trivial to feature-detect, which means I can use it 
> when available and fallback to a temporary canvas otherwise. I think 
> providing this option would also make it easier to solve situations 
> where applications rely on the getImageData output after rendering a 
> scaled bitmap.
> 
> I'd probably call it something (to me) clearer about semantics, though, 
> like 'sampleInsideRectangle'

Here you are suggesting a feature that would override the requirement in 
the spec that reads "When the filtering algorithm requires a pixel value 
from outside the source rectangle but inside the original image data, then 
the value from the original image data must be used", right? What would 
you replace it with, exactly? Transparent black? The value from the 
nearest edge pixel inside the rectangle?

Can you elaborate on the use case?
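The three candidate behaviours can be contrasted on a one-dimensional row of pixels (a sketch, not the spec's filtering algorithm; `sample` and the policy names are hypothetical):

```javascript
// Sample pixel i of `row` relative to a source rect [rect.start, rect.end):
//   'original'    - current spec rule: read the original image data
//   'transparent' - treat everything outside the rect as transparent black
//   'clamp'       - proposed sample-inside-rectangle: nearest edge pixel
function sample(row, rect, i, policy) {
  if (i >= rect.start && i < rect.end) return row[i];      // inside the rect
  if (policy === 'original')
    return row[Math.max(0, Math.min(row.length - 1, i))];
  if (policy === 'transparent') return 0;
  return row[Math.max(rect.start, Math.min(rect.end - 1, i))];
}

const row = [9, 1, 2, 3, 9];   // the 9s lie outside the source rect [1, 4)
sample(row, { start: 1, end: 4 }, 0, 'original');   // 9: neighbour bleeds in
sample(row, { start: 1, end: 4 }, 0, 'clamp');      // 1: the rect's edge pixel
```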


On Fri, 14 Mar 2014, Rik Cabanier wrote:
> On Fri, Mar 14, 2014 at 11:09 AM, Ian Hickson <ian@hixie.ch> wrote:
> > On Wed, 4 Dec 2013, Jürg Lehni wrote:
> > >
> > > Implementing [layering/grouping] would help us greatly to optimize 
> > > aspects of Paper.js, as double buffering into separate canvases is 
> > > very slow and costly.
> >
> > Can you elaborate on what precisely the performance bottleneck is? I 
> > was looking through this thread but I can't find a description of the 
> > use cases it addresses, so it's hard to evaluate the proposals.
> 
> Let's say you're drawing a scene and there is a bunch of artwork that 
> you want to apply a multiply effect or opacity to. With today's code, it 
> would look something like this:
> 
> var bigcanvas = document.getElementById("c");
> var ctx = bigcanvas.getContext("2d");
> ctx.moveTo(...); ... // drawing underlying scene
> 
> ctx.globalAlpha = 0.5;
> var c = document.createElement("canvas");
> var ctx2 = c.getContext("2d");
> ctx2.moveTo(...); ... // drawing scene that needs the effect
> ctx.drawImage(c, 0, 0);
> 
> With layers, it would become:
> 
> var bigcanvas = document.getElementById("c");
> var ctx = bigcanvas.getContext("2d");
> ctx.moveTo(...); ... // drawing underlying scene
> 
> ctx.globalAlpha = 0.5;
> ctx.beginLayer();
> ctx.moveTo(...); ... // drawing scene that needs the effect
> ctx.endLayer();
> 
> So, with layers you
> - avoid creating (expensive) DOM elements

Not really though, right? I mean, the user agent still has to create the 
intermediate bitmap to apply the effects to.


> - simplify the drawing (especially when there's a transformation)

Why would it be simpler?

There's a bug tracking this feature currently:

   https://www.w3.org/Bugs/Public/show_bug.cgi?id=22704


On Fri, 14 Mar 2014, Rik Cabanier wrote:
> 
> Path2D has now landed in Blink [1]. Blink also implemented the 'addPath'
> method.
> WebKit just landed a patch to rename Path to Path2D, remove currentPath and
> add fill/stroke/clip with a path [2].
> A patch is under review for Firefox to add Path2D.
> 
> Given this, can we change the spec to reflect the new name?
> 
> 1: https://codereview.chromium.org/178673002/
> 2: https://webkit.org/b/130236
> 3: https://bugzilla.mozilla.org/show_bug.cgi?id=830734

Done.


On Tue, 18 Mar 2014, Jürg Lehni wrote:
>
> So is currentPath going away then for sure? Will there still be a way 
> to retrieve a Path2D representation of the path being drawn by the long 
> existing drawing commands on the context?
> 
> I quite liked how I could use it for caching, in case the browser 
> supported that feature, and check whether I have a cached path the next 
> time I need to draw it, falling back on redrawing it using the same 
> drawing commands. Doing the same by feature-detecting the Path(2D) 
> constructor and building separate drawing approaches based on its 
> existence results in much more complicated code.

Why is it so complicated? Here's an example of how you could do it, 
assuming you wanted to cache certain paths:

   function myCircle(p) {
     p.moveTo(0,0);
     // etc...
   }

   function fillPath(c, callback) {
     if (window.Path2D) {
       var p;
       if (pathIsAlreadyCached(callback.name)) {
         p = getCachedPath(callback.name);
       } else {
         p = new Path2D();
         callback(p);
         saveCachedPath(p, callback.name);
       }
       c.fill(p);
     } else {
       c.beginPath();
       callback(c);
       c.fill();
     }
   }

   fillPath(c, myCircle);


On Tue, 18 Mar 2014, Dirk Schulze wrote:
> 
> I am still in favor for a setter and getter as well. It had the benefit 
> that you were able to transform the context and it affected drawing 
> commands as well. It is more complicated to create a second Path2D 
> object and add it to a previous path with a transform.

I'm not sure I understand what you mean. The interactions of transforms 
and path building on the 2D context was one of the biggest problems that 
the Path objects are intended to side-step.


On Fri, 14 Mar 2014, Rik Cabanier wrote:
> >
> > Event retargetting now explicitly applies to "the control represented 
> > by the region", which is always null if the given control is now a 
> > text field.
> 
> Does this change the eventTarget attribute on the event object [1]?

It changes the "target" attribute, if that's what you mean. (See step 3 of 
the dispatch algorithm in DOM.)


> > > More generally, is this event rerouting supposed to be able to 
> > > trigger browser default event handling behavior, or only DOM event 
> > > dispatch?
> >
> > As it was specified, I don't see how it could trigger default actions 
> > of anything other than the canvas and its ancestors. The canvas hook 
> > ran in the middle of the "When a pointing device is clicked, the user 
> > agent must run these steps" algorithm, which refers to the origin 
> > target, not the rerouted target.
> >
> > I've now changed this so that it does in fact trigger the default 
> > action if applicable.
> 
> This will still just reroute events, right?

Not sure what you mean by "just".


> For instance, if the fallback element is a <a href="...">, will clicking 
> on the region cause the browser to follow the hyperlink?

Yes.


> > On Wed, 5 Mar 2014, Robert O'Callahan wrote:
> > >
> > > The problem is that if these retargeted events can trigger default 
> > > browser behavior, the browser has to be able to compute the position 
> > > of the event relative to the new target DOM node, and it's not clear 
> > > how to do that.
> >
> > I've made it explicit that the elements that can get clicks targetted 
> > to them only include elements that don't have subregions. In 
> > particular, image maps and image buttons are excluded.
> 
> Thanks for updating the spec. It's getting quite complex though :-( 
> Maybe it's simpler to just add the id to the event and leave the canvas 
> element as the target? Since this is not a major feature, the complexity 
> might stop implementors.

I don't understand what part you think is complicated here. Can you 
elaborate?


> > > Currently, the specification states that if you create a region and 
> > > then create another region that completely covers region, the first 
> > > region is removed from the hit region list [1]
> > >
> > > This is a complex operation that involves either drawing the regions 
> > > to a bitmaps and counting pixels, or path intersection.
> >
> > There's two trivial ways to implement this, depending on whether the 
> > hit regions are backed by a bitmap (the simplest and fastest solution 
> > but uses a lot of memory) or a region list (slower, but much more 
> > memory efficient). In the case of a bitmap, you just draw on the new 
> > region, and the old region is no longer in the bitmap, so it's 
> > trivially gone.
> > 
> > In the case of a list, you put the new region ahead of the old region 
> > so that you never actually get around to checking the old region.
> 
> The following step still needs to run though: 
> http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#garbage-collect-the-regions
> 
> Let victim be the first hit region in list to have an empty set of 
> pixels and a zero child count, if any.
>
> If this was implemented with a bitmap, the only way to figure this out 
> is to walk the individual pixels (= expensive).

This is garbage collection, it doesn't have to run often. When it _is_ 
run, it's actually pretty fast -- in the bitmap case, for example, you 
just need to walk the bitmap and for every pixel with a defined region 
mark the region as non-empty. This is O(N) with the size of the bitmap, 
but with a _very_ low constant factor (lower than recolourising a bitmap, 
say, which is something we expect authors to do in JS).
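That garbage-collection pass amounts to one linear walk over the region bitmap: mark every region that still owns at least one pixel; anything unmarked is empty and collectable. A sketch, with the bitmap reduced to a flat array of region ids and `emptyRegions` a hypothetical helper:

```javascript
// bitmap: array of region ids per pixel, 0 meaning "no region".
// regionIds: the ids currently in the hit region list.
// Returns the ids that no longer own any pixel (GC candidates).
function emptyRegions(bitmap, regionIds) {
  const live = new Set();
  for (const id of bitmap) {
    if (id !== 0) live.add(id);     // O(N) walk, very low constant factor
  }
  return regionIds.filter(id => !live.has(id));
}
```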


> We should also not forget that a11y needs the bounding box of the hit 
> region, which also means constantly walking the pixels.

Not "constantly". This kind of thing is trivially cacheable. It's also 
relatively simple to just check the pixels you're about to overwrite and 
note those regions as needing updating; when you _do_ update them, they 
can only have gotten smaller so you can just walk the edge of the bounding 
rectangle until you hit a pixel on each side. Plus, the bounding box 
doesn't have to be updated often -- the user isn't going to be jumping to 
the area every ten milliseconds or anything, and even if the user did, 
using a slightly out-of-date bounding box is fine (it'll just be bigger 
than strictly necessary).
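The cheap update works because a region's bounding box can only shrink: walk each edge of the stale box inward until a row or column that still contains the region is found. A sketch over a 2-D grid of region ids; `shrinkBBox` is a hypothetical helper, not anything from the spec.

```javascript
// grid: 2-D array of region ids; id: the region being updated;
// box: stale bounding box {x0, y0, x1, y1}, inclusive coordinates.
function shrinkBBox(grid, id, box) {
  const rowHas = y => {
    for (let x = box.x0; x <= box.x1; x++) if (grid[y][x] === id) return true;
    return false;
  };
  const colHas = x => {
    for (let y = box.y0; y <= box.y1; y++) if (grid[y][x] === id) return true;
    return false;
  };
  // Walk each edge inward until it touches the region.
  while (box.y0 < box.y1 && !rowHas(box.y0)) box.y0++;
  while (box.y1 > box.y0 && !rowHas(box.y1)) box.y1--;
  while (box.x0 < box.x1 && !colHas(box.x0)) box.x0++;
  while (box.x1 > box.x0 && !colHas(box.x1)) box.x1--;
  return box;
}
```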


> > > It is also unintuitive because an author might expect that he could 
> > > remove the second region and still have the first region active.
> >
> > That would be inconsistent with how canvas works. Canvas is an 
> > immediate-mode API. If you draw on the canvas, and then clear it, you 
> > don't get back what was there before.
> 
> What if an author doesn't clear it but just calls fillRect or is smart 
> and just invalidates/redraws portions of the canvas? Hit regions should 
> keep state, regardless of the canvas pixels.

Those would just work, as far as I can tell. Do you have a concrete 
example?


> > > > http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2012-July/thread.html#36556
> > >
> > > It looks like that thread never came to a conclusion.
> >
> > Is there anything specifically that was raised in that thread that you 
> > think hasn't been responded to?
> 
> Well, you had the last response but I don't think it came to a 
> conclusion :-)

If people don't respond to my requests for clarification or if they don't 
disagree with the last thing I say, then that's a conclusion...


> At the time, I was under the impression that we could mimic it with 
> paths but when I read the spec closer, the step that removes the region 
> pixels is too complex to implement and unintuitive for authors [2]

As discussed earlier in this thread, it's easy to implement this using a 
list of paths, you just draw the cleared pixels as a new region on the top 
of your list.

I don't see what's unintuitive here.


> > > The arguments against using a bitmap presentation still stand: - it 
> > > will be too expensive to keep an actual copy of the canvas pixels in 
> > > memory to do hit testing
> >
> > It's actually pretty common to do exactly this. Note that you don't 
> > necessarily need a bitmap that has the same bit depth or pixel density 
> > as the visible bitmap.
> 
> Where else does this happen?

A Google search for "hit test bitmap" shows some examples, but 
unfortunately it's a hard topic to search for -- I kept running into 
examples of people trying to do collision detection with bitmaps instead!


> Creating a canvas bitmap for constant reading will also be extremely 
> costly since so many implementations run canvas operations on the GPU. 
> I'm unsure if anyone supports an 8-bit backbuffer, so at best the hit 
> region bitmap is half the size. This is too expensive.

Don't forget that this doesn't have to be a very high fidelity bitmap. It 
gets no anti-aliasing, has no alpha transparency, involves no bitmap 
drawing, and doesn't necessarily have to be full-density, either (it's 
unlikely that authors are going to have hit regions that are half a CSS 
pixel high, for instance). It's quite plausible to do all the work for 
this bitmap in software on the CPU, as far as I can tell.

You can also use a hybrid approach: keep the most recent regions in a 
list, and regularly compress the list down to a bitmap when the CPU load 
is low, or on a separate thread, so that the bitmap generation cost is a 
non-issue. You still avoid the cost of keeping all the regions in a long 
list, and the GC computations described earlier remain reasonably 
straight-forward to compute (modulo inter-thread communication, which is 
always fun).
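The flatten-to-bitmap half of that hybrid can be sketched like so (a 
coarse grid stands in for the real low-density bitmap; `flattenToGrid`, 
the entry shape, and the cell size are all made-up for illustration):

```javascript
// Flatten a list of rectangular hit-region entries (newest first,
// id === null meaning "cleared") into a coarse grid, e.g. one cell
// per 4 CSS pixels. After flattening, a hit test is one array read.
function flattenToGrid(entries, width, height, cell) {
  const cols = Math.ceil(width / cell), rows = Math.ceil(height / cell);
  const grid = new Array(cols * rows).fill(null);
  // Paint oldest-to-newest so newer entries overwrite older ones.
  for (let i = entries.length - 1; i >= 0; i--) {
    const e = entries[i];
    const x0 = Math.max(0, Math.floor(e.x / cell));
    const y0 = Math.max(0, Math.floor(e.y / cell));
    const x1 = Math.min(cols, Math.ceil((e.x + e.w) / cell));
    const y1 = Math.min(rows, Math.ceil((e.y + e.h) / cell));
    for (let y = y0; y < y1; y++)
      for (let x = x0; x < x1; x++)
        grid[y * cols + x] = e.id;
  }
  return {
    grid, cols, cell,
    hitTest(px, py) {
      return this.grid[Math.floor(py / this.cell) * this.cols +
                       Math.floor(px / this.cell)] ?? null;
    },
  };
}
```

After each flatten the live list can be emptied, so its length stays 
bounded regardless of how many regions or clearRect calls have happened.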


> > > - if you have to mimic the behavior with paths, you need access to 
> > > expensive/difficult path manipulation algorithms
> >
> > The maths for determining if a region is contained in another region 
> > is pretty well understood at this point, as far as I can tell.
> 
> It's still a hard problem. Looking at Firefox' and Apple's 
> implementation, I don't know how they could determine if a path is 
> contained within another path. Google has a library in development. We 
> looked at it 6 months ago and it had many issues.

Our job is to implement these hard problems so that authors don't have to.


> > > Hit regions should be redesigned so they work on the path geometry 
> > > as opposed to pixels. We already have the necessary code to do hit 
> > > testing on paths (see isPointInPath)
> >
> > isPointInPath() works on pixels just like hit regions work on pixels.
> 
> No, this is not how it's implemented.
> WebKit, Blink and Firefox use the geometry of the path. They don't use
> pixels.

That's an implementation detail. My point is that to an author, it's all 
pixels. The API takes in two coordinates, and gives you a boolean result.


> > On Wed, 12 Mar 2014, Dirk Schulze wrote:
> > >
> > > In SVG we tried to avoid having hit testing based on pixel values 
> > > obviously for performance reasons.
> >
> > SVG is a retained-mode API, so naturally it has a different model.
> 
> I don't see why. A browser draws the SVG DOM to a screen bitmap and then 
> does hit testing with fine paths. Canvas could do the exact same thing.

If Canvas and SVG do "the exact same thing", then we should drop one.

The whole point of <canvas> is to be an immediate-mode API that 
complements SVG.


> People think of this API as populating a hit region OM. Why not go this 
> route?

What people?


> > > clearRect is currently defined as a subtraction mechanism for hit 
> > > regions [1].
> > >
> > > It's unlikely that a UA will implement hit regions using pixels so 
> > > this would have to be done using path subtraction which is 
> > > expensive.
> >
> > I'm not sure why you think it's expensive. It's trivial to just add a 
> > rectangle to the front of the list of regions.
> 
> That is true. This does mean that if there are a lot of small clearRect 
> calls, the list of regions could become very long.

Sure. Same as if there are a lot of rectangular hit regions added.

The model above, where you start off with a list, but regularly flatten it 
to a bitmap, would lower the cost of maintaining such a list.


> > > Why was this special behavior added to clearRect?
> >
> > Because it's the most obvious mechanism for authors. You clear a part 
> > of the canvas, naturally that part of the canvas no longer has 
> > regions.
> 
> Why is that naturally?

Because nothing is rendered there any more.


> So, if an author clears an area there are no more regions in it, but if 
> he draws over it, they are still there?

Right. Same as with regular drawing. If you clear it, it's gone. If you 
draw on top of it, it contributes (e.g. in the colour of anti-aliased 
lines, showing through where the content on top is transparent, etc).


> Clipping also doesn't affect regions.

Hm, good point. Fixed.


> > On Tue, 4 Mar 2014, Rik Cabanier wrote:
> > >
> > > The spec implies--
> >
> > The spec doesn't imply anything. It either requires something, or 
> > doesn't. If you ever find yourself reading between the lines, then 
> > there is either a spec bug, or you are reading something that the spec 
> > doesn't require.
> 
> I know that. So, if I write "the spec implies", you can assume that I 
> believe that the spec is incomplete.

I would rather you just said "the spec doesn't say whether..." rather than 
"the spec implies", since the latter has a very different meaning.


On Sat, 15 Mar 2014, Dirk Schulze wrote:
> 
> I would suggest a filter attribute that takes a list of filter 
> operations similar to the CSS Image filter function. Similar to 
> shadows[2], each drawing operation would be filtered. The API looks like 
> this:
> 
> partial interface CanvasRenderingContext2D {
>     attribute DOMString filter;
> }
> 
> A filter DOMString could look like: “contrast(50%) blur(3px)”
> 
> With the combination of grouping in canvas it would be possible to 
> group drawing operations and filter them together.
> 
> Filter functions include a reference to a <filter> element and a 
> specification of SVG filters. I am unsure if a reference to an element 
> within a document can cause problems. If it does, we would just not 
> support SVG filter references.

I've filed a bug to track this:

   https://www.w3.org/Bugs/Public/show_bug.cgi?id=25243


On Mon, 17 Mar 2014, Justin Novosad wrote:
> 
> Hmmm, I gave this a bit more thought...  To apply the construction 
> algorithm in transformed space, the ellipse parameters (radiusX, 
> radiusY, rotation) would have to be transformed. Transforming the 
> parameters would be intractable under a projective transform (e.g. 
> perspective), but since we are limited to affine transforms, it can be 
> done.  Now, in the case of a non-invertible CTM, we would end up with 
> radiusX or radiusY or both equal to zero.  And what happens when you 
> have that?  Your arcTo just turned into lineTo(x1, y1). Tada!
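That degenerate case can be sketched as a small guard in the 
path-building code. This is a deliberate simplification that only 
handles a scale-only CTM; `arcToOp`, the matrix shape, and the returned 
operation descriptors are all made-up names for illustration:

```javascript
// Sketch: scale arcTo's radius by the CTM's scale components and
// fall back to a plain lineTo when the ellipse collapses to zero.
// Assumes a scale-only CTM { a, d } (i.e. [a, 0, 0, d, 0, 0]).
function arcToOp(x1, y1, x2, y2, radius, ctm) {
  const rx = Math.abs(ctm.a) * radius;
  const ry = Math.abs(ctm.d) * radius;
  if (rx === 0 || ry === 0) {
    // Degenerate ellipse: the arc collapses to a straight segment.
    return { op: "lineTo", x: x1, y: y1 };
  }
  return { op: "arcTo", x1, y1, x2, y2, rx, ry };
}
```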

On Mon, 17 Mar 2014, Dirk Schulze wrote:
> 
> Why does radiusX or radiusY need to be zero? Because you define it that 
> way for a non-invertible matrix? That makes sense for scale(0,0). What 
> about infinity or NaN? If Ian didn’t update the spec then this is still 
> undefined and therefore up to the UA to decide.

How can it be infinity or NaN? (Recall that except where otherwise 
specified, for the 2D context interface, any method call with a numeric 
argument whose value is infinite or a NaN value must be ignored.)


On Mon, 17 Mar 2014, Rik Cabanier wrote:
> 
> I'm unsure if anyone has shipped that part of the spec. There's 
> certainly no interop...

I am loathe to keep changing this kind of thing. We settled this part of 
the spec years ago. Let's not go back now. If we keep changing things like 
this, people will (rightly) complain that they can't trust the spec.


> So, what we could say is:
> - when drawing paths, ignore all calls if the matrix is non-invertible
> (WebKit and Blink do this)
> - when filling/stroking/clipping, ignore all calls if the matrix is
> non-invertible (Firefox, WebKit and Blink do this)

As far as I can tell, this is unnecessary.


On Mon, 17 Mar 2014, Justin Novosad wrote:
> 
> Yes, but there is still an issue that causes problems in Blink/WebKit:
> because the canvas rendering context stores its path in local
> (untransformed) space, whenever the CTM changes, the path needs to be
> transformed to follow the new local space.  This transform requires the CTM
> to be invertible. So now webkit and blink have a bug that causes all
> previously recorded parts of the current path to be discarded when the CTM
> becomes non-invertible (even if it is only temporarily non-invertible, even
> if the current path is not even touched while the matrix is
> non-invertible). I have a fix in flight that fixes that problem in Blink by
> storing the current path in transformed coordinates instead. I've had the
> fix on the back burner pending the outcome of this thread.

Indeed. It's possible to pick implementation strategies that just can't be 
compliant; we shouldn't change the spec every time any implementor happens 
to make that kind of mistake, IMHO.

(Of course the better long-term solution here is the Path objects, which 
are transform-agnostic during building.)


Just to be clear, we should support this because otherwise the results are 
just wrong. For example, here some browsers currently show a straight line 
in the default state, and this causes the animation to look ugly in the 
transition from the first frame to the second frame (hover over the yellow 
to begin the transition):

   http://junkyard.damowmow.com/538

Contrast this to the equivalent code with the transforms explicitly 
multiplied into the coordinates:

   http://junkyard.damowmow.com/539

I don't see why we would want these to be different. From the author's 
perspective, they're identical.


On Thu, 20 Mar 2014, Rik Cabanier wrote:
> 
> It would be great if we could get clarification on this.

I'm not sure what needs clarifying. The spec seems clear here.


> Firefox and IE are conformant per the spec when it comes to drawing paths
> but not fill/stroke/clip.

Can you elaborate on how Firefox doesn't match the spec for stroke and 
clip?

For fill, it does indeed seem to ignore the rule in the spec that says 
that solid colour fills are unaffected by the current transformation. It 
has a similar impact on renderings as the examples above:

   http://junkyard.damowmow.com/540


> Supporting this small edge case comes at a large cost in Firefox and 
> likely also IE.

Can you elaborate on this cost?


> Many APIs in canvas are running into this issue which results in lack of 
> interoperability.

As far as I can tell, the spec is unambiguous. Certainly it does appear 
that browsers haven't yet converged on what the spec says, but that isn't 
unusual; it takes time for browsers to converge, especially for edge 
cases like this where there's not much pressure (since authors tend to 
just work around the bugs).


On Wed, 19 Mar 2014, Dirk Schulze wrote:
> 
> I just looked at the definition of Path.addPath[1]:
> 
>    void addPath(Path path, SVGMatrix? transformation);
> 
> SVGMatrix is nullable but can not be omitted all together. Why isn’t it 
> optional as well? I think it should be optional [...]

That seems reasonable. Done.


On Wed, 19 Mar 2014, Rik Cabanier wrote:
>
> [context . currentTransform]
>
> As currently specified, this must return a live SVGMatrix object, 
> meaning that as you change the CTM on the 2d context, your reference to 
> the SVGMatrix should change as well.
> 
> It's unlikely that you actually want this...

Why?

See Chris' original proposal here:

   http://lists.w3.org/Archives/Public/public-whatwg-archive/2012Mar/0269.html

I would be reluctant to change this to a different design without their 
input.


On Fri, 21 Mar 2014, Joe Gregorio wrote:
> On Wed, Mar 19, 2014 at 4:46 PM, Dirk Schulze <dschulze@adobe.com> wrote:
> >
> > I just looked at the definition of Path.addPath[1]:
> >
> >     void addPath(Path path, SVGMatrix? transformation);
> >
> > SVGMatrix is nullable but can not be omitted all together. Why isn’t it
> > optional as well? I think it should be optional, especially because
> > creating an SVGMatrix at the moment means writing:
> >
> >     var matrix = document.createElementNS('http://www.w3.org/2000/svg
> > ','svg').createSVGMatrix();
> 
> Agreed, that's painful, +1 for making it optional.

Just so we're clear, even when it wasn't optional, you didn't have to do 
any of that. You can just pass null.

(It's still not optional for some of the other methods where it's in the 
middle of the arguments and making it optional doesn't make much sense.)


On Sat, 22 Mar 2014, Dirk Schulze wrote:
> 
> Does some one think it would be necessary to make SVGMatrix nullable 
> (optional SVGMatrix?)? I think it would be superfluous.

It's needed for consistency with the other methods.


On Thu, 20 Mar 2014, Rik Cabanier wrote:
>
> addPath is currently defined on the Path2D object. [1]
> Is there a reason why it's not defined on CanvasPathMethods instead? 
> That way this method is available on the 2d context so you can append a 
> path to the current graphics state.

What's the use case?


On Thu, 20 Mar 2014, Dirk Schulze wrote:
>
> I am supportive for this idea! I agree that this would solve one of the 
> reasons why I came up with currentPath for WebKit in the first place.

Can you elaborate on the reason for this?


On Thu, 20 Mar 2014, Justin Novosad wrote:
>
> This would apply the CTM to the incoming path, correct?  I am a little 
> bit concerned that this API could end up being used in ways that would 
> cancel some of the performance benefits (internal caching opportunities) 
> of Path2D objects.

Right, that's why it's not currently on CanvasPathMethods. The idea is to 
make a clean break from the world where the transforms affect the building 
of the path.


On Thu, 20 Mar 2014, Dirk Schulze wrote:
> 
> Where is the difference to fill(Path2D), stroke(Path2D) and 
> clip(Path2D)? The path will always need to be transformed to the CTM. 
> Graphic libraries usually do this already for you. The addPath() 
> proposal is not different to that.

The difference is that there, you only have one path with one transform, 
not different parts of the path built with different transforms.


On Thu, 20 Mar 2014, Justin Novosad wrote:
> 
> The recently added currentTransform attribute on 
> CanvasRenderingContext2D gives shared access to the rendering context's 
> transform. By "shared", I mean:
> 
> a) this code modifies the CTM:
> var matrix = context.currentTransform;
> matrix.a = 2;
> 
> b) In this code, the second line modifies matrix:
> var matrix = context.currentTransform;
> context.scale(2, 2);
> 
> This behavior is probably not what most developers would expect.

It's the behaviour that was requested by the pdf.js developers. :-)
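The "live" behaviour being described can be emulated in plain JavaScript 
with accessor properties (a sketch only: `makeLiveMatrix` is a made-up 
name, and `ctx` here is a stand-in object, not a real rendering context):

```javascript
// Sketch of a "live" matrix object: reads always reflect the
// context's current transform, and writes go straight back to it.
function makeLiveMatrix(ctx) {
  const live = {};
  for (const k of ["a", "b", "c", "d", "e", "f"]) {
    Object.defineProperty(live, k, {
      get() { return ctx.transform[k]; },
      set(v) { ctx.transform[k] = v; },
    });
  }
  return live;
}
```

With this shape, setting `matrix.a = 2` changes the CTM (case a), and 
transforming the context changes `matrix` (case b), matching the quoted 
examples.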


On Thu, 20 Mar 2014, Simon Sarris wrote:
> 
> FF (at least Aurora/Nightlies) has for some time had mozCurrentTransform 
> (and mozCurrentTransformInverse), which return an Array (so not 
> spec-compliant, since spec wants SVGMatrix). It is not shared, so it 
> does not do what your a) and b) examples do.
> 
> I agree that changing it to a getter method would be better, it would be 
> more intuitive and clear for developers.

On Mon, 24 Mar 2014, Hwang, Dongseong wrote:
>
> Looking over this thread, we reached a consensus not to expose the 
> currentTransform attribute.

Consensus is not how we decide things in the WHATWG. It's based on the 
strength of arguments.

So far, it seems the arguments either way are about equal. On the one hand 
you have a developer asking for the current API. On the other hand you 
have implementors saying that the current API is bad for developers.

What we need to make more progress, I think, is more concrete arguments, 
for example sample code that uses both APIs so we can see how authors 
would experience the two APIs in the real world.


On Mon, 24 Mar 2014, Simon Sarris wrote:
> 
> I think using "Current" in the naming convention is silly. The transform 
> is just as much part of the state as lineWidth/etc, but nobody would propose 
> naming lineWidth something like currentLineWidth! There's no way to get 
> a *non-current* transformation matrix (or lineWidth), so I think the 
> distinction is unnecessary.
> 
> CTM only seems like a good idea if we're worried that the name is too 
> long, but since "Current" is redundant/extraneous, I don't think an 
> initialism is worth the added layer of confusion.

These are solid arguments if we agree that we should change the spec.


On Sun, 23 Mar 2014, Dirk Schulze wrote:
> 
> 1) I noticed that createImageData() is explicit that it represent a 
> transparent black rectangle. The constructor for ImageData is not that 
> explicit.

Fixed.


> 2) The last step of the 2nd constructor that takes an Uint8ClampedArray 
> says: " • Return a new ImageData object whose width is sw, whose height 
> is height, and whose data is source.”
> 
> Is data a reference to the original source or a copy of source? For the 
> former, there might be two ImageData objects referencing the same 
> ByteArray. How would that be useful?

The steps say that the data is the Uint8ClampedArray "source". I've added 
a note making the fact that it's not a copy more explicit.
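The aliasing this allows can be demonstrated with a bare 
Uint8ClampedArray (using plain objects in place of real ImageData 
instances, since the point is only the shared backing store):

```javascript
// Two ImageData-like wrappers sharing one backing store: writing
// through one is visible through the other, because the constructor
// keeps a reference to the source array rather than copying it.
const source = new Uint8ClampedArray(4); // one RGBA pixel
const imageA = { width: 1, height: 1, data: source };
const imageB = { width: 1, height: 1, data: source };
imageA.data[0] = 255; // set the red channel through A
// imageB.data[0] now reads 255 as well.
```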


On Mon, 24 Mar 2014, Jürg Lehni wrote:
>
> Non-scaling Strokes in Canvas.
>
> The request has come up multiple times on the paper.js mailing list [1], 
> and we will emulate this in JavaScript.
> 
> But since this will involve baking the CTM into the path to be drawn, 
> and setting the CTM to the identity matrix, I was wondering if this is 
> something worth supporting natively for the obvious reason of improved 
> performance?

You can do this with the new Path2D API, right?


On Tue, 25 Mar 2014, Dirk Schulze wrote:
> 
> [...] currentTransform [...]
> what should be returned for a CTM that is singular (not invertible)?

Even when the transform is not invertible, its precise value is still 
well-defined, no?


> In WebKit we do not track all transformations of the CTM that caused a 
> singular matrix or are following a transformation that would have caused 
> a singular matrix.
> 
> Example:
> 
> ctx.scale(0,0);
> ctx.translate(10,10);

The matrix should be 0,0,0,0,0,0 here.

It starts as

   1 0 0
   0 1 0
   0 0 1

Then you apply a scale transform to 0,0:

   1 0 0   0 0 0   0 0 0
   0 1 0 x 0 0 0 = 0 0 0
   0 0 1   0 0 1   0 0 1

Then you post-multiply that by the translation by 10,10:

   1 0 0   0 0 0   1 0 10   0 0 0
   0 1 0 x 0 0 0 x 0 1 10 = 0 0 0
   0 0 1   0 0 1   0 0  1   0 0 1

...and that's the matrix you should return.
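The working above can be checked with a few lines of matrix arithmetic 
(a minimal affine-multiply helper written for this post, not any real 
API):

```javascript
// Multiply two 2D affine matrices given as [a, b, c, d, e, f],
// i.e. the 3x3 matrix  [a c e; b d f; 0 0 1].
function mul(m, n) {
  return [
    m[0] * n[0] + m[2] * n[1],        // a
    m[1] * n[0] + m[3] * n[1],        // b
    m[0] * n[2] + m[2] * n[3],        // c
    m[1] * n[2] + m[3] * n[3],        // d
    m[0] * n[4] + m[2] * n[5] + m[4], // e
    m[1] * n[4] + m[3] * n[5] + m[5], // f
  ];
}

let ctm = [1, 0, 0, 1, 0, 0];         // identity
ctm = mul(ctm, [0, 0, 0, 0, 0, 0]);   // scale(0, 0)
ctm = mul(ctm, [1, 0, 0, 1, 10, 10]); // translate(10, 10)
// ctm is now all zeros: the translation is swallowed by the zero scale.
```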
  

> In WebKit we would not apply the transformation scale(0,0) and mark the 
> CTM as not-invertible instead.

I do not believe this is an implementation strategy that can lead to a 
conforming implementation.


Note that this is the same as what you would get if you did:

   var a = new SVGMatrix(1,0,0,1,0,0);
           // assuming this interface gets a constructor, anyway
   a = a.scale(0,0);
   a = a.translate(10,10);

...so this is not unique to the canvas API.


On Tue, 25 Mar 2014, Justin Novosad wrote:
> 
> I prepared a code change to that effect, but then there was talk of 
> changing the spec to skip path primitives when the CTM is not 
> invertible, which I think is a good idea. It would avoid a lot of 
> needless hoop jumping on the implementation side for supporting weird 
> edge cases that have little practical usefulness.

I'm not sure I agree that they have little practical usefulness. Zeros 
often occur at the edges of transitions, and if we changed the spec then 
these transitions would require all the special-case code to go in author 
code instead of implementor code.


On Sun, 30 Mar 2014, Dirk Schulze wrote:
> 
> Canvas let you set alignment baselines with the textBaseline attribute 
> [1].
> 
> One of the baseline values is ‘middle’. The description of the ‘middle’ 
> baseline seems to be in conflict with the definition for the 
> alignment-baseline property in CSS[2].
> 
> Canvas: The middle of the em square
> CSS: [..] it may be computed using 1/2 the "x-height”
> 
> What Canvas uses as middle is described as ‘center’ in CSS. Is there a 
> way that we can change the naming and/or definition of ‘middle’ in 
> Canvas?

This seems like something that's too late to change. (I think using 
"middle" for what is between "top" and "bottom" makes eminent sense, 
though, so it doesn't seem like that big a problem.)


On Mon, 31 Mar 2014, Justin Novosad wrote:
>
> Wow, that is confusing. How can this be fixed without breaking existing 
> web content? Are browsers currently compliant with the canvas spec, or 
> do they implement the CSS definition of middle?

Looks like everyone does it per the canvas spec:

   http://software.hixie.ch/utilities/js/live-dom-viewer/?saved=2924

The CSS 'vertical-align' property works per the CSS spec on the browsers 
I tested, too, FWIW:

   http://software.hixie.ch/utilities/js/live-dom-viewer/?saved=2925

-- 
Ian Hickson               U+1047E                )\._.,--....,'``.    fL
http://ln.hixie.ch/       U+263A                /,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'

Received on Tuesday, 8 April 2014 16:51:30 UTC