Re: [ScriptYielding] setImmediate clamping returns.

Le 13/08/2013 05:16, Kyle Simpson a écrit :
>> The content of the callback is irrelevant to this discussion.
> I think it's entirely relevant, which is why I keep bringing it up.
>
> If we are talking purely about the literal `(function f(){ setTimeout(f,0); })()` where there is nothing else but just a spinning callback that keeps calling itself over and over, and there's no other "contents" at all, then this is an absurd niche case.
I don't really care about the literal `(function f(){ setTimeout(f,0); })()`. 
When I write "(function f(){ setTimeout(f,0); })()", I'm referring to a 
family of problems, not just this one expression.
In bug 123273, the reported code was:
function makeStatic() {
  if (ie4) {...}
  else if (ns6) {...}
  else if (ns4) {...}
  setTimeout("makeStatic()", 0);
}
This code is part of the "(function f(){ setTimeout(f,0); })()" family 
of problems I want to see solved for setImmediate.

The very expression "(function f(){ setTimeout(f,0); })()" is indeed an 
absurd niche case; the family of "(function f(){ setTimeout(f,0); })()" 
problems isn't.
In this family, some bodies of f are long, some are short, but I want 
to solve the entire family; that's why I keep saying that the body is 
irrelevant.
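For context, the clamping being discussed can be sketched roughly like this (the 4 ms floor and the level-5 nesting threshold are the values from the HTML spec of this era; `effectiveDelay` is an illustrative name, not a real API):

```javascript
// Illustrative model of the HTML timer clamping rule (not a real API):
// once timers are nested more than 5 levels deep, a requested delay of
// 0 ms is silently raised to a 4 ms minimum.
function effectiveDelay(requestedMs, nestingLevel) {
  const CLAMP_MS = 4;
  const CLAMP_AFTER_LEVEL = 5;
  if (nestingLevel > CLAMP_AFTER_LEVEL) {
    return Math.max(requestedMs, CLAMP_MS);
  }
  return Math.max(requestedMs, 0);
}

// A self-rescheduling setTimeout(f, 0) loop quickly crosses the
// threshold, so every subsequent round waits at least 4 ms.
console.log(effectiveDelay(0, 1)); // 0
console.log(effectiveDelay(0, 6)); // 4
```

Whatever the body of f does, the loop pays this per-round floor once it is deep enough, which is why the family matters independently of the callback's contents.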

> The JS engine should be smart enough to see that there's nothing else in the function except calling itself, and just remove that code altogether, or at the very least, extremely throttle only that timer loop.
Fair enough, this expression is a no-brainer (though beware of global 
overriding).
But solving the family of "(function f(){ setTimeout(f,0); })()" 
problems boils down to solving the halting problem (maybe that's what 
Jonas Sicking meant; I'm only realizing it now).

> *I* would say all of these should just be optimized out and have no operational effect in the browser at all:
>
> 1. function f() { setTimeout(f,0); }
> 2. function f() { setImmediate(f); }
> 3. function f() { requestAnimationFrame(f); }
>
> Does that answer your question?
No, because I was referring to the family of problems; reading back, I 
admit I wasn't clear on that point. Sorry about that.

> --------
>
> OTOH, "realism" says that there's always more contents in these tight loop callbacks besides the self-reschedule.
>
> I'm also operating under the assumption that the mechanics of actually doing the rescheduling of the loop, and the calling of the callback, all of that is relatively constant and somewhat the same "cost" in all browsers. If that's not true, and say Chrome has a much easier time of scheduling timers than FF, that's a detail I'm unaware of, and why I was asking before if there really is a reason why these tight timer loops would inequitably disfavor some specific browser.
>
> If the timer self-rescheduling is a moot performance cost across all browsers, what else could cause one browser to suffer more harm than another with the same code, EXCEPT for the other contents of the callbacks?
>
> And if there *are* other contents of the callbacks, and timing is different, then the focus should be on those contents, not the timers.
>
> --------
>
> Now, to your other scenario, that some code in a "high profile site" might have equally bad performance in all browsers, but that the bug will only be filed against FF, and that this alone will be sufficient to put enough pressure on FF to act "badly", well, I just think that's poor governance/management.
I agree with you, but neither your opinion nor mine would be relevant in 
their decision-making process. Again, they wouldn't be making the 
decision out of free will, but constrained by their environment (we can 
discuss whether free will exists at all, but on another mailing list 
maybe ;-) )

> If every browser performs roughly the same, it would only be the "politics" (which you eschewed) that could put more pressure on FF than the other browsers.
That's not politics, that's economics. I'm not eschewing, I'm reframing. 
The distinction is subtle, but it really matters. Browser vendors are 
actors plunged into an environment, trying to optimize metrics.
An example closer to web devs: polyfills aren't an invention out of 
thin air. Web devs are also plunged into an environment. They try to 
maximize the "number of people who can see my website" metric and 
minimize the "amount of time I'll spend on my website" metric. Coding 
to standards helps a lot with the second metric. New standards usually 
help the second metric too, so it makes sense to code against them as 
soon as possible. But the environment contains a significant number of 
users on older versions without the new standard, so polyfills let you 
code against the newer standard while still optimizing the first metric 
(which could be endangered by coding against new standards without 
polyfills).
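As a concrete illustration of this environment-driven pattern, here is a minimal sketch of the kind of setImmediate shim web devs ended up writing (assumptions: `MessageChannel` is available, as in browsers and modern Node; `setImmediateShim` is an illustrative name, not a standard API):

```javascript
// Minimal setImmediate-style shim over MessageChannel (illustrative,
// not production-grade: no error handling, no clearImmediate).
const channel = new MessageChannel();
const queue = [];
channel.port1.onmessage = () => {
  const cb = queue.shift();
  if (cb) cb();
};
// In Node, unref the port so it doesn't keep the process alive;
// a browser polyfill would omit this line.
if (typeof channel.port1.unref === 'function') channel.port1.unref();

function setImmediateShim(cb) {
  queue.push(cb);
  // postMessage schedules a task without setTimeout's 4 ms clamp.
  channel.port2.postMessage(null);
}

let ran = false;
setImmediateShim(() => { ran = true; });
console.log(ran); // false: the callback runs in a later task, never synchronously
```

The shim exists only because the environment (clamped timers, users on browsers without setImmediate) makes writing it the rational move, which is exactly the point about metrics and constraints.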
I'm not here to debate whether polyfills are a good or bad thing, just 
to say that they came to exist as a result of optimizing metrics under 
environmental constraints.

I wish I could share the pictures I have in my head of this environment 
instead of typing my ideas entropy bit by entropy bit (I've read that 
each character of English text carries about one bit of entropy) :-/

And like any Darwinian environment, you can't really ask a particular 
actor to make a decision that would go against its own existence.
That's not politics. Browser vendors don't make autonomous decisions, 
they react to an environment and act rationally in the direction of 
their survival.


>> But what sort of evil can browsers do? drop frames? They already do that. They actually happen to drop less even if rAF is misused than if it's not used at all. Please describe how browsers could do unspeakable evils to rAF, because I don't see it.
> If some rAF callback schedulings get dropped altogether, not just delayed by a few animation frames worth of time, and so code I asked to run just mysteriously was ignored, that would be potentially pretty "evil".
What is the incentive for browsers to do such a thing? A browser that 
does that breaks a website that works in other browsers. That'd be a 
footgun, so browsers won't do that.

> Especially if I was "abusing" rAF and doing other tasks (like remotely beaconing user actions or something) in that loop besides visual updates.
>
> But, I really don't think it's fruitful to postulate on what "evils" might be conceived.
That's the heart of the debate. The risk with setImmediate is that the 
"evil" denatures the feature entirely. There is no such risk for rAF.

> The point I was making is that rAF has some potential abuses, but just because it has potential abuses doesn't mean it was an ill-conceived feature. It has lots of good uses, and the cases where it could be abused are just part of playing that game (namely the blame sits squarely with the developer, not the browser).
There are always good and bad uses of a feature. The question is: what 
can/should browsers do when the bad cases happen?
As I said, rAF is so well designed that any attempt to denature it would 
be a footgun, so browsers have strong incentives to leave it as it is 
and improve something else (like their JS engine or DOM bindings).
The situation is different for setImmediate. There is a temptation to 
fix it by denaturing it, and browsers wouldn't hesitate long. It won't 
happen, so this is a hypothetical bet, but I'm willing to bet Microsoft 
would be forced to denature the feature as well.

> I think the potential abuses of setImmediate() spinning on itself and chewing up CPU unnecessarily, and that this somehow creates a politically-inconvenient situation that "hurts" one browser… all that seems to me well in line with the risks we've already set lots of precedent on with other APIs (namely, rAF).
I disagree. Please reconsider the fruitfulness of postulating which 
evils might be conceived, and show that the risk of a browser denaturing 
rAF is as big as it is for setImmediate. Nothing you've said, nor 
anything I've read, indicates that rAF is under such a threat.

> I don't see how setImmediate() creates some much bigger risk
The risk isn't necessarily bigger; it's of a different nature. For 
setImmediate, the risk is being completely denatured, making it 
effectively a duplicate of setTimeout 0. That risk doesn't exist for 
rAF, or at least it hasn't been shown to.

> that justifies the amount of push-back it's getting. And to be explicit, I don't buy the history of setTimeout as that sufficient evidence.
setImmediate is sold as a better setTimeout 0. On that basis alone, it's 
natural to wonder whether what happened to setTimeout will happen to 
setImmediate too.
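To make the "better setTimeout 0" pitch concrete, here is a back-of-the-envelope model of the cumulative cost a clamped loop pays (the 4 ms / level-5 numbers are assumed from the HTML spec of this era; `modelLoopDelayMs` is illustrative, not a real API):

```javascript
// Rough model of the minimum scheduling delay accumulated by a
// self-rescheduling loop of `iterations` rounds, given a clamp that
// kicks in past a certain nesting level (illustrative numbers only).
function modelLoopDelayMs(iterations, clampMs, clampAfterLevel) {
  let total = 0;
  for (let level = 1; level <= iterations; level++) {
    if (level > clampAfterLevel) total += clampMs;
  }
  return total;
}

// 1000 rounds of setTimeout(f, 0): 995 clamped rounds x 4 ms each.
console.log(modelLoopDelayMs(1000, 4, 5)); // 3980
// The same loop on an unclamped setImmediate pays no forced waiting.
console.log(modelLoopDelayMs(1000, 0, 5)); // 0
```

Seconds of dead waiting versus none is the whole sales pitch, which is also why the pressure that denatured setTimeout could plausibly land on setImmediate.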

>> A bug will be filed on Firefox and the perf issue will be fixed quickly
> Or Firefox will somehow be "forced" to neuter rAF in some evil way because fixing the other performance issues is too complicated or will take too long and they can't take the "risk" of loss of browser share. I hope we don't see that day.
Please describe the evil that could fix one rAF-misbehaving website 
while leaving all well-behaved websites unbroken. I personally don't 
see it.
That's at the heart of comparing the rAF and setImmediate cases.

David

Received on Tuesday, 13 August 2013 13:08:23 UTC