- From: Justin Novosad <junov@google.com>
- Date: Fri, 30 May 2014 11:44:47 -0400
- To: Glenn Maynard <glenn@zewt.org>
- Cc: whatwg <whatwg@lists.whatwg.org>, Nils Dagsson Moskopp <nils@dieweltistgarnichtso.net>
Backtracking here. The "just do it in script" argument saddens me quite a bit. :-(

I don't agree that it is okay to be in a state where web apps have to depend on script libraries that duplicate the functionality of existing Web APIs. We put a lot of effort into avoiding non-orthogonal APIs in order to keep the platform lean; in that sense it is hypocritical to keep web APIs in a state that forces web developers to use scripts that are non-orthogonal to those APIs. The browser has a PNG encoder, and it is exposed in the API. So why should web developers be forced to provide their own scripted codec implementation?!

I understand that we should not add features to the Web platform that can be implemented efficiently in client-side code using existing APIs. But where do we draw the line? An extreme interpretation of that argument would be to stop adding any new features to CanvasRenderingContext2D, because almost anything can be polyfilled on top of putImageData/getImageData with an efficient asm.js (or other) implementation. In fact, why do we continue to implement any rendering features? Let's stop adding features to DOM and CSS, because we could just have JS libraries that dump pixels into canvases! Pwshhhhhh (mind blown)

My point is, we need a proper litmus test for the "just do it in script" argument because, let's be honest, a lot of new features being added to the Web platform could be scripted efficiently, and that does not necessarily make them bad features.

Also, there are plenty of browser/OS/HW combinations for which it is unreasonable to expect a scripted implementation of a codec to rival the performance of a native implementation. For example, browsers are not required to support asm.js (which is kind of the point of it). More generally, asm.js or any other script performance-boosting technology may not support the latest processing hotness that may be used in browser implementations (SIMD instructions that aren't mapped by the script compiler, CUDA, ASICs, PPUs, who knows...).

-Justin

On Thu, May 29, 2014 at 8:54 PM, Glenn Maynard <glenn@zewt.org> wrote:

> On Thu, May 29, 2014 at 5:34 PM, Nils Dagsson Moskopp <
> nils@dieweltistgarnichtso.net> wrote:
>
> > > and time it takes to compress.
> >
> > What benefit does it give then if the result is the same perceptually?
>
> Time it takes to compress. There's a big difference between waiting one
> second for a quick save and 60 seconds for a high-compression final export.
>
>
> On Thu, May 29, 2014 at 7:31 PM, Kornel Lesiński <kornel@geekhood.net>
> wrote:
>
> > I don't think it's a no-brainer. There are several ways it could be
> > interpreted:
>
> The API is a no-brainer. That doesn't mean it should be done carelessly.
> That said, how it's implemented is an implementation detail, just like the
> JPEG quality parameter, though it should probably be required to never use
> lossy compression (strictly speaking this may not actually be required
> today...).
>
> FYI, I don't plan to spend much time arguing for this feature. My main
> issue is with the "just do it in script" argument. It would probably help
> for people more strongly interested in this to show a comparison of
> resulting file sizes and the relative amount of time it takes to compress
> them.
>
> --
> Glenn Maynard
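A minimal sketch of the two paths being contrasted in this thread: the native PNG encoder already exposed through canvas, versus encoding the raw pixels in script. Note that passing a quality/compression-effort argument for `image/png` is the hypothetical feature under discussion (the third `toBlob()` argument is only defined for lossy formats such as JPEG today), and `encodePng()` / `saveBlob()` are placeholders for a scripted encoder library and application code, not real APIs.

```js
const canvas = document.querySelector('canvas');

// 1. Native path: the browser's built-in PNG encoder.
canvas.toBlob(blob => {
  // blob is a PNG produced by the browser's encoder; the idea being
  // debated is letting the argument below trade encode time for size.
  saveBlob(blob);            // saveBlob() is a placeholder for app code
}, 'image/png', 0.95);       // compression-effort argument: hypothetical for PNG

// 2. "Just do it in script" path: pull raw pixels and encode in JS.
const ctx = canvas.getContext('2d');
const pixels = ctx.getImageData(0, 0, canvas.width, canvas.height);
// encodePng() stands in for a scripted (e.g. asm.js) PNG encoder library.
const scripted = new Blob([encodePng(pixels, { level: 9 })], { type: 'image/png' });
saveBlob(scripted);
```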
Received on Friday, 30 May 2014 15:45:12 UTC