- From: Jonas Sicking <jonas@sicking.cc>
- Date: Wed, 28 Aug 2013 22:20:59 -0700
- To: Ryosuke Niwa <rniwa@apple.com>
- Cc: whatwg List <whatwg@whatwg.org>
Hi Ryosuke,

Based on the feedback here, it doesn't sound like you are a huge fan of the original proposal in this thread. At this point, has any implementation come out in support of the proposal in this thread as a preferred solution over noexecute/execute()?

The strongest support I've seen in this thread, though I very well could have missed some, is "it's better than the status quo". Is that the case?

/ Jonas

On Wed, Aug 28, 2013 at 7:43 PM, Ryosuke Niwa <rniwa@apple.com> wrote:
> On Jul 13, 2013, at 5:55 AM, Andy Davies <dajdavies@gmail.com> wrote:
>
>> On 12 July 2013 01:25, Bruno Racineux <bruno@hexanet.net> wrote:
>>
>>> On browser preloading:
>>>
>>> There seems to be an inherent conflict between 'indiscriminate' pre-parsers/PreloadScanner and "responsive design" for mobile. Responsive design mostly implies that everything needed for a full-screen desktop layout is provided in markup to all devices.
>>
>> The pre-loader is a trade-off: it aims to increase network utilisation by speculatively downloading resources it can discover.
>>
>> Some of the resources downloaded may not be used, but with good design and mobile-first approaches hopefully this number can be minimised.
>>
>> Even if some unused resources get downloaded, how much does it matter?
>
> It matters a lot when you only have a GSM wireless connection and are barely loading anything at all.
>
>> By starting the downloads earlier, connections will be opened sooner, allowing the TCP congestion window to grow sooner. Of course this has to be balanced against visitors who might be paying to download those unused bytes, and whether the unused resources are blocking something on the critical path from being downloaded (I believe some preloaders can re-prioritise resources if they need them before the preloader has downloaded them).
>
> Exactly. I'd like to make sure whatever API we come up with gives enough flexibility for the UAs to decide whether a given resource needs to be loaded immediately.
>
>
> On Jul 12, 2013, at 11:56 AM, Kyle Simpson <getify@gmail.com> wrote:
>
>> My scope (as it always has been), put simply: I want (for all the reasons here and before) to have a "silver bullet" in script loading, which lets me load any number of scripts in parallel and, to the extent that is reasonable, be fully in control of what order they run in, if at all, responding to conditions AS THE SCRIPTS EXECUTE, not merely as they might have existed at the time of the initial request. I want such a facility because I want LABjs to continue to be a best-in-class, fully capable script loader that sets the standard for best-practice on-demand script loading.
>
> Because of the different network conditions and constraints various devices have, I'm wary of any solution that gives full control over when each script is loaded. While I'm sure large corporations with lots of resources will get this right, I don't want to provide a preloading API that's hard to use for ordinary Web developers.
>
>
> On Jul 15, 2013, at 7:55 AM, Kornel Lesiński <kornel@geekhood.net> wrote:
>
>> There's a very high overlap between module dependencies and the <script dependencies> proposal. I think at the very least it would be useful to define <script dependencies> in terms of ES6 modules, or even abandon the markup solution to avoid duplicating features.
>>
>> ES6 modules, however, do not solve the performance problem. In fact they would benefit from the UA having a list of all dependencies up front (otherwise a file's dependencies can only be discovered after that file is loaded, which costs as many RTTs as the height of the dependency tree).
>>
>> So I think that eventually ES6 modules + link[rel=subresource] could be the answer. The <link> would expose URLs to (pre)load for performance, but modules would handle actual loading/execution for flexibility and reliability.
>
> Yes, we should definitely consider how this preloading API works with ES6 modules.
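
For illustration, a minimal sketch of the combination Kornel describes, pairing the link[rel=subresource] hint discussed in this thread with the module syntax that later shipped as <script type="module">. Neither was finalized in 2013, so this is only an assumption about how the two pieces could fit together; the file names and the init export are made up for the example. The <link> hints let the preload scanner fetch the module files early, while the module loader keeps control of execution:

    <!-- expose the module graph's URLs so the preload scanner can fetch them early -->
    <link rel="subresource" href="/js/app.js">
    <link rel="subresource" href="/js/util.js">

    <!-- the module loader still decides when and in what order the code runs -->
    <script type="module">
      import { init } from "/js/app.js";  // app.js itself imports /js/util.js
      init();
    </script>
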
>
> On Jul 22, 2013, at 3:22 PM, Jonas Sicking <jonas@sicking.cc> wrote:
>
>> Having the load event fire any time we are done with a network request also seems beneficial, rather than having most APIs use "load" whereas this would use "preload".
>>
>> Generally speaking, "load" means "loaded and processed". The 'noexecute' flag would change what the "and processed" piece includes.
>
> I don't think it'll be confusing if the script has noexecute. We can even call it noautoexecute if we wanted.
>
>> But I'm fine either way here. The same question and risk of confusion seems to exist with the "whenneeded" attribute. In general "whenneeded" seems very similar to "noexecute", but with a little bit more stuff done automatically, for better or worse.
>
> I like the simplicity of noexecute and execute(). However, I'm a little worried that it doesn't provide any information as to how important a given script is, so Web browsers have no choice but to request all scripts immediately.
>
> I'd like to eventually provide APIs that allow authors to codify which scripts are "vital" so that Web browsers can properly prioritize each script request.
>
> Implementation-wise, noexecute/execute() will be extremely easy to implement in WebKit.
>
>> I.e. something like:
>>
>> <script src="script1.js" id="s1">
>> <script src="script2.js" dependencies="s1">
>>
>> would run correctly in downlevel browsers, but would force the scripts to be blocking.
>>
>> <script src="script1.js" id="s1" async>
>> <script src="script2.js" async dependencies="s1">
>>
>> would give you performant non-blocking behavior in downlevel browsers, but at the expense of the scripts not always running in the right order.
>
> Use defer instead?
>
> - R. Niwa
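
For reference, a sketch of the defer alternative Ryosuke suggests, which is existing behavior rather than a new proposal: deferred scripts download in parallel without blocking the parser and run in document order once parsing finishes, so ordering is preserved without blocking, though defer expresses only document order, not an explicit dependency graph.

    <script src="script1.js" defer></script>
    <script src="script2.js" defer></script>
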
Received on Thursday, 29 August 2013 05:21:57 UTC