Extensible, performant Web

Hey,

I'm new to this group; I joined after the Manifesto was published. I'd like
to start out by saying that I'm pretty excited about the leading principles
of enabling low-level APIs and prototyping high-level features in
JavaScript, which would let us iterate on them and revert bad design
decisions (something that is very hard today).

# But...

At the same time, there are a few things about that model that scare me:
problems that I believe are being ignored in the myriad of blog posts and
presentations on the subject. I think it'd be best to tackle them up front,
so I'll start by outlining them.

## Loading a lot of blocking JS

The approach of defining new platform capabilities in JS is extremely
powerful, and it has been proven successful. The poster child of this
approach is jQuery's selector engine, which was eventually standardized
(kind of) as querySelectorAll.

But when looking at that example, we should remember that for a long while
(and for some, still) using jQuery's selector engine meant loading jQuery
up front, before your application code could start running.
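
To make the pattern concrete, here's a rough sketch (not jQuery's actual
internals) of what that division of labor looks like from application code:

    // Sketch only -- jQuery's real implementation is far more involved.
    // The pattern: prototype the feature in a library, and prefer the
    // native API once browsers ship it.
    function select(selector, context) {
      context = context || document;
      if (context.querySelectorAll) {
        return context.querySelectorAll(selector); // the standardized descendant
      }
      return jQuery(selector, context).get();      // fall back to the JS engine
    }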

When we're talking about defining new HTML elements in JS, we need to load
the defining scripts before those elements become usable. That has
performance implications, and we need to do our best to mitigate them up
front.
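
As a rough illustration (x-tabs and selectTab are made-up names, and the
registration API in the Custom Elements drafts is still in flux):

    // Until the defining script has run, the new tag is just an element
    // with no behaviour attached to it.
    var tabs = document.createElement('x-tabs');
    console.log(typeof tabs.selectTab);   // "undefined" -- nothing upgraded it yet

    // Which is why pages end up putting a blocking script in <head>:
    //   <script src="x-tabs.js"></script>
    // so that <x-tabs> in the body is usable as soon as it is parsed.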

## Resource loading

Another high-profile example of the polyfill approach is picturefill, which
polyfills the picture element. But when people say "There was a responsive
images problem, and then Web devs solved it with picturefill", they ignore
the performance implications incurred by using JS to load resources.
Currently, polyfills suck at resource loading. When we use polyfills to
download resources, we hinder the preloader, and our resources start
loading later than they would have if they were declared in markup. That is
true for HTTP/1.1, and significantly more so for SPDY & HTTP/2.0.
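
As a minimal sketch (nothing like picturefill's real code, and the data-*
attributes are made up), here is why JS-driven image selection starts
downloads late:

    // Hypothetical markup: <span data-small="s.jpg" data-large="l.jpg"></span>
    // The chosen URL only exists once this script executes, so the preload
    // scanner can't start the download while it is still parsing the HTML.
    function pickAndLoad(container) {
      var dpr = window.devicePixelRatio || 1;
      var src = (window.innerWidth * dpr) > 640 ?
          container.getAttribute('data-large') :
          container.getAttribute('data-small');
      var img = document.createElement('img');
      img.src = src;                  // the request starts here, not at parse time
      container.appendChild(img);
    }

    Array.prototype.forEach.call(
        document.querySelectorAll('span[data-small]'), pickAndLoad);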

## NavigationController & First page load

When discussing how to polyfill features that are needed today and involve
resource loading (e.g. CSP, responsive images), NavigationController comes
up as a candidate low-level API for that purpose.

NavigationController was designed as a low-level API for offline Web apps,
and for that purpose it is great. But it was not designed to answer the
needs of polyfilling other resource-loading features, and in particular it
is not designed to operate on the initial page load.
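
Roughly (I'm going from memory of the draft here, so treat the method name
and semantics as approximate):

    // Approximate sketch of the draft registration API.
    navigator.registerController('/*', '/ctrl.js').then(function(controller) {
      // The controller governs *subsequent* navigations under the scope.
      // The page that ran this registration -- the user's first, cache-cold
      // visit -- was loaded without it, so any resource-loading or CSP-like
      // policy the controller implements never applied to that load.
    });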

With ~20% of page views arriving without the site's resources in the
cache[1], we cannot rely on NavigationController to protect users from XSS,
nor to serve them the right images.


# What can we do about that?

I'm not claiming to have *the* solution to these issues, but I have an
idea: We need a way to "install" controllers/polyfills.

IMO, if we had a way to install controllers/polyfills/frameworks once and
reuse them across sites, that'd be a good start towards solving most of the
above problems.
It'd give us the ability to share JS code between sites, and the
performance cost of blocking JS at the top of the page would hit the user
only once.
It might also enable NavigationController to support a controller on the
first page load (even though I might be ignoring other issues this would
provoke).

I know what you're saying: "That's what the browser's cache is for". I
don't want to be blunt, but you're wrong :P
We've seen this approach fail with jQuery[2][3]: the myriad of versions Web
devs use, combined with the multiple CDN options, resulted in fragmentation
that makes any cross-site sharing of cached JS largely coincidental.

# How?

A natural model is the Debian/Ubuntu repository model, only with the
browser as the OS, and JS code as the packages.

Imagine a world where browsers come with all the versions of popular Web
frameworks installed, and when a new version comes out, it is automatically
fetched & installed. An evergreen JS framework landscape, if you will.
Web pages would have script tags that include a URL, but also a "package
name", which the browser can use if it has that package installed. If it
doesn't, it can install the package on the spot, or later on (based on
performance considerations).
Installed JS code would not come from the sites themselves, but from the
repositories, which are deemed secure.
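
Purely hypothetical syntax, just to make the idea concrete (neither the
package attribute nor a navigator.packages API exists anywhere):

    // A page names both a URL (the fallback) and a package identity:
    //   <script src="https://example.com/js/jquery-1.9.1.min.js"
    //           package="jquery@1.9.1"></script>
    // A browser with "jquery@1.9.1" installed from a trusted repository can
    // skip the network entirely; otherwise it fetches from the URL and may
    // install the package for next time.
    //
    // The same idea, as an imaginary feature-detected loader:
    function loadPackage(name, fallbackURL) {
      if (navigator.packages && navigator.packages.has(name)) {  // hypothetical API
        return navigator.packages.load(name);
      }
      var s = document.createElement('script');
      s.src = fallbackURL;
      document.head.appendChild(s);
    }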

Who will control these repositories? In the Debian model, the user decides
which repositories to use, which basically means the OS decides for them.
In the Web's case, that'd be the browser. If you trust your browser vendor
with auto-updates, you should be able to trust it with JS code as well.

# Is it realistic?

I don't know. You tell me.


I'd appreciate any thoughts on these issues, on this proposed solution, or
on any other.

Thanks,
Yoav

[1] http://www.stevesouders.com/blog/2012/03/22/cache-them-if-you-can/
[2] http://statichtml.com/2011/google-ajax-libraries-caching.html
[3] http://www.stevesouders.com/blog/2013/03/18/http-archive-jquery/
