Fw: what to do with invalid (or improper) mime-type resources

> It really can't be.  Whether the browser decides to cache a particular
> resource, and for how long, depends on many factors that cannot be
> specified, such as: how much disk space is available, how likely the
> ...
> not something that can be standardized.
> Similarly with loading.  Browsers need to be able to make intelligent
> decisions on what to load when, including deciding (perhaps based on
> ...
> might legitimately decide not to prefetch something no matter what the
> author thinks.

I certainly understand why browsers (and thus end users, often on
resource-limited devices/connections) *can* benefit from that kind of
algorithmic freedom on the browser's part.

But I would argue that such freedom should be restricted to elements that
are parser-inserted (in the markup). When we're talking about JavaScript
that programmatically creates an <object>, `Image`, <script>, or <link>
element for the express purpose of loading (or preloading) an external
resource, I think the author should be able to rely on a well-defined and
predictable set of behaviors.

The "no matter what the author thinks" part is highly disturbing. How can I
do anything effectively if the browser is always second-guessing what I'm
directly telling it to do?

What sucks for authors right now is that this type of on-demand loading (and
caching) behavior (which script/CSS loaders try to manage, mainly for
performance reasons, but also for code-maintenance reasons, like modularity)
is all over the map cross-browser. This makes our loaders significantly more
complex than they need to be, and also less efficient.
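To make that complexity concrete, here is a minimal, illustrative sketch
(not any real loader's API; all names are mine) of the bookkeeping a loader
has to do just to guarantee each URL is fetched exactly once, with the
actual DOM injection abstracted behind a callback:

```javascript
// Minimal sketch of the bookkeeping a script loader ends up doing.
// The `inject` callback is abstracted out so the dedupe logic is
// visible on its own; in a browser it would append a <script>
// element. All names here are illustrative, not a real API.
function makeLoader(inject) {
  var pending = {};   // url -> callbacks waiting on an in-flight load
  var loaded = {};    // url -> true once the resource has arrived

  return function loadOnce(url, cb) {
    if (loaded[url]) { cb(); return; }                   // already done
    if (pending[url]) { pending[url].push(cb); return; } // in flight
    pending[url] = [cb];
    inject(url, function onLoad() {
      loaded[url] = true;
      var cbs = pending[url];
      delete pending[url];
      cbs.forEach(function (f) { f(); });
    });
  };
}

// In a browser, `inject` might look like (illustrative only):
//   function inject(url, done) {
//     var s = document.createElement('script');
//     s.src = url;
//     s.onload = done;
//     document.head.appendChild(s);
//   }
```

And this sketch assumes the browser actually starts the fetch when the
element is created; if that is merely a "hint", even this much bookkeeping
can't be made reliable.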

> It might be a good idea to have some way for the author to hint to the
> UA that they think some resource is likely to be used soon

A "hint" is of no use in the use case being discussed. A script/CSS loader
needs determinate and predictable behavior, or it's completely useless.
<link rel=prefetch> is a "hint" system, and it does nothing to help this
use case. In fact, I'd argue it's dangerous, because it seems like it should
be helpful, but the spec's guidance to browsers is only a "hint" tied to an
entirely undefined "idle time", which means any author relying on such
gray-area behavior is playing browser roulette at best.
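To put the objection in code: here is a toy model (all names illustrative,
not a real API) of the position a loader is in when prefetch is only a hint.
Because nothing observable distinguishes "prefetched" from "ignored", every
resource still has to be requested for real; the hint can only (maybe)
change latency, never the loader's control flow:

```javascript
// Illustrative model of why a prefetch "hint" can't anchor loader
// logic: the hint exposes no completion signal, so the loader must
// issue the real request for every resource regardless, and can only
// hope the hint happened to warm the HTTP cache.
function planLoads(urls, hintedUrls) {
  return urls.map(function (url) {
    return {
      url: url,
      mustRequest: true,                            // a hint never removes this
      maybeCached: hintedUrls.indexOf(url) !== -1   // best-effort guess only
    };
  });
}
```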

> But we cannot require UAs to preload
> or cache anything at any particular point.  Preloading and caching are
> always tradeoffs,

Again, I just fundamentally disagree with this. Why can't we say that the
direct, proactive facilities/mechanisms available to JavaScript developers
are strictly and narrowly defined, based on direct and determinate behavior,
and leave the "browser-smart" algorithmic freedom to the less explicit,
markup-driven parts?

Can you imagine if the XHR facility had been specified using this
wishy-washy "hint" type language? "XHR.send() will tell the browser that
you'd like an Ajax request to go out at some point in the near future, if
the browser thinks it's a good idea, and there's not much else going on. And
the response will probably come back soon, but the browser is free to delay
handing you back the response if it feels like you're better off without it
at that moment."

> Actually, there's significant innovation about loading and caching
> right now.  In just the past year or two, browsers have gotten much
> smarter about preloading resources, like gaining the ability to load
> scripts in parallel.  Someone is working on pipelining for Firefox.
> I've also seen people talk about possibly prioritizing scripts and
> styles over images in caches, because scripts and styles block page
> rendering and images don't.  Browsers have been drastically increasing
> their cache sizes.  Etc.

OK, fair enough, some of the most useful advances have happened in the last
year or two. And yes, there are other cool things they can still experiment
with.

FWIW, I'd be in favor of browsers experimenting with alternate
cache-management strategies (some of what you mention here), but I think
*any* such experimentation that leaves web authors (resource loaders) with
further indeterminate behavior will only create more problems than it
solves.

It seems like the uneasy tension here is that the spec wants to let browsers
innovate in these features, which (by virtue of the cross-browser
unpredictability of "undefined" behavior and experimentation) handcuffs
script developers who want to innovate in the same areas. Can't there be a
better balance, where the needs of script-based resource loaders get some
favor from the spec process, instead of leaving everything open only in
favor of browsers (which most of us don't have much influence over)?


Received on Monday, 20 December 2010 05:08:08 UTC