Re: AppCache post-mortem?

On 17/04/2013 11:07, Dominique Hazael-Massieux wrote:
> The goal is not to distribute blame to specific people or organizations,
> but rather to find where we are structurally ill-prepared to deal with
> providing such fundamental technologies.

For the record, I want to be clear that I'm taking the above seriously 
and am not at all ascribing blame. In any case, this is a high-profile 
feature that got implemented and shipped universally; the way I see it, 
we all screwed up there.

> Such a post-mortem would answer for example the following questions:
> * why has it taken so long between the introduction of the technology
> (2007) and realization it was deeply broken (2011?)?

To begin with, the problem space is far from trivial. AppCache relies 
on parts of the platform that are largely undefined, notably the fetch 
algorithm and how caching actually works. Something like fetching has 
far-reaching, pervasive effects across the stack, and rather than 
extricating those concerns so as to make the problem addressable, if 
not as a component, then at least in a semi-insulated manner, AppCache 
was specified by directly modifying every aspect it influenced.

The first effect of this was to make the feature very hard to review 
in the specification. For instance, one implementer's first attempt 
had bugs simply because they had missed one of the scattered 
modifications that AppCache entailed.

As if that were not enough trouble, AppCache compounded the issue by 
introducing too much magic. The number of things that happen when you 
type just a few characters into a manifest is staggering, as the 
sketch below shows. I'm a huge fan of DWIM myself, but there's a limit 
beyond which magic just turns you into Evil Willow.
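
To make that concrete, here is roughly the smallest possible setup. 
The file names are made up, but the behaviour described is the spec's. 
In the page:

    <html manifest="app.manifest">

And in app.manifest:

    CACHE MANIFEST
    # v1 -- editing this comment is the standard trick to force an
    # update, because updates only happen when the manifest's bytes
    # change

    CACHE:
    app.js
    app.css

    FALLBACK:
    / offline.html

    NETWORK:
    # without this wildcard, requests for anything not listed above
    # simply fail, even while online
    *

On top of what the comments call out, the page carrying the manifest 
attribute is itself silently cached as a "master entry", and 
everything is then served cache-first even when the user is online. 
None of that is visible in the syntax.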

The spec being hard to review also meant that implementations were of 
very uneven quality for a rather long while. This made it difficult to 
separate implementation issues from specification issues.

So at the end of the day, if you pile all of the following together:

  - hard to separate/layer
  - hard to spec
  - hard to review
  - hard to implement
  - hard to test

You get a long delay before someone goes "Hang on, it's neither me nor 
my browser that's broken".

> What approaches would make this problem less likely to repeat?

I think the first thing to take away is that, whether or not you 
believe in modularity for the web platform, if you have to modify 
large swathes of a large spec (e.g. HTML) to add just the one feature, 
however powerful that feature may be, then you should probably start 
seeing red flags.

If implementers are not noticing some parts of a feature in a spec, then 
that's another red flag.

Also, magic is great when it emerges from the simple interaction of 
simple components. But if you can't understand where the magic is coming 
from, then assume it's dark magic. That's why one of the principles 
behind NavCon is that it should be a "Bring Your Own Unicorn API".
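
To sketch what "bring your own unicorn" means in practice (caveat: the 
Navigation Controller draft is very much in flux, so read every name 
below as a placeholder rather than as a settled API, and 
isCachedLocally() and responseFor() are hypothetical helpers standing 
in for whatever strategy you choose to write):

    // In the page: ask for a controller to be installed for a scope
    // (registration call as in the early explainer; likely to change).
    navigator.controller.register("/app/*", "controller.js");

    // In controller.js: every fetch in scope is handed to your script.
    this.addEventListener("fetch", function (e) {
      if (isCachedLocally(e.request.url)) {
        // You decide what the response is; no declarative magic.
        e.respondWith(responseFor(e.request));
      }
      // If you do nothing, the request falls through to the network.
    });

All the caching behaviour lives in script that you can read and debug: 
whatever magic you get is magic you wrote.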

> * why has it taken almost 2 years since that "realization" and the
> appearance of alternative proposals?

I would say that stems from a variety of factors, mostly human: not 
wanting to interfere with the ongoing development of the HTML 
specification, finding the energy to untangle the conceptual mess that 
stands in the way of even thinking about an alternative design, and so 
on.

That's why unglamorous work like Anne's on Fetch is so important: it 
makes introducing new features at higher layers a *lot* easier.

> * does it reflect a broader desktop-focused approach to the development
> of Web technologies?

I don't think so; it's no better suited to desktop than it is to 
mobile. I do think that the feature might have been different if more 
and better use cases had been provided, but I'm not sure that we 
really had that knowledge back then.

-- 
Robin Berjon - http://berjon.com/ - @robinberjon
