
Re: <img src="..." defer>

From: James Simonsen <simonjam@google.com>
Date: Thu, 18 Apr 2013 21:22:34 -0700
Message-ID: <CAPVJQi=Nmw0AGV+gu9EZm+wzqmAm8Ns2rruHf4NqVwfoTcGBCA@mail.gmail.com>
To: Jake Archibald <jakearchibald@google.com>
Cc: public-web-perf <public-web-perf@w3.org>
On Thu, Apr 18, 2013 at 6:22 AM, Jake Archibald <jakearchibald@google.com> wrote:

> Browsers that don't have accurate info on viewport entry should trigger
> the download sooner rather than fail to start downloading an image that
> should be in-view. The spec makes provision for this. If the browser has
> genuinely no idea of the scroll position of the viewport, it would opt to
> download images as soon as they get layout (display:none images never get
> layout, so they'd be skipped).
> Whether or not browsers are capable of knowing what's in their viewport,
> developers are using hacky JS to try to create this behaviour and do
> conditional loading. This is a great opportunity to deliver this feature
> in a network- and device-sensitive way.
> Requiring an image to download _at the latest_ when it's in view is simply
> common sense; otherwise we're saying images are entirely optional.
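[For context, the "hacky JS" conditional loading Jake refers to is usually built on scroll handlers plus geometry checks. A minimal sketch of that pattern; the names `isNearViewport` and `lazyLoadImages`, and the `data-src` convention, are illustrative, not from any particular library:]

```javascript
// Decide whether a bounding rect is within `margin` pixels of a viewport
// of the given height. Kept pure so the load policy is easy to test.
function isNearViewport(rect, viewportHeight, margin) {
  return rect.top < viewportHeight + margin && rect.bottom > -margin;
}

// Browser-side wiring: copy data-src into src for images that are close
// to the viewport, so off-screen images are deferred until scrolled near.
function lazyLoadImages(margin) {
  var images = document.querySelectorAll('img[data-src]');
  for (var i = 0; i < images.length; i++) {
    var img = images[i];
    if (isNearViewport(img.getBoundingClientRect(), window.innerHeight, margin)) {
      img.src = img.getAttribute('data-src');
      img.removeAttribute('data-src');
    }
  }
}
```

[The page has to call `lazyLoadImages` on every scroll and resize event, which is exactly the work a declarative `defer` attribute would push into the browser, where scroll position and network conditions are actually known.]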

I think we're talking past each other here. :( I fully get your point.

Obviously it's in everyone's best interest that the image download before
the user gets to it. Browsers will strive to make sure that happens.

I'm saying spec'ing anything with "MUST" and "viewport" makes it super hard
to be exactly compliant and to test for compliance.

Separately, I think MUST might be a little strong. Let me give you another
example... Imagine the user quickly fling scrolls past hundreds of pages.
Must we fetch every image along the way just because each was in the
viewport for 1/60th of a second? I'd argue it'd be better to skip them.

> The BBC & Guardian look at the viewport and set the main article image src
> to one more appropriate to the screen width. Unfortunately, the initial src
> has already started downloading by this point, so they end up downloading
> two images. They could handle the image purely with JS, but they want a
> non-JS fallback without doubling every image up with a <noscript>.
> This particular issue will eventually be solved by adaptive imagery. But
> other image polyfills based on device capabilities will continue to have
> the same issue.
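[The src-swapping Jake describes amounts to picking an asset variant from the viewport width after the HTML has already committed to a default src. A rough illustration; the breakpoints, URL scheme, and `data-base` attribute here are invented for the example:]

```javascript
// Pick an image variant from the viewport width. Breakpoints and the
// "-large/-medium/-small" URL pattern are hypothetical.
function pickArticleImageSrc(basePath, viewportWidth) {
  if (viewportWidth >= 1024) return basePath + '-large.jpg';
  if (viewportWidth >= 640) return basePath + '-medium.jpg';
  return basePath + '-small.jpg';
}

// Browser-side patch-up. By the time this script runs, the preloader
// has typically already started fetching the markup's default src, so
// the page pays for two downloads -- the double-fetch described above.
function swapArticleImage(img) {
  img.src = pickArticleImageSrc(img.getAttribute('data-base'), window.innerWidth);
}
```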

Ok, I see.

Why wouldn't we rely on an adaptive-imagery-like technique all of the time,
though? It seems like if that works now, we should just continue to use it
in future situations. A polyfill is always going to have the <noscript>
problem. And it seems hokey to go out to JS and patch up all the tags. We
should just have the style system do the right thing.

Received on Friday, 19 April 2013 04:23:02 UTC

This archive was generated by hypermail 2.3.1 : Tuesday, 6 January 2015 21:04:35 UTC