Re: Media Queries and optimizing what data gets transferred

Henri, thanks for starting this thread.

Since I'm the one proposing Client-Hints, let me try to address some of the
misconceptions:

(1) It is *not* an either/or question. To the extent possible, both a
client-side and a server-side solution should be available for content
adaptation. Posing Client-Hints as "breaking" or impeding client-side
adaptation misses the point entirely.

(2) Some things are far better handled by the server than by the client.
Resizing images to optimal width and height, in a world of ever-exploding
form factors, is easily automated at the server level, whereas doing it on
the client leads to massive bloat. I don't mean to pick on picturefill,
but just for the sake of an example:
https://github.com/scottjehl/picturefill#hd-media-queries

The above covers only 3 breakpoints plus HiDPI. Do you really expect
everyone to start filling their pages with that (or similar) markup for
*every* image asset on the page? What if we also add a connection-type
constraint into the equation? I think you see where I'm heading with this.
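
To put some illustrative (made-up, but conservative) numbers on it:

    3 breakpoints x 2 resolutions (1x, 2x)    =   6 variants per image
    x 2 connection types (wifi / cellular)    =  12 variants per image
    x 20 images on a typical page             = 240 hand-written variants

All of that markup has to be authored, shipped, and maintained by hand on
the client, versus a single resize rule on the server.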

A case in point: the mobile proxies deployed by some carriers around the
world resize images on the fly, and in the process achieve *huge*
bandwidth savings. Also, note that resizing images does not have to come
at a cost to quality -- especially if you're the one controlling how they
get scaled and at what quality. </tangent>

Having said that, images are just one example. Client-Hints is a generic,
cache-friendly transport for client-server negotiation. In the absence of
such a mechanism, most people rely on cookies today, which is a disaster:
not cache-friendly, doesn't work cross-origin, and forces requests down to
the origin servers. Alternatively, if not cookies, then you have to buy a
commercial device database (DeviceAtlas, WURFL, etc).
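
As a concrete sketch (the exact header name and hint syntax are draft
material and may well change), a Client-Hints request could look like:

    GET /images/hero.jpg HTTP/1.1
    Host: example.com
    CH: dpr=2.0, dw=768

No cookies, no device-database lookup: the relevant client data rides on
the request itself, in a form any intermediary can understand.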

In fact, I think your conclusions are almost entirely incorrect:


> Reasons why I think introducing an HTTP-based solution would be worse
> than adjusting CSS include:
>  * HTTP caches don't know what the negotiation logic is, so they need
> to check with the origin server for each Client-Hints header value
> that they don't already have a cache key for.
>

Not true. The whole point of Client-Hints is to enable caches to perform
"Vary: Client-Hints". What you've described is how the process works
today... the requests are forced down to origin because we don't have a
clean cache key to vary on.
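
To illustrate (again with draft syntax), the cache can key on the hint
value exactly as it keys on Accept-Encoding today:

    GET /images/hero.jpg HTTP/1.1
    CH: dpr=2.0, dw=768

    HTTP/1.1 200 OK
    Vary: CH
    Cache-Control: public, max-age=86400

A repeat request with the same CH value is a cache hit; a request with a
different value fetches and caches its own variant. Either way, once a
variant is cached, no trip to the origin is required.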

>  * If the origin server doesn't get ETags right, intermediate caches
> end up having a distinct copy of the data for each distinct
> Client-Hints header value even if there is a smaller number of
> different data alternatives on the origin server.
>

ETags have *nothing* to do with this; to begin with, ETag is not a
mechanism for varying responses -- that's what Vary is for. An ETag only
lets a cache revalidate a stored response, it doesn't select between
alternatives.


>  * Pushing the resource selection logic in the browser and presenting
> the browser with different URLs to choose from allows HTTP
> intermediaries to operate simply with URL cache keys. Also, no special
> logic is needed on the origin server.
>

Correct. See my earlier comment about supporting both client- and
server-driven negotiation. They are not mutually exclusive.


>  * Sending any HTTP header incurs extra traffic for all the sites that
> don't pay attention to Client-Hints. That would be the whole Web at
> least at first. That is, an HTTP-based solution involves a negative
> externality for non-participating sites.
>

This is easily addressed by making it an opt-in mechanism for HTTP/1.1: a
mechanism equivalent to "Alternate-Protocol" can be added, such that
Client-Hints is only sent to sites that ask for this data. For HTTP/2.0,
with header deltas, the overhead is a single header line per connection.
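
A rough sketch of how the opt-in could work (the "Accept-CH" name below is
purely hypothetical, for illustration):

    HTTP/1.1 200 OK
    Accept-CH: dpr, dw

    ... and on subsequent requests to that origin:

    GET /images/hero.jpg HTTP/1.1
    CH: dpr=2.0, dw=768

Sites that never advertise the opt-in never receive the extra header.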

Further, the "cost" of the upstream bytes, which is in the dozens of bytes
per request, is easily offset by saving hundreds of kilobytes in the
downstream (in the case of images). The order-of-magnitude difference is
well worth it.
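
With illustrative (made-up) numbers:

    50 requests x ~30 bytes of hints    = ~1.5 KB upstream (whole page)
    1 oversized image, resized to fit   = often 100+ KB saved downstream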


>  * If we later find out that Client-Hints hasn't become popular enough
> to justify the extra traffic, we will be unable to unbloat the HTTP
> requests, because there's always *some* site that would break in the
> face of such unbloating.
>

See my comment about opt-in.


>  * It's bad to have to add server-side logic when CSS almost has the
> feature authors want but not exactly.
>

Not true. From first-hand experience with PageSpeed, we know that despite
continuous reminders, web authors are simply not optimizing their images
correctly: wrong formats, wrong sizes, etc. Automation solves this problem.
While not 100% related to this discussion, see my post here:
http://www.igvita.com/2012/12/18/deploying-new-image-formats-on-the-web/


>  * It's conceptually bad in terms of the learnability of the Web
> Platform that a slight adjustment to the desired behavior would
> involve changing the solution the author needs to apply to a
> completely different technology in the stack (from MQ to HTTP
> negotiation).
>

Once again, they're not mutually exclusive. If you don't have a server
that can support image optimization, you should be able to hand-tune your
markup. I'm all for that.

ig

Received on Monday, 28 January 2013 09:37:10 UTC