
RE: Media Queries and optimizing what data gets transferred

From: Fred Andrews <fredandw@live.com>
Date: Mon, 28 Jan 2013 12:21:44 +0000
Message-ID: <BLU002-W8307E50C8C6260C98B2BA9AA180@phx.gbl>
To: Ilya Grigorik <ilya@igvita.com>
CC: "www-style@w3.org" <www-style@w3.org>

> From: ilya@igvita.com
> Date: Sat, 26 Jan 2013 15:38:03 -0800
> Since I'm the one proposing Client-Hints, let me try to address some of the misconceptions:
> (1) It is *not* a question either/or. To the extent possible, both a client-side and a
> server-side solution should be available for content adaptation. Posing Client-Hints
> as "breaking" or impeding client-side adaptation misses the point entirely.

There are good reasons why a client would not want the server to adapt
content to the device's characteristics, so adaptation needs to be
supported on the client side anyway; given that, there is little point in
having the server do the adaptation as well.

> (2) Some things are far better handled by the server, not by the client.
> Things like resizing images to optimal width and height, in the world of
> ever exploding form factors is easily automated at the server level,
> and leads to massive bloat on the client. I don't mean to pick on
> picturefill, but just for the sake of an example:
> https://github.com/scottjehl/picturefill#hd-media-queries

> The above only covers 3 breakpoints and HiDPI. Do you really expect
> everyone will start filling their pages with that or similar markup for
> *every*  image asset on the page? What if we also add a connection
> type constraint into the equation? I think you see where I'm heading
> with this.

This illustrates the problem with the 'media queries' approach.  It just
does not scale.

Implementing adaptation on the server side suffers from the same scaling
problem: any change in device or page characteristics requires a check with
the server to confirm whether adaptation is needed, and possibly the
download of new resources.  Each added parameter multiplies the number of
queries that need to be made.
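To make the multiplicative growth concrete, here is a minimal sketch; the parameter names and values are illustrative, not taken from any real page or deployment:

```python
from itertools import product

# Hypothetical adaptation parameters (illustrative values only).
widths = [320, 640, 1024]          # three layout breakpoints
densities = [1.0, 2.0]             # standard and HiDPI
connections = ["slow", "fast"]     # connection-type constraint

# Every combination is a distinct variant the server (or the markup)
# must be able to answer for -- the counts multiply, they don't add.
variants = list(product(widths, densities, connections))
print(len(variants))  # 3 * 2 * 2 = 12 variants per image asset
```

Adding one more two-valued parameter doubles the count again, which is the scaling problem in a nutshell.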

The solution is adaptation on the client side.  This is why fluid layouts
and SVG were developed.

The difficult problem here is image and video sizes.  The srcset proposal
is a 'media queries' based approach and inherits all the same problems.
This has all been explained on the whatwg list, and alternatives have been
suggested, but what can you do?  We'll just have to implement the
alternatives and demonstrate their advantages.

Having the server adapt the images is certainly not the solution, because
the server does not know what the client needs, and the client's selection
algorithm is a matter of UA choice, not something a server should
prescribe.  Your proposal exposes neither the resources the server has
available nor how the client can request them.

> A case in point is mobile proxies deployed by some of the carriers
> around the world: they resize the images on the fly, and in the
> process achieve *huge* bandwidth savings. Also, note that resizing
> images does not have to come at cost to quality -- especially if
> you're the one controlling how they get scaled and at which
> quality. </tangent>

This does not support server-side adaptation; quite the contrary,
because these proxies are part of the UA, not the server.

> Having said that, images is just one example. Client-Hints is a generic,
> cache-friendly transport for client-server negotiation. Due to lack of
> such  mechanism, most people rely on cookies today, which is
> a disaster: not cache friendly, doesn't work cross-origin, forces
> requests to origin servers. Alternatively, if not cookies, then you
> have to buy a commercial device database (DeviceAtlas, WURFL, etc.).

Well, they should be using fluid design, SVG, etc.  The User-Agent
header should be deprecated.  Passive or active probing of the
UA is a security leak that needs to be closed.  The above approaches
are a dead end.

> In fact, I think your conclusions are almost entirely incorrect:

Reasons why I think introducing an HTTP-based solution would be worse
than adjusting CSS include:

 * HTTP caches don't know what the negotiation logic is, so they need
to check with the origin server for each Client-Hints header value
for which they don't already have a cache entry.

> Not true. The whole point of Client-Hints is to enable caches to perform
> "Vary: Client-Hints". What you've described is how the process works
> today... the requests are forced down to origin because we don't have a
> clean cache key to vary on.

I disagree.  'Vary: Client-Hints' just lets the cache know to check this
header; it does not expose the algorithm the server uses to determine
the response.
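A toy model of that point (the hint syntax and cache behaviour here are simplified assumptions for illustration, not the actual Client-Hints wire format): a shared cache that varies on the raw header value must still go to the origin for every value it has not seen, because the server's mapping from hint values to variants is private to the server.

```python
# Minimal model of a shared cache honouring "Vary: Client-Hints".
origin_fetches = 0

def origin(url, hints):
    """Stand-in for the origin server; its mapping from hint values
    to response variants is known only to the server."""
    global origin_fetches
    origin_fetches += 1
    return f"variant-for-{hints}"

cache = {}  # keyed by (url, raw Client-Hints header value)

def cached_get(url, hints):
    # The cache can only key on the raw header value.  It cannot tell
    # that e.g. dpr=1.9 and dpr=2.0 would map to the same variant, so
    # every unseen value is forced down to the origin.
    key = (url, hints)
    if key not in cache:
        cache[key] = origin(url, hints)
    return cache[key]

cached_get("/hero.jpg", "dpr=1.0")
cached_get("/hero.jpg", "dpr=2.0")
cached_get("/hero.jpg", "dpr=2.0")   # served from cache
cached_get("/hero.jpg", "dpr=1.9")   # unseen value -> origin again
print(origin_fetches)  # 3
```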


> Further, the "cost" of upstream bytes, which is in the dozens of bytes,
> is easily offset by saving hundreds of kilobytes in the downstream
> (in case of images). The order-of-magnitude difference is well worth it.

It's not always a clear win.  For example, a change of zoom could require a
new set of images to be downloaded, and such zooming is quite common
on small screen devices.

 * It's bad to have to add server-side logic when CSS almost has the
feature authors want, but not exactly.

> Not true. From first hand experience with PageSpeed, we know that
> despite continuous reminders web authors are simply not optimizing
> their images correctly: wrong formats, wrong sizes, etc. Automation
> solves this problem. While not 100% related to this discussion,
> see my post here: http://www.igvita.com/2012/12/18/deploying-new-image-formats-on-the-web/

How convenient for your PageSpeed business if the UA outsources its
adaptation to the cloud; more so if the adaptation algorithm is a
server-side secret and the client is forced to share state that it could
otherwise keep to itself.

Received on Monday, 28 January 2013 12:22:16 UTC
