Re: Media Queries and optimizing what data gets transferred

From: Henri Sivonen <hsivonen@iki.fi>
Date: Wed, 30 Jan 2013 09:40:01 +0200
Message-ID: <CAJQvAufUx6pfvkQ8u-ud+e7Xp3tBt5zFg=vhT+EH1w8RjjkKDQ@mail.gmail.com>
To: Ilya Grigorik <ilya@igvita.com>
Cc: www-style@w3.org
On Wed, Jan 30, 2013 at 12:55 AM, Ilya Grigorik <ilya@igvita.com> wrote:
> (1) I'm all for markup based solutions.

It doesn't really look like it.

> (2) Your "nolive" proposal doesn't actually address what I'm after, as
> several others have already pointed out.

Sorry for being dense, but could you please point me to a specific
message? My reading of what Boris and Tab said was that providing a
mechanism for opting out of synchronous CSSOM access and making
browsers defer the fetching of OM-opted-out inapplicable style sheets
until applicable would address your problem as far as style sheets go.

> (4) Even if we have (1), and we don't, this is not an argument against HTTP
> negotiation. There is a time and place for both.

If we had #1, having another way to accomplish similar goals would be
redundant, so it wouldn't be at all clear that there'd be a time and
place for HTTP negotiation.

An argument of the form "we don't have solution A, so I'm proposing
solution B, which is neither intrinsically better than A nor requires
the buy-in of fewer entities to succeed" is a basic fallacy in
proposing new Web features.

> (5) Cache "fragmentation", whether based on unique URLs or on Vary is the
> same - stuffing parameters into URLs doesn't magically increase cache
> hit-rates.

Consider the simple case where the origin server has two alternative
byte representations for a thing. In the case where both
representations have distinct URLs and all browsers end up fetching
either one, an intermediate cache gets a 100% hit rate without having
to check back with the origin server as soon as there has been one
browser to fetch one of the representations and another browser to
fetch the other representation. If the two byte representations have
the same URL and Vary: Client-Hints, the intermediate cache has to
consult with the origin server every time it sees a value of
Client-Hints it has not seen before. Thus, the need to consult the
origin server scales with the number of unique Client-Hints values
rather than with the number of distinct byte representations on the
origin server.

Is this analysis wrong? If it's wrong, why is it wrong? As far as I
can tell, single URL + Vary is always worse than URL-per-byte-sequence
if the origin server has a limited handful of distinct byte sequences
to serve.
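To make the counting concrete, here is a small sketch (not from the original message; all names and numbers are illustrative assumptions) that tallies origin-server consultations for an intermediate cache under the two schemes: two distinct URLs versus one URL with "Vary: Client-Hints".

```python
# Count how often an intermediate cache must consult the origin server
# under the two schemes described above. Illustrative sketch only.
import random

def origin_consultations(cache_keys):
    """The cache must consult the origin once per cache key it has
    never seen before; afterwards it can serve from cache."""
    seen = set()
    consultations = 0
    for key in cache_keys:
        if key not in seen:
            consultations += 1
            seen.add(key)
    return consultations

random.seed(0)
# 1000 requests from clients with 50 distinct (hypothetical) hint values,
# but the origin has only two byte representations to serve.
hint_values = [random.randrange(50) for _ in range(1000)]

# Scheme A: each representation has its own URL; the URL is the cache key.
urls = ["/img-lo.jpg" if v < 25 else "/img-hi.jpg" for v in hint_values]
# Scheme B: one URL + Vary: Client-Hints; the hint value joins the key.
hints = [f"/img.jpg|dpr-bucket-{v}" for v in hint_values]

print(origin_consultations(urls))   # bounded by the 2 representations
print(origin_consultations(hints))  # grows with unique hint values (up to 50)
```

Under scheme A the consultations are bounded by the number of representations; under scheme B they are bounded only by the number of distinct hint values in the wild.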

> (6) We can't rely on cookies.

Is this because login cookies are unique and put each client in a
unique Vary box or is this something else? The other reasons you have
given against using cookies are reasons that also apply to
Client-Hints. The simplest proof is that you could put the string you
propose to be put in Client-Hints into Cookie instead and do Vary:
Cookie.

> (7) You shouldn't have to buy a commercial database to perform content
> adaptation.

You don't need to with Media Queries.

> (8) See (1), then (3).

There was no (3).

>> In your proposal, server negotiation involves putting data in a
>> request header. How would the browser's HTTP stack have more
>> information than its preload scanners, etc.?
> HTTP requests are scheduled by the preloader.

That does not at all explain why the HTTP stack would have better
information about screen dimensions and density than the pre-load
scanner.

>> I think the argument that's bogus is saying that Vary makes stuff
>> cache-friendly if what it ends up doing is making cache entries
>> practically never valid without checking with the origin server.
> Lookup the difference between maxage and revalidation. I think therein lies
> our disconnect. Vary does not force revalidation on every request.

Can you explain why Vary would not cause a revalidation when the
intermediate cache sees a value of Client-Hints that it has not seen
before? Surely without revalidation, the generic intermediate cache
has no way of knowing whether a value of Client-Hints that it has not
seen before should result in serving a byte sequence that's already in
the cache.
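The matching rule a conforming shared cache applies to Vary (per RFC 7234) can be sketched as follows; the header values here are made up for illustration:

```python
# Sketch of the HTTP cache matching rule for Vary (per RFC 7234,
# section 4.1). The hint values are hypothetical, not from any real cache.

def can_reuse(cached_vary_fields, cached_request, new_request):
    """A stored response is reusable only if every header named in Vary
    has the same value in the new request as in the stored one."""
    return all(
        cached_request.get(f) == new_request.get(f)
        for f in cached_vary_fields
    )

cached = {"Client-Hints": "dw=320, dpr=2"}  # request that filled the cache
# Same hint value: the cache can answer without touching the origin.
assert can_reuse(["Client-Hints"], cached, {"Client-Hints": "dw=320, dpr=2"})
# An unseen hint value fails the match, so the cache must contact the
# origin even if the stored bytes are what this client should receive.
assert not can_reuse(["Client-Hints"], cached, {"Client-Hints": "dw=360, dpr=2"})
```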

>> Well, obviously, since the client-side functionality is not there at
>> present. Your solution is not there currently, either. That sites are
>> using the option(s) currently available to them is no proof that your
>> currently non-deployed solution that is similar to the currently
>> available solution is better than a different, presently non-deployed
>> solution.
> I have already run the proposal by half a dozen CDN vendors - they're all
> interested in leveraging it, assuming the browser is able to provide the
> hint.

It isn't particularly surprising that CDN vendors are okay with an
HTTP-level feature that allows them to sell a new feature at a premium
price. However, assessing which solution is better is not just a matter
of consulting with CDN vendors. There are other stakeholders, too,
including authors and end-users (who, in practice, are represented by
implied proxy by browser developers).

>> > For opt-in, a mechanism similar to Alternate-Protocol can be provided:
>> >
>> > http://www.chromium.org/spdy/spdy-protocol/spdy-protocol-draft2#TOC-Server-Advertisement-of-SPDY-through-the-HTTP-Alternate-Protocol-header
>> This requires an HTTP round-trip to the server, so this kind of opt in
>> does not solve the problem of varying the top-level HTML in the
>> low-latency way upon first contact. I thought addressing that problem
>> was in scope for Client-Hints and one of the main motivators of
>> choosing a solution that puts stuff in the HTTP request.
> Alternate-Protocol is a sticky hint. You default to off on a site you've
> never seen before. Once the hint is provided, the browser remembers it and
> provides the header on all future requests.

By "first contact" I meant really the *first* contact when the browser
has not contacted the site before as far as the browser remembers.
(Again, except for the complications of login cookies, cookies can be
exactly as sticky.)

>> Even if it doesn't overflow the congestion window, do you have an
>> explanation for how it wouldn't matter towards data metering on metered
>> mobile connections?
> Let's do the math: 80 requests on an average page, half of them images.
> Let's say CH adds 50 bytes per request. That's ~2KB in upstream. The same
> page (1200kb avg -- HTTP Archive) has 60% of bytes in images. So, to offset
> those 2kb, I would need <1% improvement in saved bytes in downstream. In
> practice, mod_pagespeed offers ~30% today. If you're really concerned
> about metered connections, then this is a slam dunk.

Assuming that the user's browsing consists of enough mod_pagespeed
sites to offset the loss on other sites (if there is no opt-in).
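For what it's worth, the quoted arithmetic checks out on its own figures (80 requests per page, half images, ~50 bytes of hint per request, 1200 KB average page with 60% of bytes in images); the ~2 KB upstream figure works out if the hint rides only on the ~40 image requests:

```python
# Re-deriving the break-even point from the figures quoted above.
image_requests = 80 // 2                  # "half of them images"
upstream_bytes = image_requests * 50      # 2000 bytes, the quoted ~2KB
image_bytes = 0.60 * 1200 * 1024          # ~737 KB of image bytes per page

# Downstream image savings needed just to offset the upstream overhead:
break_even = upstream_bytes / image_bytes
print(f"{break_even:.2%}")                # ~0.27%, i.e. well under 1%
```

The catch, as noted, is that the downstream savings only materialize on sites that actually adapt, while the upstream bytes are spent on every request.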

>> "You don't need to use it" does not refute the learnability argument.
>> If there are more solutions to choose from, you need to learn about
>> them in order to make the choice what not to use.
> That's not an argument, it's your opinion. If you don't want to leverage
> HTTP negotiation, don't.

The effect of the proliferation of choice on learnability that I
stated is more than just my opinion. How you weight that effect
against other factors may be a matter of opinion, though.

Henri Sivonen
Received on Wednesday, 30 January 2013 07:40:29 GMT
