- From: Ilya Grigorik <igrigorik@gmail.com>
- Date: Mon, 24 Aug 2015 15:40:42 -0700
- To: Amos Jeffries <squid3@treenet.co.nz>
- Cc: "ietf-http-wg@w3.org Group" <ietf-http-wg@w3.org>
- Message-ID: <CAKRe7JEf5zwYyg+KRwd-XaZDW_HFo9uGz0-X6S1zF51O9L=59Q@mail.gmail.com>
On Sun, Aug 23, 2015 at 11:13 PM, Mark Nottingham <mnot@mnot.net> wrote:
> We discussed this document in Dallas, and also a bit in Prague:
> <http://tools.ietf.org/html/draft-grigorik-http-client-hints-02>

That draft is a bit out of date, please see:
http://tools.ietf.org/html/draft-grigorik-http-client-hints-03

On Mon, Aug 24, 2015 at 8:05 AM, Amos Jeffries <squid3@treenet.co.nz> wrote:
> I've always been a little mystified why these were even being asked for.
> The tools available for on-device decisions about display are pretty
> good, an coding frameworks make development for those easy.

For good or worse, many apps use signals like DPR and viewport size as routing decisions to deliver alternate experiences. Others use them as a latency optimization to avoid shipping unnecessary bytes (your eliding use case), or to avoid incurring an extra RTT to determine which resources are needed.

In the absence of CH we have commercial device databases based on UA sniffing, developers stuffing this data into cookies, or delayed fetches due to the extra RTTs required to first fetch the necessary conditional logic. All of which are painful and expensive, both for server and app developers. Granted, CH doesn't magically solve all of that, but it does establish a common language and mechanism to remove the biggest pain points.

> The URL-path with values in such places like filenames are just the
> optimal form in terms of reduced latency from faster cache aggregation
> and lookup. Actively increasing network latency by using negotiated
> features seems a daft approach when its unnecessary.

They may be optimal for server and proxy developers, but they're not always optimal for app developers. For example, there is a lot of content where you can't update the URLs easily or at all, and yet many organizations with such content still want to deliver optimized resources. Similarly, even when you are able to update the URLs, how many variants are you willing to list in your markup?
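To make the automation concrete, here is a minimal sketch of what a server might do with the DPR request header defined in the draft. This is purely illustrative: the function names, the asset-naming scheme (`photo@2x.jpg`), and the set of available densities are my own assumptions, not anything the draft mandates.

```python
# Hypothetical server-side variant selection based on the DPR request
# header from the Client Hints draft. Asset naming and the density list
# are made-up conventions for illustration only.

AVAILABLE_DENSITIES = [1.0, 2.0, 3.0]  # variants we actually have on disk


def select_density(headers):
    """Pick the smallest available density at or above the client's DPR."""
    try:
        dpr = float(headers.get("DPR", "1.0"))
    except ValueError:
        dpr = 1.0  # malformed hint: fall back to 1x
    for density in AVAILABLE_DENSITIES:
        if density >= dpr:
            return density
    return AVAILABLE_DENSITIES[-1]  # client exceeds our best variant


def variant_url(name, headers):
    """Map e.g. 'photo.jpg' to 'photo@2x.jpg' based on the DPR hint."""
    density = select_density(headers)
    stem, dot, ext = name.rpartition(".")
    return f"{stem}@{density:g}x{dot}{ext}"
```

A real deployment would also need to emit `Vary: DPR` (and, per the draft, `Content-DPR` when the served density differs from the requested one) so that intermediaries cache the variants correctly -- which is exactly the cache-aggregation concern raised above.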
For DPR coverage of image assets alone you need at least three resolutions (1x/2x/3x), but there is also a wide array of devices in the middle that you'll be serving wasted bytes to. Also factor in viewport width, and more dynamic properties like user preferences for "reduced data usage", etc., and now you're looking at a rather complicated proposition just to serve a single image asset... Then <your favorite vendor> comes out with a new shiny resolution and you have to go back and add that case to all of your pages.

Granted, the above work still has to happen somewhere... So, either we punt this to the developer and mandate that they list every plausible variant and keep it up to date (which is how we end up where we are today, with loads of poorly optimized images -- e.g. ~50% compression savings from various browser proxies, 90% of which is from image resizing+scaling), or we allow the UA + server to automate some or all of it. CH is about enabling the latter.

ig
Received on Monday, 24 August 2015 22:41:50 UTC