
Re: [whatwg] Features for responsive Web design

From: Matthew Wilcox <mail@matthewwilcox.com>
Date: Tue, 15 May 2012 10:38:26 +0100
Message-ID: <CAMCRKiKYjeOsdi=wnkDjZvmi1HXKLg09z9eAbKrbLwOJKzPpfw@mail.gmail.com>
To: Ian Hickson <ian@hixie.ch>
Cc: whatwg@whatwg.org
Please, have you taken a look at the latest idea?


It solves many issues:

1) works with pre-fetch
2) is not verbose
3) is backward compatible with current browsers
4) is aimed at future-proofing - a re-design later with new or
different breakpoints would result in no edits to the mark-up
5) avoids repetition and excessive processing of multiple media tests

I'd greatly appreciate more feedback on this approach at the Community Group.

Kind regards,
Matt Wilcox

On 15 May 2012 08:28, Ian Hickson <ian@hixie.ch> wrote:
> On Wed, 25 Jan 2012, Matthew Wilcox wrote:
>> On 24 January 2012 23:26, Ian Hickson <ian@hixie.ch> wrote:
>> > On Wed, 24 Aug 2011, Anselm Hannemann - Novolo Designagentur wrote:
>> > >
>> > > As we now have the possibility of creating fluid and responsive
>> > > layouts in several ways we have a problem with images.
>> > >
>> > > There's currently no good feature to implement something like
>> > > responsive images which adapt to the different device-resolutions.
>> > > We only can implement one image with one resolution scaling-up and
>> > > down.
>> >
>> > You can do adaptive sites using media queries.
>> >
>> >   <!-- HTML -->
>> >   <h1>My Site</h1>
>> >
>> >   // CSS
>> >   @media (min-width: 320px) and (max-width: 640px) {
>> >     h1::before { content: url(http://cdn.url.com/img/myimage_xs.jpg) }
>> >   }
>> >   @media (min-width: 640px) and (max-width: 1024px) {
>> >     h1::before { content: url(http://cdn.url.com/img/myimage_m.jpg) }
>> >   }
>> >   @media (min-width: 1024px) {
>> >     h1::before { content: url(http://cdn.url.com/img/myimage_xl.jpg) }
>> >   }
>> This is of no use to content images - which are the real problem. CSS
>> supplied images are not an issue.
> Fair enough.
> Looking at the feedback on these threads over the past few months (I
> didn't quote it all here, but thank you to everyone for making very good
> points, both here on the list and on numerous blog posts and documents on
> the Web, referenced from these threads), it seems there are three main
> axes that control what image one might want to use on a page, assuming
> that one is rendering to a graphical display:
>  - the size of the viewport into which the image is being rendered
>  - the pixel density of the display device
>  - the network bandwidth available to the page to obtain resources
> Now I'm not sure what to do about the bandwidth one. It's very hard for a
> user agent to estimate its bandwidth availability -- it depends on the
> server, and the network location of the server, almost as much as on the
> location of the client; it depends on the current network congestion, it
> depends on the other things the browser is doing; it depends on whether
> the user is about to go through a tunnel or is about to switch to wifi; it
> depends on whether the user is roaming out of network or is on an OC48
> network pipe. It's hugely variable. It's not clear to me how to
> characterise it, either. It's also something that's changing very rapidly.
> On the more modern mobile networks, the real problem seems to be latency,
> not bandwidth; once you've actually kicked off a download, you can get the
> data very fast, it just takes forever to kick it off. That kind of problem
> is better solved by something like SPDY than by downloading smaller
> images. Downloading smaller images also screws up zooming images, which
> happens a lot more on mobile than on desktop.
> A number of people proposed solutions that are variants on the
> <video>/<source> mechanism, where you have an abundance of elements to
> replace the lonely <img>. Looking at the examples of this, though, I could
> not get over how verbose the result is. If we're expecting this to be
> common -- which I think we are -- then we really can't be asking
> authors to provide dense collections of media queries,
> switch-statement-like lists of URLs, and so forth, with each image.
> Nor can we ask authors to provide a default and then have an external CSS
> file give alternatives. The syntax is different (and in some proposals
> actually already possible today), but the fundamental problem still
> exists: it's way too much work for just inserting an image in a page.
> Another proposal that various people advocated is a header that the
> servers can use to determine what content to use. Besides a wide number of
> problems that people pointed out with this on the thread (such as the
> privacy issues due to fingerprinting, the many ways that such information
> gets abused, the high aggregate bandwidth cost, the difficulties with
> using headers in offline scenarios, etc), the biggest problem with this
> idea, IMHO, is that authors have shown that HTTP headers are simply a
> non-starter. Content-Type headers are perennially wrong, Accept headers are
> mishandled all over the place, adding Origin headers has caused
> compatibility issues... HTTP headers are a disaster. If there's ever an
> option to solve a problem without HTTP headers, we should take it, IMHO.
> On Tue, 7 Feb 2012, Kornel Lesiński wrote:
>> You could just say "I've got these image sizes available, choose which
>> one suits you best", and browser would pick one that makes most sense.
>> You (and every other web developer) wouldn't have to write and maintain
>> code for computation of bandwidth/latency/battery/screen
>> size/density/zoom/cpu speed/memory tradeoffs. With so many variables I'm
>> afraid that average developer wouldn't make better choices than mobile
>> browsers themselves can.
> Indeed.
> One thing I noticed when looking at the proposals is that they all
> conveyed a lot of information:
>  - a list of files, each with a particular dimension in CSS pixels and
>   each intended for a particular display pixel density, bandwidth
>   level, and layout dimensions
>  - a list of conditions consisting of layout dimensions, pixel densities,
>   bandwidth characterisations
>  - glue to keep it all together and to tie in non-graphical fallback
> A lot of this is redundant, though. When would an author ever say "hey,
> use my 1024x768 image" when the display is 640x480? When would an author
> say "use this 4 device-pixel-per-CSS-pixel image" when the display is a
> regular low-density display?
> In fact, as Kornel suggests, would it be possible to just enumerate the
> files and their characteristics, and let the browser figure out which one
> to use?
> Of course, if we do that, we still end up having to list a lot of
> redundant information, namely which URL maps to which set of
> characteristics. That's sad, because it's almost always going to be the
> case that the author will put the characteristics in the filename, since
> otherwise it becomes a maintenance nightmare.
> So why not just give the UA the characteristics and a template to use to
> build the file names itself? That way we still give the UA all the same
> information, but it is much less verbose and still solves all the same use
> cases. Thus:
>   <img src="face-600-200@1.jpeg" alt=""
>        src-template="face-%w-%h@%r.jpeg"
>        src-versions="600x200x1 600x200x2 200x200x1">
> (The first src="" would be optional; its purpose is legacy fallback.)
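[Editorial note: as an illustration only, not spec text, the expansion a UA would perform for this hypothetical src-template=""/src-versions="" pair can be sketched in Python; the function name and the (url, width, height, density) tuple layout are my own.]

```python
def expand_src_template(template, versions):
    """Expand a src-template against a space-separated src-versions list.

    Each version is "WxHxR": width x height x device-pixel ratio,
    e.g. "600x200x2". Returns (url, width, height, density) tuples.
    """
    candidates = []
    for version in versions.split():
        w, h, r = version.split("x")
        url = (template.replace("%w", w)
                       .replace("%h", h)
                       .replace("%r", r))
        candidates.append((url, int(w), int(h), int(r)))
    return candidates

expand_src_template("face-%w-%h@%r.jpeg", "600x200x1 600x200x2 200x200x1")
# → [("face-600-200@1.jpeg", 600, 200, 1),
#    ("face-600-200@2.jpeg", 600, 200, 2),
#    ("face-200-200@1.jpeg", 200, 200, 1)]
```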
> The algorithm for picking an image could be to sort the images by width,
> and remove all those that are wider than the available width (except for
> the narrowest one if they're all too wide), then sort them by height and
> remove all those that are taller than the available height (except the
> shortest one if they are all too tall), then sort them by pixel density
> and remove all those that are for densities greater than the current one
> (except the lowest one if they are all greater), then remove all those
> that are for densities less than the current one (except the highest one
> if they are all lower), then of the remaining images pick the widest one,
> tie-breaking by picking the tallest one (that should leave just one
> possible file name).
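[Editorial note: the selection steps described in the preceding paragraph can be sketched roughly in Python; the helper names and the (url, width, height, density) candidate layout are my own, and this is a paraphrase of the e-mail's prose, not normative text.]

```python
def pick_image(candidates, avail_w, avail_h, density):
    """candidates: list of (url, width, height, density) tuples."""
    def drop_over(items, key, limit):
        # Remove entries whose `key` exceeds `limit`, keeping the
        # smallest one if every entry exceeds it.
        fitting = [c for c in items if key(c) <= limit]
        return fitting or [min(items, key=key)]

    def drop_under(items, key, limit):
        # The mirror image: remove entries below `limit`, keeping the
        # largest one if every entry falls below it.
        fitting = [c for c in items if key(c) >= limit]
        return fitting or [max(items, key=key)]

    c = drop_over(candidates, lambda x: x[1], avail_w)  # too wide
    c = drop_over(c, lambda x: x[2], avail_h)           # too tall
    c = drop_over(c, lambda x: x[3], density)           # density too high
    c = drop_under(c, lambda x: x[3], density)          # density too low
    # Of the remainder: widest wins, tie-breaking on height.
    return max(c, key=lambda x: (x[1], x[2]))
```

With the src-versions list in the example above, a 640x480 viewport at 1x would select face-600-200@1.jpeg, while the same viewport on a 2x display would select face-600-200@2.jpeg.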
> This doesn't handle bandwidth concerns; as mentioned above, I'm not sure
> how to do that even at a theoretical level.
> As people on #whatwg pointed out when I floated it there, the problems
> with this idea are two-fold: first, by helping authors who use consistent
> naming schemes it forces all authors into using schemes that have the
> height, width, and/or resolution baked in literally, and second, it forces
> authors who only care about one or two of the axes to list all three.
> For example, some people only want to decide whether to use "low res" and
> "retina" versions of their images, with filenames like "foo.jpeg" and
> "foo-HD.jpeg". They don't want to specify dimensions at all.
> Over in CSS land, there have been proposals for similar features, e.g.
> image-set:
>   http://lists.w3.org/Archives/Public/www-style/2012Feb/1103.html
> In fact, hober proposed a variant of that on this list recently too.
> If we use a syntax a bit like that one, we could increase the verbosity a
> little, while handling a few more cases:
>   <img src="face-600-200@1.jpeg" alt=""
>        srcset="face-600-200@1.jpeg 600w 200h 1x,
>                face-600-200@2.jpeg 600w 200h 2x,
>                face-icon.png       200w 200h">
> The verbosity is still not completely insane (though now we are back to
> duplicating information a bit), but we have more flexibility in the
> filename scheme and less restriction on the list of values that have to be
> specified.
> The algorithm here could be to sort the images by width, and remove all
> those that are wider than the available width (except for the narrowest
> one if they're all too wide) or that don't have a width unless none have
> widths, then sort them by height and remove all those that are taller than
> the available height (except the shortest one if they are all too tall) or
> that don't have a height unless none have heights, then sort them by pixel
> density and remove all those that are for densities greater than the
> current one (except the lowest one if they are all greater), then remove
> all those that are for densities less than the current one (except the
> highest one if they are all lower), assuming that any without a specified
> density are implicitly 1x, then of the remaining images pick the widest
> one, if any have a width, tie-breaking by picking the tallest one, if any
> have a height, finally tie-breaking by picking the first one, if none have
> any dimensions.
> If a user agent has picked a resource with a pixel density other than 1x,
> it would scale its intrinsic dimensions by the reciprocal of the pixel
> density (i.e. if the pixel density is given as 1.5x, then a 150 pixel
> image would be rendered at 100px (CSS pixels), but with the full 150
> pixels used for rendering if the display has a higher resolution than the
> CSS pixel 96dpi-equivalent).
> Authoring-conformance-wise, that means that if any specify a width, they
> all must; if any specify a height, they all must; and omitting the pixel
> density is fine but is treated as 1x. At least one of the three must be
> specified, since otherwise a comma after the value could be mistaken
> for part of the URL. No two entries can have the same descriptors.
> For convenience we could say that there is an implicit entry with no
> height and width and with resolution 1x that is the value of the src=""
> attribute, so then to have a 1x/2x alternative we'd just write:
>   <img src="logo.png" alt="SampleCorp" srcset="logo-HD.png 2x">
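[Editorial note: parsing of this microsyntax, including the implicit 1x entry contributed by src="", can be sketched as follows. This is my own illustrative reading of the proposal as described in this e-mail; the grammar that ended up in the spec may differ in its details.]

```python
def parse_srcset(src, srcset):
    """Return (url, width, height, density) candidates; None means the
    descriptor was omitted. src="" contributes an implicit 1x entry."""
    candidates = [(src, None, None, 1.0)]
    for entry in srcset.split(","):
        parts = entry.split()
        url, w, h, x = parts[0], None, None, 1.0  # missing density => 1x
        for desc in parts[1:]:
            value, unit = desc[:-1], desc[-1]
            if unit == "w":
                w = int(value)
            elif unit == "h":
                h = int(value)
            elif unit == "x":
                x = float(value)
        candidates.append((url, w, h, x))
    return candidates

parse_srcset("logo.png", "logo-HD.png 2x")
# → [("logo.png", None, None, 1.0), ("logo-HD.png", None, None, 2.0)]
```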
> The problem with this proposal is that user agents want to prefetch the
> images before they start layout. This means they don't know what the
> available dimensions _are_.
> In practice, the only information regarding dimensions that can be
> usefully leveraged here is the viewport dimensions. This isn't the end of
> the world, though -- there's often going to be a direct correlation
> between the dimensions of the viewport and the dimensions of the images.
> For example, a page's heading banner will typically be as wide as the
> page. If there's two columns, each column is likely to be half the width
> of the page. If there's a fixed-width column that has one width at wide
> resolutions and another width at narrow resolutions, then it will likely
> have a graphical header, or background image, that is also fixed width,
> with a different fixed width based on the width of the page.
> The net result of this is that we can just change the proposal above to
> use the viewport dimensions instead of the available width, and it should
> work about as well. It does mean, though, that we can't use the height=""
> and width="" attributes as fallback dimensions for the other ones.
> Another change we can make is to not require that all candidates have all
> the descriptors. We already said above that a missing pixel resolution
> would mean 1x, but what about missing dimensions? Well, one option is to
> just say that if you have no dimensions, you're appropriate for infinitely
> wide screens. This gets around the problem of saying that you have to
> basically give an arbitrarily large dimension for the biggest image, or
> saying that we have to use the widest image if none of the images
> are wide enough. (And ditto height.)
> The resulting proposal is what I've put in the spec.
> On Sat, 4 Feb 2012, irakli wrote:
>> Something as simple as if browsers passed along device's width/height
>> information as part of the initial request headers would go a very very
>> long way, making it possible to make a lot of intelligent decisions on
>> the server-side (eventually allowing "media-queries-like" systems on the
>> server-side).
> I don't think it makes sense to base anything on the _device_ dimensions.
> You'd want to base it on the dimensions of the available space, which can
> change dynamically. (On my iMac at home, I never browse full-screen.)
> On Mon, 6 Feb 2012, Matthew Wilcox wrote:
>> The problem with using viewport instead of device size is client-side
>> caching. It completely breaks things. As follows:
>> 1) The user requests the site with a viewport at less than the device
>> screen size.
>> 2) The user browses around for a bit on a few pages.
>> 3) The user maximises their browser.
>> 4) All of the images now in the browser cache are too small.
>> How does the client know to re-request all those pre-cached images,
>> without making an arduous manual JS-reliant cache manifest? Or without
>> turning off caching entirely?
> With the logic proposed above, the user agent could change the image on
> the fly. It would also handle the user zooming, the user changing
> monitors, the user plugging in a new monitor, etc.
> On Mon, 6 Feb 2012, James Graham wrote:
>> On Mon, 6 Feb 2012, Boris Zbarsky wrote:
>> > On 2/6/12 11:42 AM, James Graham wrote:
>> > Sure.  I'm not entirely sure how sympathetic I am to the need to
>> > produce "reduced-functionality" pages...  The examples I've
>> > encountered have mostly been in one of three buckets:
>> >
>> > 1) "Why isn't the desktop version just like this vastly better mobile one?"
>> > 2) "The mobile version has a completely different workflow necessitating a
>> > different url structure, not just different images and CSS"
>> > 3) "We'll randomly lock you out of features even though your browser and
>> > device can handle them just fine"
>> The example I had in mind was one of our developers who was hacking an
>> internal tool so that he could use it efficiently on his phone.
>> AFAICT his requirements were:
>> 1) Same URL structure as the main site
>> 2) Less (only critical) information on each screen
>> 3) No looking up / transferring information that would later be thrown away
>> 4) Fast => No extra round trip to report device properties
>> AFAIK he finally decided to UA sniff Opera mobile. Which is pretty sucky even
>> for an intranet app. But I didn't really have a better story to offer him. It
>> would be nice to address this kind of use case somehow.
> I'm not really convinced we want to encourage #2. It drives me crazy when
> I find I can't use a site because the site decided I was using a phone
> instead of a computer. (The others are satisfied by just making the site
> work at all window sizes, nothing to do with mobile vs desktop.)
> On Tue, 7 Feb 2012, James Graham wrote:
>> This basically amounts to "the requirements were wrong". Since the same
>> developer made both the desktop and mobile frontends and he is one of
>> the major users of the system, and the mobile frontend was purely
>> scratching his own itch, I find it very difficult to justify the
>> position that he ought to have wanted something different to what he
>> actually wanted and made.
> I agree that some people do want this. I'm just saying we should probably
> not encourage it.
>> In general the idea that sites/applications should be essentially the
>> same, but perhaps slightly rearranged, regardless of the device they run
>> on just doesn't seem to be something that the market agrees with. It
>> seems to me that we can either pretend that this isn't true, and watch
>> as platform-specific apps become increasingly entrenched, or work out
>> ways to make the UX on sites that target multiple types of hardware as
>> good as possible.
> Honestly I think as devices get more capable, the direction will be
> towards there being just One Web. We're already seeing mobile sites become
> much more functional than they were a few years ago.
> They'll always be a little different, because the user interaction is
> different (touch vs keyboard/mouse, e.g.), but with new product classes
> being introduced (tablets, smaller laptops, bigger "phablets", etc) I just
> don't see it as viable for us to continue having per-product-class Web
> sites; we'll instead see "responsive design".
> On Mon, 13 Feb 2012, Gray Zhang wrote:
>> 1. On a product description page of a shopping site, there are several
>> *main* pictures of the product, along with about twenty or so camera
>> pictures of the product taken from different angles. When the HTML is
>> parsed, browsers by default simultaneously start downloading all images,
>> potentially making some of the *main* ones invisible.
> This seems like something that's currently relatively easily handled using
> hidden="" or CSS, with some JS (or more CSS) to decide when to show what.
>> 2. On an album page where hundreds of pictures are expected to be shown,
>> it is often required that pictures currently in a user's screen should
>> appear as fast as possible. Loading of a picture outside the screen can
>> be deferred to the time that the picture enters or is about to enter the
>> screen, for the purpose of optimizing the user experience.
> This seems like something the browser can do automatically today.
>> 3. For a site with limited bandwidth on the server side, it is
>> preferable to minimize the amount of data transferred per each page
>> view. 70% of the users only read the first screen and hence pictures
>> outside the first screen don't need to be downloaded before the user
>> starts to scroll the page. This is to reduce server-side cost.
> This is harder for browsers to do automatically, since many pages depend
> on non-displayed images getting downloaded.
>>  Current Solution and Its Drawbacks
>> The current solution pretty much consists of three steps:
>> 1. The server outputs <img>s with @src pointing to a transparent image,
>> transparent.gif, and with @data-src pointing to the real location of the
>> image.
>> 2. Listen to the window.onscroll event.
>> 3. The event handler finds all <img>s in the visible area and sets
>> their @src to the stored @data-src value.
> Not ideal, yeah.
> In practice, this is something that has indeed been done in JavaScript;
> indeed "infinite scroll" pages are something that has become quite common
> (see e.g. Bing's image search results). I don't know that it's especially
> critical to be able to have the <img> elements already in the page not yet
> load; it seems easier to just not add them to the page until the user
> scrolls down.
> On Thu, 10 May 2012, Aryeh Gregor wrote:
>> I'd like to throw in another use-case that might be addressable by the
>> same feature: allowing "Save As..." to save a different version of the
>> image (e.g., higher-res) than is actually displayed.  Wikipedia, for
>> instance, often has very high-res images that get scaled down for
>> article viewing to save bandwidth and avoid possibly-ugly rescaling. (At
>> least historically, some browsers used very bad rescaling algorithms.)
>> It would be nice if when users saved the image, they saved the full-res
>> version.  Perhaps browsers could save the highest-res image available,
>> rather than the one that happens to be used for display right now.
> This seems like it would be handled adequately by the proposal above.
>> Another obvious use-case I'd like to point out is print.  It's not quite
>> as trendy as the iPhone Retina display -- in fact maybe it's getting
>> passé :) -- but print is generally higher-res than display, and it's
>> common for images to appear pixelated when printing.  This use-case
>> might have the same requirements as the iPhone Retina display, but it
>> should be kept in mind in case it doesn't.
> Agreed.
>> A fourth use-case I'd like to suggest is vector images.  Last I checked,
>> many authors don't want to serve SVG directly because too many browsers
>> don't support it in <img> (or at all).  Perhaps it should be possible to
>> specify "vector" or something in place of a scale factor, to indicate
>> that the image should be suitable for all resolutions.
> Will there ever be any browsers that support srcset="" but not SVG?
> On Thu, 10 May 2012, Tab Atkins Jr. wrote:
>> That all said, I don't like the "2x" notation.  It's declaring "this
>> image's resolution is twice that of a normal image".  This has two
>> problems.  For one, we already have a unit that means that - the dppx
>> unit.  Using "2dppx" is identical to the meaning of "2x".  Since
>> image-set() is newer than the dppx unit, we should change it to use
>> <resolution> instead.
> dppx is pretty ugly. I agree with hober's "2x" design.
>> For two, I'm not sure that it's particularly obvious that when you say
>> "2x", you should make sure your image was saved as 192dpi.  You have
>> to already know what the default resolution is.
> You don't have to. The resolution of the image is ignored.
>> As well, I think that values like 300dpi are pretty common, and they
>> don't map to integral 'x' values.  If people say "screw it" and use
>> "3x", this'll be slightly wrong and I think will cause ugly blurring.
>> If we make this take <resolution>, people can just use the dpi unit.
> 3.125x isn't particularly difficult to specify.
> On Thu, 10 May 2012, Mathew Marquis wrote:
>> Hey guys. Don’t know if it’s too early to chime in with this, but we
>> were told by some members of the Chrome team that any browser that
>> supports DNS prefetching — including assets — wouldn’t consider
>> “looking-ahead” on the img tag as an option. The original src would be
>> fetched in any case, saddling users with a redundant download.
> I don't understand what this means.
> On Sun, 13 May 2012, Jason Grigsby described some use cases:
>> Document author needs to display different versions of an image at
>> different breakpoints based on what I’m calling, for a lack of a better
>> phrase, art direction merits.
>> * Example 1: News site shows a photograph of Obama speaking at an
>> auto factory. On
>> wide screens, the news site includes a widescreen version of the
>> photograph in which the cars being built can clearly be seen. On small
>> screens, if the photograph is simply resized to fit the screen, Obama’s
>> face is too small to be seen. Instead, the document author may choose to
>> crop the photograph so that it focuses in on Obama before resizing to
>> fit the smaller screen.
>   <img alt="Obama spoke at the factory." src="factory.jpeg"
>        srcset="obama-factory-face.jpeg 500w">
>> * Example 2: On the Nokia Browser site where it describes the Meego
>> browser, the Nokia Lumia is shown horizontally on wide screens. As the
>> screen narrows, the Nokia Lumia is then shown vertically and cropped.
>> Bryan and Stephanie Rieger, the designers of the site, have talked about
>> how on a wide screen, showing the full phone horizontally showed the
>> browser best, but on small screens, changing the img to vertical made
>> more sense because it allowed the reader to still make out the features
>> of the browser in the image.
>   <img alt="The Nokia Browser for MeeGo can display the BBC site well."
>        src="landscape.png"
>        srcset="vertical-cropped.png 500w">
>> For a variety of reasons, images of various pixel density are needed.
>> These reasons include current network connection speed, display pixel
>> density, user data plan, and user preferences.
>> * Example 1: The use of high-density images for the new iPad on
>> Apple.com.
>   <img alt="" src="ipad@1.png" srcset="ipad@2.png 2x">
>> * Example 2: A user on a slow network or with limited data left may
>> explicitly declare that he or she would like to download a high
>> resolution version because they need to see a sharper version of an
>> image before buying a product, etc.
> That's up to the UA, but would be possible with the srcset="" feature.
> On Sat, 12 May 2012, Mathew Marquis wrote:
>> I don’t mind saying that the `img set` markup is inscrutable to the
>> point where I may well be missing the mark on what it’s trying to
>> achieve, but it certainly seems to overlap with many of the things for
>> which media queries were intended—albeit in a completely siloed way. As
>> media queries continue to advance over time, are there plans to continue
>> adding functionality to `img set` in parallel? I would hate to think we
>> could be painting ourselves into a corner for the sake of easier
>> implementation on the UA side.
> I could see us adding more things, but I don't think it would be
> automatic, certainly.
> I don't think reusing media-queries directly makes sense, they're a bit
> unwieldy for this kind of thing. I also don't think it would make sense to
> have a direct 1:1 mapping, since that would be more complicated than
> necessary without really solving any more problems.
> On Mon, 14 May 2012, Odin Hørthe Omdal wrote:
>> All optional replacements of the src will have to be fitted in the same
>> box as the original src. That might actually require you to specify both
>> width and height upfront. Of course, people won't really do that, so I
>> guess we're bound to get differing behaviour... Hm.
>> What do people think about that? What happens here? You have no info on
>> the real size of the picture. I guess maybe the browser should never
>> load any srcset alternatives then? If you have no information at all
>> it's rather hard to make a judgement.
>> A photo gallery wants to show you a fullscreen picture, and give you:
>>    <img src=2048px.jpg srcset="4096px.jpg 2x">
>> In this example, we (humans :P) can easily see that one is 2048 px and
>> the other 4096 px. If I'm viewing this on my highres Nokia N9, a naïve
>> implementation could pick the 2x, because it knows that's nicely highres
>> just like its own screen.
>> But it would actually be wrong! It would never need anything else than
>> the 2048 px for normal viewing because it is anyway exceeding its real
>> pixels on the screen.
> The way I specced it, the 4096 picture would be rendered as 2048
> CSS pixels, at double density. So it will look better than if the 2048
> pixel image had been used.
> On Sun, 13 May 2012, Odin Hørthe Omdal wrote:
>> Say if you're in a browser optimizing for low bandwidth usage, and some
>> quality at the cost of speed.  The viewport is 800x600.  In the normal
>> case, the browser would choose hero.jpg because it fits well with its
>> resource algorithm. However, since being in the special mode, it defers
>> the prefetch of the image and waits for layout, where it can see that
>> this picture lies inside a 150px wide box - so it fetches hero-lo.jpg
>> because it doesn't need more.
> I've made sure that the browser has that flexibility.
> On Sun, 13 May 2012, Mathew Marquis wrote:
>> The amount of “developers can never be trusted with this” sentiment I’ve
>> heard from the members of this group is incredibly depressing.
> Agreed that it's depressing. But I don't think it's misplaced.
> It's not all authors. It's sufficient authors that it matters, though.
> On Sun, 13 May 2012, Jason Grigsby wrote:
>> Edward’s original <img srcset> proposal was pretty straight forward, but
>> as it has grown to try to address more use cases, the syntax has become
>> more convoluted[1]. I read the latest proposal multiple times last night
>> and still couldn’t figure out how it would work.
>> [1] http://junkyard.damowmow.com/507
>> It may be that the proposal is written in language that implementors
>> understand and that it needs to be rewritten to make it clearer for
>> authors how it would work. Or it could be an indication that the syntax
>> is too terse and confusing for authors (which is currently the feedback
>> the community group is receiving).
> Oh don't pay any attention to that, that's just a draft (an extract from a
> draft of this very e-mail in fact!) that I was showing some people on IRC
> for a sanity check. This e-mail, and the draft extract above taken
> from it, are intended for discussion in the context of this thread;
> they're not intended to be spec text.
> The spec text is more obtuse but also more precise (aimed at
> implementors), and hopefully more understandable (aimed at authors and
> tutorial writers); the text in the spec should thus be clearer.
> On Sun, 13 May 2012, Benjamin Hawkes-Lewis wrote:
>> Perhaps changing the syntax to avoid confusion with units might help too:
>>   <img src="a.jpg" alt=""
>>         set="a.jpg 600x200 1x,
>>                 b.jpg 600x200 2x,
>>                 c.jpg 200x200">
> I imagine in most cases the vertical dimension will be omitted, at least
> in sites with Western typography.
> On Sun, 13 May 2012, Bjartur Thorlacius wrote:
>> On 5/13/12, Kornel Lesiński <kornel@geekhood.net> wrote:
>> > I think layout (media queries) and optimisation cases are orthogonal
>> > and it would be a mistake to do both with the same mechanism.
>> My knee-jerk reaction to the above thought is that layout should be done
>> using CSS and any optimizations left up to the UA. A bandwidth
>> constrained UA could request a downsized thumbnail that fits the size of
>> the <object>/<img>/<video poster>/<a> element, or render an
>> appropriately sized bitmap from a SVG.
>> The problem with that, though, is that then bandwidth constraints can't
>> affect layout. Users should be able to configure UAs to use downsized
>> images even given a large viewport, if only to save bandwidth and
>> reserve a larger fraction of the viewport for text columns.
> I'm not sure what the solution for bandwidth should be -- so far I haven't
> seen any clear indication of how to address it. The client doesn't know
> what the bandwidth is like; the server sort of does (since it can know its
> own bandwidth and can therefore deduce if the other end is more
> constrained than it is) but even then it's highly variable and it's not
> clear what to do about it. (See my earlier comments.)
> Leaving the bandwidth issues out of it, I agree with your other comments
> -- the layout issues, and media queries, belong in CSS. What belongs in
> the markup is the issue of different content for different environments.
> The two are of course linked, but not inextricably.
> On Sun, 13 May 2012, Benjamin Hawkes-Lewis wrote:
>> On Sun, May 13, 2012 at 8:55 PM, Bjartur Thorlacius
>> <svartman95@gmail.com> wrote:
>> > But the chosen image resolution might be a factor for choosing layout.
>> Maybe we should think of a way to expose _that_ information to CSS,
>> rather than going in the other direction.
>> <section>
>>   <img src="a.jpg" alt=""
>>        set="a.jpg 600x200 1x,
>>                b.jpg 600x200 2x,
>>                c.jpg 200x200">
>> </section>
>> section { /* generic style rules */ }
>> section! img:intrinsic-aspect-ratio(<2) { /* specific overrides for
>> section when the UA picks the narrow image */ }
> That seems reasonable to me, but we should let the CSSWG address it.
> On Sun, 13 May 2012, Kornel Lesiński wrote:
>> For pure bandwidth optimisation on 100dpi displays (rather than avoiding
>> sending too-large 200dpi images to users with 100dpi displays), explicit
>> filesize information may be the solution:
>> <img srcset="q95percent.jpg size=100KB, q30percent.jpg size=20KB">
>> then the UA can easily decide how much bandwidth it can use (e.g. aim
>> to download any page in 5 seconds, so try to get image sizes to add up
>> to less than 5 * network speed in B/s).
> That would be an interesting way of giving the user agent the information,
> true (or something similar, e.g. "foo.jpe 240k 100w", for "kilobyte"). But
> that doesn't address the question of how the user agent is supposed to
> know what to do with that information.
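Kornel's 5-second rule quoted above can be sketched as a selection pass (illustrative only; the `size=` annotation is his proposal, and the figures are the thread's example numbers):

```python
def pick_variant(variants, bandwidth_bytes_per_s, budget_s=5.0):
    """Pick the largest variant whose download fits the time budget,
    falling back to the smallest one otherwise. 'variants' is a list of
    (url, size_in_bytes) pairs, e.g. as declared by a srcset-like
    attribute such as 'q95percent.jpg size=100KB, q30percent.jpg size=20KB'."""
    budget_bytes = bandwidth_bytes_per_s * budget_s
    affordable = [v for v in variants if v[1] <= budget_bytes]
    if affordable:
        return max(affordable, key=lambda v: v[1])
    return min(variants, key=lambda v: v[1])

variants = [("q95percent.jpg", 100_000), ("q30percent.jpg", 20_000)]
pick_variant(variants, bandwidth_bytes_per_s=50_000)  # fast link: 100 KB image
pick_variant(variants, bandwidth_bytes_per_s=2_000)   # slow link: 20 KB image
```

Note that this sidesteps exactly the open question Hixie raises: the UA still has to estimate `bandwidth_bytes_per_s` somehow.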
> On Mon, 14 May 2012, Anne van Kesteren wrote:
>> On Mon, May 14, 2012 at 10:55 AM, Matthew Wilcox <mail@matthewwilcox.com> wrote:
>> > have any of you seen this proposal for an alternative solution to the
>> > problem?
>> >
>> > http://www.w3.org/community/respimg/2012/05/13/an-alternative-proposition-to-and-srcset-with-wider-scope/
>> >
>> > I like the general idea and from an author perspective this seems
>> > great; but I know nothing of the browser/vendor side of the equation -
>> > is this do-able?
>> Adding a level of indirection is actually not that great, as it makes it
>> harder to understand what is going on. Also, if you work on sites in
>> teams, it's not a given that whoever has access to <head> also has
>> access to the templates being authored, let alone full control over how
>> resources are stored.
> On Mon, 14 May 2012, Matthew Wilcox wrote:
>> I'd contest that it is no harder to understand than understanding why
>> your CSS behaves differently when JS acts on the mark-up. We are used
>> to one stack defining how another acts. We do it all the time: adding
>> classes to mark-up to control display, or just the cascade on its own,
>> does this.
> A lot of authors have huge trouble with class selectors, script
> manipulating the DOM, and the cascade. So that's not necessarily a good
> argument. :-) Indirection has proven quite challenging.
>> Is this harder to understand than <picture> or srcset is what really
>> matters. Anything we do to resolve this resource adaption problem will
>> by necessity complicate things. Is this better than the alternatives?
> Well, srcset doesn't have indirection. That goes a long way towards making
> it simpler!
> On Mon, 14 May 2012, Mathew Marquis wrote:
>> It’s worth noting that a practical polyfill may not be possible when
>> using `img set`, for reasons detailed at length elsewhere:
>> http://www.alistapart.com/articles/responsive-images-how-they-almost-worked-and-what-we-need/
>> http://www.netmagazine.com/features/state-responsive-images
>> Long story short: attempting to write a polyfill for `img set` leaves us
>> in the exact situation we were in while trying to solve the issue of
>> responsive images strictly on the front-end. We would be saddling users
>> with a redundant download—first for the original src, then for the
>> appropriately-sized source if needed.
>> Where the new element would be all but ignored by existing browsers,
>> efficient polyfills become possible. In fact, two `picture` polyfills
>> exist today:
>> http://wiki.whatwg.org/wiki/Adaptive_images#Functional_Polyfills
> As a general rule, the approach we have taken with HTML is to focus on
> what can be backwards-compatible -- what can degrade in legacy UAs --
> while leaving the new features just for new browsers. Certainly, focusing
> on the short-term issue of what can be shimmed and what cannot is not
> optimising for the long term, which is the higher concern.
> --
> Ian Hickson               U+1047E                )\._.,--....,'``.    fL
> http://ln.hixie.ch/       U+263A                /,   _.. \   _\  ;`._ ,.
> Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'
Received on Tuesday, 15 May 2012 09:39:09 GMT