Re: What's wrong with UA sniffing and server side processing?

> > I can understand the need to avoid a server-side solution for things
> like epub but I wonder how much the desire to avoid a server-side approach
> is limiting the possibilities for a solution (not that I've got any magic
> answers there).
>
> We are certainly not avoiding it. There are actually a few people who run
> CDNs in the group and we've tried to make sure we accommodate their needs
> (e.g., http://www.cdnconnect.com/). Adam Bradley can probably speak to
> this (hopefully in positive terms! :) ).
>


One constraint I've placed on my server-side solution is to avoid any use
of user-agent sniffing whatsoever. Once you go down the path of varying
responses on the user-agent, you immediately lose the benefits of a content
delivery network. Even after normalizing user-agent strings, there are too
many variations out in the wild to make caching static images on the edge
worthwhile. Additionally, each server out there would need to maintain a
database of every browser and each of its versions' capabilities.
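
As a rough illustration (the code and UA strings below are placeholders I
made up, not our actual edge logic), here is why varying a cache key on
User-Agent fragments the cache while varying on Accept does not:

    import hashlib

    def cache_key(url, request_headers, vary_headers):
        """Build an edge-cache key from the URL plus each header in Vary."""
        parts = [url] + [request_headers.get(h, "") for h in vary_headers]
        return hashlib.sha1("|".join(parts).encode()).hexdigest()

    # Two Chrome installs with identical capabilities still produce two
    # distinct cache entries when the key includes the User-Agent string:
    ua_a = "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 Chrome/28.0.1500.29"
    ua_b = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_4) Chrome/28.0.1500.36"
    print(cache_key("/img/hero.jpg", {"User-Agent": ua_a}, ["User-Agent"]))
    print(cache_key("/img/hero.jpg", {"User-Agent": ua_b}, ["User-Agent"]))

    # Varying on Accept collapses the key space to a handful of entries
    # (essentially "image/webp advertised" or not):
    print(cache_key("/img/hero.jpg", {"Accept": "image/webp,*/*;q=0.8"}, ["Accept"]))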

In my eyes a server-side solution still ultimately needs to generate a
response that can be heavily cached at every level. My solution focuses on
making it easier to create all of the different image variations
dynamically, but the end result is still a static image which can be cached
by the browser, proxies, the edge, etc. Our image breakpoint feature allows
users to maintain just the source image while easily creating all of the
variations they need:
http://www.cdnconnect.com/docs/image-breakpoints/overview
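
To make the idea concrete, here is a minimal sketch of generating
breakpoint variations from a single source image; the widths, naming
scheme, and use of Python's Pillow library are my own assumptions, not
CDN Connect's actual implementation:

    from PIL import Image

    BREAKPOINTS = [320, 640, 1024, 2048]  # assumed widths; pick per project

    def generate_breakpoints(src_path):
        src = Image.open(src_path)
        for width in BREAKPOINTS:
            if width >= src.width:
                continue  # never upscale past the source
            height = round(src.height * width / src.width)
            variant = src.resize((width, height), Image.LANCZOS)
            out_path = src_path.rsplit(".", 1)[0] + "-%dw.jpg" % width
            # Each output is a plain static file: cacheable by browsers,
            # proxies, and CDN edges with no per-request logic.
            variant.convert("RGB").save(out_path, "JPEG", quality=85)

    generate_breakpoints("hero.jpg")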

I recommend that web authors find a process which only requires them to
maintain the source image (including maintaining just the Photoshop or
Illustrator file:
http://www.cdnconnect.com/docs/image-api/output-format), and let the
machines handle the repeatable work via automation:
http://www.netmagazine.com/news/dev-argues-kill-save-web-132668

However, we're still able to do feature detection using Chrome's new Accept
header information. Instead of checking which browser it is and then
maintaining the logic to know whether that browser supports X, we simply
let the browser tell us what it can do through the "Accept" header. For
example, we're able to automatically encode images to WebP:
http://blog.netdna.com/developer/how-to-reduce-image-size-with-webp-automagically/
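
A minimal sketch of that negotiation (my own WSGI-style code, not CDN
Connect's), where the browser's Accept header alone decides the format and
the response still stays cacheable:

    def best_image_type(accept_header):
        # Chrome sends e.g. "Accept: image/webp,*/*;q=0.8" for images,
        # so no user-agent database is needed.
        return "image/webp" if "image/webp" in accept_header else "image/jpeg"

    def load_encoded_image(name, content_type):
        # Reads a pre-encoded static variant generated ahead of time.
        ext = "webp" if content_type == "image/webp" else "jpg"
        with open("%s.%s" % (name, ext), "rb") as f:
            return f.read()

    def serve_image(environ, start_response):
        content_type = best_image_type(environ.get("HTTP_ACCEPT", ""))
        start_response("200 OK", [
            ("Content-Type", content_type),
            # Vary on Accept (a handful of values), not User-Agent
            # (thousands of values), so edge caches stay effective.
            ("Vary", "Accept"),
            ("Cache-Control", "public, max-age=31536000"),
        ])
        return [load_encoded_image("hero", content_type)]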

Server-side solutions should essentially be automation tools that resize,
crop, encode and optimize. However, in my opinion the server should not be
housing the "Well, if it's Opera 11.5 or higher then it supports WebP" type
of logic, or the "if it's an iPhone 4 or newer, or an iPad 3 or newer, then
serve the retina image" type of logic. This is why I believe the client
hints proposal (https://github.com/igrigorik/http-client-hints) is a great
step forward: it lets the web build better tools to automate our processes,
reduce requests, and optimize images for the device. Additionally, all of
what I mentioned above plays nicely with any client-side solution, whether
it's <picture>, @srcset, or whatever custom JS hackery is used as a
responsive images solution.
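
For instance, picking an asset density from a device-pixel-ratio hint could
look something like the sketch below; the exact header names have changed
across draft revisions, so treat the "DPR" value here as illustrative only:

    def pick_density(dpr_header, available=(1.0, 1.5, 2.0)):
        """Choose the closest generated variant at or above the hinted DPR."""
        try:
            dpr = float(dpr_header)
        except (TypeError, ValueError):
            dpr = 1.0  # hint absent or malformed: fall back to the 1x asset
        candidates = [d for d in available if d >= dpr]
        return min(candidates) if candidates else max(available)

    print(pick_density("2.0"))  # -> 2.0 (serve the retina variant)
    print(pick_density("1.3"))  # -> 1.5
    print(pick_density(None))   # -> 1.0 (no hint, no device special-casing)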

-Adam Bradley

Received on Sunday, 30 June 2013 21:42:04 UTC