Re: What's wrong with UA sniffing and server side processing?

There's a lot of discussion going on today around delivering responsive code with minimal delivery overhead. In my opinion, delivery of images and JavaScript (and even HTML fragments) is manageable to negotiate in a feature-based manner without server-side detection today (while we wait for standards to make things even better). CSS delivery continues to be a problem area, as there are no native mechanisms in HTML for making conditional, feature-based requests for stylesheets (all stylesheet links in the page source are fetched today, regardless of media applicability). Still, gzip tends to negate the bulk of CSS transfer overhead, again while we wait for better native tools.

In responsive projects, I tend to use UA detection as a last resort or as a fallback for undetectable features or bugs: run a feature test first, and then, if necessary, use UA conditions to blacklist or whitelist browsers known to have falsely passed or failed. This sort of tacit knowledge requires broad device testing to discover (for example, browsers that technically support position: fixed from a device-database standpoint, but do so in an unusable way).
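That ordering, feature test first and a UA list only to correct known false positives, might look like the sketch below. The blacklist patterns are made up for illustration, not a real known-bad list:

```javascript
// Feature-test first; consult a UA blacklist only for browsers known to
// pass the test yet implement the feature unusably.
// The patterns below are hypothetical examples, not vetted data.
var FIXED_POSITION_BLACKLIST = [
  /Opera Mini/i,     // example: proxy browser, repaints make fixed unusable
  /Android [12]\./i  // example: early Android stock browser
];

function supportsPositionFixed(featureTestPassed, userAgent) {
  if (!featureTestPassed) {
    return false; // the feature test itself failed; no UA check needed
  }
  // Test passed: veto only if the UA is on the known-bad list.
  return !FIXED_POSITION_BLACKLIST.some(function (re) {
    return re.test(userAgent);
  });
}
```

In a browser, `featureTestPassed` would come from an actual DOM test (insert a fixed-position element, scroll, compare offsets); passing the result in keeps the decision logic testable on its own.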

Ultimately, device databases store stock information about a device or browser, and that information can't tell us much about the unique per-person and per-request conditions we actually need to care about when delivering responsive designs (screen size vs. actual viewport size, enabled features, user font-size preferences that end up determining the applicable layout, and so on). And UA-based solutions are, of course, not self-sustaining, unlike feature-based qualification.

For device-agnostic HTTP headers, you might check out the Client Hints proposal that is making its way into Chrome, as it shows promise for one day carrying the sort of information we really want to know on the server.

-Scott



On Jun 28, 2013, at 9:46 AM, "Darrel O'Pry" <darrel.opry@imagescale.co> wrote:

> Thanks, all, for making this information available, especially the user-agent-string history. That is priceless.
> 
> I notice that most of these examples focus on user-agent feature detection gone wrong.
> 
> The general reasons for avoiding UA sniffing seem to be...
> 
> 1) UA strings are loosely defined, and browsers readily copy each other's UA strings; this is rooted in UA-based content delivery practices in the early Mosaic, Netscape, and IE days, as per http://webaim.org/blog/user-agent-string-history/.
> 2) They're easily spoofed.
> 3) Historically, a number of bugs have arisen from poor UA parsing in client-side JavaScript.
> 4) We should be writing one-size-fits-all HTML, as per http://css-tricks.com/browser-detection-is-bad/.
> 
> I'm asking about User-Agent detection specifically because I'm currently working on server-side device detection, and UA strings seem to be the most effective tool in combination with WURFL or DeviceAtlas.
> 
> I'd temper what seem to be current positions with the following...
> 
> 1) Standards bodies should recognize that there are valid use cases for User-Agent-based content delivery and device detection, and should try to clean up the User-Agent header implementation or supersede it with a stricter format or additional headers that express features and capabilities. (Standards are slow; don't hold your breath.)
> 2) Spoofing can be valuable, as it provides a way, albeit a hack, for the end user and user agent to control how their capabilities are represented.
> 3) Bugs happen, change happens, and code needs to evolve with its environment.
> 4) In light of responsive design, this may be somewhat outmoded thinking. We're still trying to reuse as much design as possible across devices, but we're also trying to provide the best experience on every device, which means a one-size-fits-all philosophy may not hold up in the contemporary device market.
> 
> 
> In general, I'm a proponent of combining client-side and server-side technologies. Picturefill and srcset offer mechanisms that satisfy most of the needs of responsive design, but they require that existing HTML be changed to support them. HTTP has existing specifications for server-driven content negotiation (my preference, due to the reduced number of requests), and User-Agent is notably one of the content-negotiation headers. I currently lean toward an approach where picturefill or media queries handle art direction, choosing an appropriate crop of an image for a specific viewport, while the server is responsible for resampling an image to different display sizes and densities.
> 
> 
> 
> On Fri, Jun 28, 2013 at 7:22 AM, Marcos Caceres <w3c@marcosc.com> wrote:
> 
> 
> 
> On Friday, June 28, 2013 at 12:08 PM, Jitendra Vyas wrote:
> 
> > http://css-tricks.com/browser-detection-is-bad/
> >
> 
> Which of course, links to the classic:
> http://webaim.org/blog/user-agent-string-history/
> 
> -- 
> Darrel O'Pry
> The Spry Group, LLC.
> http://www.spry-group.com
> 718-355-9767 x101

Received on Friday, 28 June 2013 14:31:06 UTC