Re: What's wrong with UA sniffing and server side processing?

On Friday, June 28, 2013 at 2:46 PM, Darrel O'Pry wrote:

> Thanks all for having this information available. Especially the user-agent-string history. That is priceless.  
>  
> I notice that most of these examples focus on user agent feature detection gone wrong.  
>  
> The general reasons for avoiding UA sniffing seem to be...
>  
> 1) UAs are loosely defined and browsers readily copy each other's UA strings; this is rooted in UA-based content delivery practices in the early Mosaic, Netscape, and IE days, as per http://webaim.org/blog/user-agent-string-history/.  
> 2) They're easily spoofed.
> 3) Historically, a number of bugs have arisen from poor UA parsing in client-side JavaScript.
> 4) We should be writing one-size-fits-all HTML, as per http://css-tricks.com/browser-detection-is-bad/.
>  
> I'm asking about User Agent detection specifically because I'm currently working on server side device detection and UA strings seem to be the most effective tool in combination with WURFL or DeviceAtlas.  
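For readers following along, naive server-side UA matching is roughly the following (a sketch only; real deployments query a device database such as WURFL or DeviceAtlas rather than hand-rolled patterns, and those APIs aren't shown here):

```python
import re

# A deliberately naive sketch of server-side UA matching. In practice the
# pattern table is what a device database (WURFL, DeviceAtlas) replaces;
# this only illustrates the mechanism under discussion.
MOBILE_PATTERN = re.compile(r"Mobile|Android|iPhone|iPad|BlackBerry", re.I)

def is_mobile(user_agent):
    """Return True if the User-Agent string looks like a mobile device."""
    return bool(MOBILE_PATTERN.search(user_agent or ""))

ua = ("Mozilla/5.0 (iPhone; CPU iPhone OS 6_1 like Mac OS X) "
      "AppleWebKit/536.26 (KHTML, like Gecko) Mobile/10B144")
print(is_mobile(ua))  # True
```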
>  
> I'd temper what seem to be current positions with the following...
>  
> 1) Standards bodies should realize that there are some valid use cases for User Agent based content delivery and device detection, and should try to clean up the User-Agent header implementation or supersede it with a stricter format  
Efforts to standardize them have mostly failed.  I think OMA tried to, but I can't find the link to the spec.  
> or additional headers that express features and capabilities. (Standards are slow, don't hold your breath)

See:  
https://github.com/igrigorik/http-client-hints

As one thing currently being explored.  
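As a rough sketch of how a server might consume such a hint (the `DPR` header name follows the draft proposal and may change; the file names here are made up):

```python
# Sketch: a server choosing an image variant from a Client Hints header.
# "DPR" (device pixel ratio) follows the draft proposal; the final header
# names may differ. The "@2x" file-naming convention is an assumption.

def pick_variant(headers, base="hero"):
    """Pick an image file based on the reported device pixel ratio."""
    try:
        dpr = float(headers.get("DPR", "1"))
    except ValueError:
        dpr = 1.0  # fall back to 1x on a malformed hint
    suffix = "@2x" if dpr >= 2 else ""
    return base + suffix + ".jpg"

print(pick_variant({"DPR": "2.0"}))  # hero@2x.jpg
print(pick_variant({}))              # hero.jpg
```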

> 2) Spoofing could be valuable, as it does provide a way, albeit a hack, for the end user and user agent to control how their capabilities are represented.
Yes, it's actually quite useful in Safari for instance. But it's still considered "bad" for the reasons already cited in the linked documents.      
> 3) Bugs happen, change happens, and code needs to evolve with its environment.

Sure, but see the first link I sent about IE. It only ended up making things worse.   
>  
> 4) In light of responsive design, this may be somewhat outmoded thinking. We're still trying to re-use as much design as possible for all devices, but we're also trying to provide the best experience on every device, which means a one-size-fits-all philosophy might not be as valid in the contemporary device market.

Yeah… this is the "challenge" for the Web. Mat Marquis gave a really good talk about this recently. He might be able to provide a link.  
> In general I'm a proponent of a combination of client side and server side technologies. Picturefill and srcset offer mechanisms that satisfy most of the needs of responsive design; however, they require that existing HTML be changed to support them.  

I assume by "picturefill" you mean <picture>.   
> In HTTP there are existing specifications for server driven content negotiation (my preference due to reduced number of requests), User-Agent is notably one of the content negotiation headers.

My understanding is that HTTP-based content negotiation has more or less been acknowledged as a failure on the Web. See:
http://wiki.whatwg.org/wiki/Why_not_conneg  
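For reference, the server-driven negotiation the quote refers to keys the response on request headers such as Accept. A minimal sketch of the idea (not a full q-value parser; the format choices are illustrative):

```python
# Sketch of server-driven content negotiation on the Accept header.
# Deliberately simplified: real negotiation parses q-values and media
# ranges per the HTTP spec; this only shows the general mechanism.

def negotiate_image_type(accept_header):
    """Pick an image format based on a simplified Accept header check."""
    accept = (accept_header or "").lower()
    if "image/webp" in accept:
        return "image/webp"
    return "image/jpeg"  # safe default every browser can decode

print(negotiate_image_type("image/webp,*/*;q=0.8"))   # image/webp
print(negotiate_image_type("image/png,image/*;q=0.8")) # image/jpeg
```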
> I currently lean toward an approach where picturefill or media queries are used for art direction, choosing an appropriate crop of an image for a specific viewport, and the server is responsible for re-sampling an image to different display sizes and densities.

If I've understood correctly, I think most people in the group would generally agree with the above - though we are mindful that a solution can't exclusively require server side processing (though it must absolutely enable it). It's how we landed at <picture> and why the WHATWG landed at srcset.  
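To make that division of labor concrete, here is a minimal sketch of the server-side half: computing the pixel dimensions to resample to, given a CSS display width and a device pixel ratio. (The actual decode/resize step via an imaging library is omitted; the function name and cap-at-source behavior are assumptions.)

```python
# Sketch: the arithmetic behind server-side re-sampling. The client (via
# media queries / picturefill) picks the crop; the server only needs the
# target pixel width for a given layout width and density.

def target_pixel_width(css_width, dpr, source_width):
    """Pixel width to resample to for a given CSS width and density.

    Never upscale past the source image's native width.
    """
    return min(int(round(css_width * dpr)), source_width)

# A 320px slot on a 2x display, from a 1200px source:
print(target_pixel_width(320, 2.0, 1200))  # 640
# Capped at the source width when the request would upscale:
print(target_pixel_width(800, 2.0, 1200))  # 1200
```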

Received on Friday, 28 June 2013 15:10:25 UTC