W3C home > Mailing lists > Public > public-respimg@w3.org > June 2013

RE: What's wrong with UA sniffing and server side processing?.

From: Tom Maslen <tom.maslen@bbc.co.uk>
Date: Fri, 28 Jun 2013 15:21:20 +0100
Message-ID: <E64A28D8321E7446B2639224FBA38DC5017E2ABF@bbcxues31.national.core.bbc.co.uk>
To: "Darrel O'Pry" <darrel.opry@imagescale.co>, Marcos Caceres <w3c@marcosc.com>
CC: Jitendra Vyas <jitendra.web@gmail.com>, <public-respimg@w3.org>
The only thing UA sniffing is going to give you with regard to responsive images is the pixel density of the screen; it isn't going to tell you how wide the image needs to be.

UA sniffing isn't all bad, but people need to be aware of the issues around it.  I think UA sniffing is not useful for putting all devices into a specific class of devices, as the definitions of classes are blurry (What's a Chrome Pixel?), people bring prejudices with them (e.g. people giving mobile devices a 50kb alternative to a 1mb desktop webpage), and it's a constant, ongoing battle to keep your device list up to date.

I'd say UA sniffing is very useful if you want to whitelist browsers: for example, to define a group of devices called "legacy IE" that only a fixed set of devices falls into and no new device ever will, while the rest of your devices are served something else.
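A minimal sketch of the kind of closed whitelist described above (the regex and helper name are illustrative, not anything actually shipped; it assumes the classic "MSIE n.0" token that IE 6-8 emit):

```python
import re

# "Legacy IE" whitelist: the set of matching UA strings is closed, because
# no new browser will ever identify itself as MSIE 6, 7, or 8. That closed
# set is what makes this style of sniffing relatively safe.
LEGACY_IE = re.compile(r"MSIE [678]\.")

def is_legacy_ie(user_agent: str) -> bool:
    """Return True only for the fixed group of old IE versions."""
    return bool(LEGACY_IE.search(user_agent))
```

Note that the character class deliberately excludes IE 9 and 10, and that IE 11 (which drops the MSIE token entirely) falls through to the "everything else" bucket, which is the desired behaviour for a whitelist like this.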

Sorry for going slightly off topic.

/t

Tom Maslen
Tech Lead
BBC News Visual Journalism



-----Original Message-----
From: Darrel O'Pry [mailto:darrel.opry@imagescale.co]
Sent: Fri 6/28/2013 2:46 PM
To: Marcos Caceres
Cc: Jitendra Vyas; public-respimg@w3.org
Subject: Re: What's wrong with UA sniffing and server side processing?.
 
Thanks, all, for making this information available, especially the
user-agent-string history. That is priceless.

I notice that most of these examples focus on user agent feature detection
gone wrong.

The general reasons for avoiding UA sniffing seem to be...

1) UAs are loosely defined and browsers readily copy each other's UA
strings; this is rooted in UA-based content delivery practices in the early
Mosaic, Netscape, and IE days, as per
http://webaim.org/blog/user-agent-string-history/.
2) They're easily spoofed.
3) Historically, a number of bugs have arisen from poor UA parsing
in client-side JavaScript.
4) We should be writing one-size-fits-all HTML, as per
http://css-tricks.com/browser-detection-is-bad/.

I'm asking about User Agent detection specifically because I'm currently
working on server-side device detection, and UA strings seem to be the most
effective tool in combination with WURFL or DeviceAtlas.
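To illustrate the lookup step a device-description repository performs (the table, tokens, and field names below are invented for illustration; the real WURFL and DeviceAtlas APIs work differently and hold far richer data):

```python
# Hypothetical device-capability table keyed on a UA substring. A real
# repository like WURFL or DeviceAtlas matches far more precisely; this
# sketch only shows the shape of server-side detection: UA in, caps out.
DEVICE_DB = {
    "iPhone":  {"max_image_width": 640,  "pixel_density": 2.0},
    "Nexus 4": {"max_image_width": 768,  "pixel_density": 2.0},
}

def lookup_capabilities(user_agent: str) -> dict:
    """Map a User-Agent string to stored device capabilities."""
    for token, caps in DEVICE_DB.items():
        if token in user_agent:
            return caps
    # Unknown UA: fall back to a conservative desktop-style default.
    return {"max_image_width": 1024, "pixel_density": 1.0}
```

The weakness Tom describes above lives in that fallback branch: every device the table doesn't know about gets the default, so the table must be maintained forever.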

I'd temper what seem to be current positions with the following...

1) Standards bodies should realize that there are some valid use cases for
User Agent based content delivery and device detection, and should try to
clean up the User-Agent header implementation or supersede it with a
stricter format or additional headers that express features and
capabilities. (Standards are slow; don't hold your breath.)
2) Spoofing could be valuable, as it provides a way, albeit a hack, for
the end user and user agent to control how their capabilities are
represented.
3) Bugs happen, change happens; code needs to evolve with its environment.
4) In light of responsive design, this may be somewhat outmoded thinking.
We're still trying to re-use as much design as possible across all
devices, but we're also trying to provide the best experience on every
device, which means a one-size-fits-all philosophy might not be as valid in
the contemporary device market.


In general I'm a proponent of a combination of client-side and server-side
technologies. Picturefill and srcset offer mechanisms that satisfy most
of the needs of responsive design, but they require that existing HTML
be changed to support their implementation. In HTTP there are existing
specifications for server-driven content negotiation (my preference, due to
the reduced number of requests), and User-Agent is notably one of the content
negotiation headers. I currently lean toward an approach where picturefill or
media queries are used for art direction, choosing an appropriate crop of an
image for a specific viewport, and the server is responsible for
re-sampling an image to different display sizes and densities.
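The server-side half of that split could be sketched like this (the breakpoint list and function are assumptions for illustration; how the server learns the viewport width and density, whether from negotiation headers, a cookie, or a URL parameter, is the open question the thread is about):

```python
# Pick the smallest pre-rendered image variant that still covers the
# device's effective resolution, rather than shipping one-size-fits-all.
AVAILABLE_WIDTHS = [320, 640, 1024, 2048]  # assumed render breakpoints

def pick_variant(viewport_width: int, pixel_density: float) -> int:
    """Return the width of the variant to serve for this client."""
    needed = viewport_width * pixel_density
    for width in AVAILABLE_WIDTHS:
        if width >= needed:
            return width
    return AVAILABLE_WIDTHS[-1]  # cap at the largest rendition we have
```

So a 320px viewport at 2x density would be served the 640px rendition, while art direction (which crop to use) stays on the client in picturefill or media queries.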



On Fri, Jun 28, 2013 at 7:22 AM, Marcos Caceres <w3c@marcosc.com> wrote:

>
>
>
> On Friday, June 28, 2013 at 12:08 PM, Jitendra Vyas wrote:
>
> > http://css-tricks.com/browser-detection-is-bad/
> >
>
> Which of course, links to the classic:
> http://webaim.org/blog/user-agent-string-history/
>
>
>
>


-- 
Darrel O'Pry
The Spry Group, LLC.
http://www.spry-group.com
718-355-9767 x101


http://www.bbc.co.uk/
This e-mail (and any attachments) is confidential and may contain personal views which are not the views of the BBC unless specifically stated.
If you have received it in error, please delete it from your system.
Do not use, copy or disclose the information in any way nor act in reliance on it and notify the sender immediately.
Please note that the BBC monitors e-mails sent or received.
Further communication will signify your consent to this.
					
Received on Sunday, 30 June 2013 21:42:07 UTC
