- From: Jo Rabin <jrabin@mtld.mobi>
- Date: Thu, 02 Oct 2008 12:41:47 +0100
- To: Rotan Hanrahan <rotan.hanrahan@mobileaware.com>
- CC: public-bpwg-ct <public-bpwg-ct@w3.org>
Thanks Rotan

Yes, interesting point about the type of site. Given that the types of site you mention are likely to have specific mobile-tailored interfaces, you'd especially want them not to be transformed by an in-network proxy. So if a transforming proxy "remembers" the sites that return 406, or 200 with "browser not supported", I guess it would have to remember the precise user agent that was not supported, rather than making the blanket assumption that the site was looking for a desktop user agent string.

Hmmm, I can see some more editorial work coming up to make that point clear.

Jo

On 02/10/2008 12:31, Rotan Hanrahan wrote:
> Hi Jo,
>
> Certainly. It's good to get a sense of the true picture on the Web regarding how user agents are handled. Regarding motivation b), such a response might be given in some security and access-control circumstances, and in the mobile space you may find it when you try to download incompatible ringtones/apps. (Of course, you shouldn't even be offered incompatible material.) Eliminating the security and resource-download use cases, you're left with the ordinary "pages", presumably in the usual HTML, XHTML, cHTML, MP, WML etc. formats. I can't recall any details regarding the prevalence of 406 errors when surfing for these ordinary pages, so the research from Francois in this area would be interesting.
>
> What I'm suggesting is that the figures you get on 406 errors can be influenced by the types of site you include in the survey. If you include a higher percentage of sites that offer ringtones, wallpaper, mp3s, apps etc., then you might see a significantly higher occurrence than if you just surfed at random.
>
> ---Rotan.
>
> -----Original Message-----
> From: Jo Rabin [mailto:jrabin@mtld.mobi]
> Sent: 02 October 2008 12:04
> To: Rotan Hanrahan
> Cc: public-bpwg-ct
> Subject: Re: Browsing the Web with a non-existing User-Agent
>
> Hi Rotan
>
> There are a couple of different things we are trying to establish: a) how many sites provide a different user experience based upon the content of the User-Agent, and b) how many sites respond with "Your browser is not supported" when faced with a User-Agent they don't recognise, rather than an unacceptable Accept configuration.
>
> We want to know b) because if the number is vanishingly small then there is a strong reason for the CT Guidelines to say *never* change the User-Agent string, but that it is permissible to change the Accept(-*) headers to avoid this kind of 406 response.
>
> If it is not true, then there is strong justification for saying "try with the unaltered User-Agent first, then try with a vanilla one", which is what the CT Guidelines draft says at present, and which several Last Call comments have pointed to.
>
> I'm aware of a number of sites that reject requests from IE, and a few that reject requests because the browser says it is not IE.
>
> On a) we want to know that, well, for pleasure, really: to know that the mantra of custom sites for different types of devices is making headway. The really important thing at the moment is to understand how prevalent 406 because of User-Agent is ...
>
> All best
> Jo
>
>
> On 02/10/2008 10:07, Rotan Hanrahan wrote:
>> Part of the reason for differentiating on User-Agent rather than Accept is (as most adaptation solution providers know) that you can't always trust the Accept header. The other part is that the Accept header doesn't tell you much about how the agent will present the content.
>> A clue to that is the User-Agent header, with which you can look up a repository of previously recorded device information. And if your repository is packed with device information, you might as well add in the details of what content types the device supports, so the Accept header becomes redundant.
>>
>> Of course, a good adaptation mechanism should be able to deal with completely unknown devices based solely on the Accept header, to at least deliver a "functional user experience". Thus, from the point of view of a client designer, there is good reason to include the User-Agent and Accept headers in the request. If only they'd stop saying they accept "*/*"!
>>
>> Our MIS adaptation technology, and that of other professional solutions, will gracefully degrade its response as the device evidence is constrained. Sites that use such technology will therefore not break when Francois arrives with his crazy browser configurations. But, as Jo says, there are plenty out there using home-brew or less-than-adequate solutions that can give unacceptable user experiences in these circumstances. Such circumstances are not always contrived, because we regularly observe unusual user agent behaviour with new devices on the market. The nature of our MIS device-handling process ensures that users of new devices will still get a good experience, and we get a little bit of time to make that experience perfect for the next product release/update.
>>
>> It would be interesting to see a report from Francois summarising the results of his survey, assuming the sample size is high enough. Though I'd advise making the tested sites anonymous, as they probably could do without the bad publicity :)
>>
>> ---Rotan.
>>
>> -----Original Message-----
>> From: public-bpwg-ct-request@w3.org [mailto:public-bpwg-ct-request@w3.org] On Behalf Of Jo Rabin
>> Sent: 01 October 2008 20:10
>> To: Francois Daoust
>> Cc: public-bpwg-ct
>> Subject: Re: Browsing the Web with a non-existing User-Agent
>>
>>
>> If you don't supply a User-Agent at all, a lot of sites break, according to some stuff I did a while ago.
>>
>> But yes, this is at the heart of what we are trying to establish. If, as a Content Provider, you do differentiate on User-Agent and not Accept, then that's interesting and that's what we are in the game to promote, I think. I'm sorry that it's not more prevalent in your sample, Francois.
>>
>> Jo
>>
>> On 01/10/2008 15:53, Francois Daoust wrote:
>>> I've been masquerading my User-Agent header lately to browse the Web, using a non-existing User-Agent with no link whatsoever to any existing one.
>>>
>>> I was expecting to see things break one way or the other, but the thing is I have had no real problems so far. I see a few sites that return an "application/vnd.wap.xhtml+xml" content type that is not recognized by my browser, but this is typically an indication that they have a mobile-optimized version, so not what I would consider to be a big problem.
>>>
>>> So I'm wondering: can anyone point out a few web sites that return a rejection when queried with a "weird" User-Agent? (Either through a 406 status, or through a 200 status code with a "sorry" message.) I suppose I'm only browsing modern Web sites, not "legacy" ones.
>>>
>>> Thanks,
>>> Francois.
>>>
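To make the behaviour under discussion concrete, here is a minimal sketch of the fallback Jo describes: try the request with the unaltered User-Agent first, fall back to a vanilla one only after the origin has rejected that exact user agent, and remember the rejection per (host, user agent) rather than per site. The vanilla UA string, the `fetch` helper and the 406-only rejection test are illustrative assumptions, not text from the CT Guidelines draft; a real transforming proxy would also need heuristics for 200 responses that merely say "browser not supported".

```python
# Sketch of "try the unaltered User-Agent first, then a vanilla one",
# with the rejection memory keyed on (host, user agent) rather than host alone.
import urllib.error
import urllib.parse
import urllib.request

VANILLA_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"  # placeholder desktop UA

# (host, user_agent) pairs that previously came back with 406.
rejected = set()


def fetch(url, original_ua):
    """Fetch url, preferring the original User-Agent over the vanilla one."""
    host = urllib.parse.urlsplit(url).hostname or ""

    def attempt(ua):
        req = urllib.request.Request(url, headers={"User-Agent": ua})
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.read()

    # Skip the first attempt only if this host rejected this exact UA before;
    # do not assume it rejects every non-desktop user agent.
    if (host, original_ua) not in rejected:
        try:
            return attempt(original_ua)
        except urllib.error.HTTPError as err:
            if err.code != 406:
                raise  # not a content-negotiation rejection, so pass it on
            rejected.add((host, original_ua))

    # Retry with the vanilla UA only after a recorded 406 for the original one.
    return attempt(VANILLA_UA)
```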
Received on Thursday, 2 October 2008 11:42:45 UTC