
RE: ERR header (NEW ISSUE: Drop Content-Location)

From: Henrik Nordstrom <henrik@henriknordstrom.net>
Date: Tue, 05 Dec 2006 07:57:37 +0100
To: Joris Dobbelsteen <Joris@familiedobbelsteen.nl>
Cc: Julian Reschke <julian.reschke@gmx.de>, Henry Story <henry.story@bblfish.net>, ietf-http-wg@w3.org
Message-Id: <1165301857.8462.21.camel@henriknordstrom.net>
On Sun 2006-12-03 at 21:51 +0100, Joris Dobbelsteen wrote:

> I believe we are talking about a technical solution for a problem caused
> by people not being aware of the havoc they cause (or being just plain
> ignorant, which is best not to assume).

Same here.

> The implementors screw up, the administrators get the effects. I'm
> wondering whether this would actually produce results and get people to
> act, rather than just annoying them into building filters to protect
> against it.

Quite likely to happen. In fact, it already happens, with filters being
deployed in front of web servers restricting which methods are allowed
to reach the web server.

> My belief is that validation tools (single reference) would produce
> better results, provided that they are used by the development/test/QA
> teams. In such situations the people who develop the product actually
> care about compliance, instead of just advertising it as such.

Agreed.

> This leads to my last thought: how well are browsers able to
> (automatically) detect incorrect behaviour? If a validation tool can do
> it, so can they. If they can't do it, is a validation tool capable of
> doing so?

Depends on the error.

404s and similar errors they can detect very easily, but so can the
server error logs, without any additional reporting.
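To illustrate why no additional reporting channel is needed for this
class of error: a few lines of scripting over an ordinary access log
already surface every 404 the server handed out. A minimal sketch,
assuming Common Log Format; the sample lines and the helper name are
hypothetical, not anything from this thread:

```python
import re
from collections import Counter

# Hypothetical sample access-log lines in Common Log Format.
LOG_LINES = [
    '192.0.2.1 - - [05/Dec/2006:07:57:37 +0100] "GET /missing.html HTTP/1.1" 404 209',
    '192.0.2.1 - - [05/Dec/2006:07:57:40 +0100] "GET /index.html HTTP/1.1" 200 1043',
    '192.0.2.2 - - [05/Dec/2006:07:58:01 +0100] "GET /missing.html HTTP/1.1" 404 209',
]

# Match the quoted request line followed by the 3-digit status code.
CLF = re.compile(r'"(?P<request>[^"]*)" (?P<status>\d{3}) ')

def not_found_counts(lines):
    """Count requests that the server answered with 404."""
    hits = Counter()
    for line in lines:
        m = CLF.search(line)
        if m and m.group("status") == "404":
            hits[m.group("request")] += 1
    return hits

print(not_found_counts(LOG_LINES))
```

The same one-pass scan works for any server-visible status class, which
is exactly why browser-side reporting adds nothing for these errors.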

Significant HTML coding errors are easy to detect, and sometimes worth
reporting.

A bad document base (which started this discussion) is not so easy if it
points outside the site. Well, sometimes it's easy, e.g. when getting an
authoritative "host not found" for the hostname of the base URI. In the
worst case a timeout is needed. If it's inside the same site, then the
existing error reporting is already sufficient.
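The cheap-to-detect case above can be sketched as a DNS check on the
hostname of the base URI. This is a hypothetical helper for
illustration, not code from the thread; an NXDOMAIN shows up as a fast
resolver error, while the hard cases (reachable-but-wrong hosts, slow
resolvers) would still need a timeout:

```python
import socket
from urllib.parse import urlsplit

def base_uri_host_resolves(base_uri):
    """Return True if the hostname in the base URI resolves in DNS."""
    host = urlsplit(base_uri).hostname
    if host is None:
        return False  # relative base, or no host component at all
    try:
        socket.getaddrinfo(host, None)
        return True
    except socket.gaierror:
        # An authoritative "host not found" lands here quickly;
        # this is the easy-to-detect case.
        return False

# Hostnames under the reserved .invalid TLD never resolve.
print(base_uri_host_resolves("http://example.invalid/docs/"))
```

Note that a successful lookup proves nothing about whether the base URI
is the one the author intended, which is why the rest of the problem
stays hard to automate.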

> My reasoning: if they were capable, they would have built mechanisms to
> protect themselves against broken systems and made the functionality
> available to others.
> In fact, is it even possible to detect this problem using automated
> systems, or would it require a human to find out?

It's possible for many errors. But I still do not think it's worth the
effort and the problems involved in getting the scheme to work.

Quite likely, automatic reporting from the browsers is not desirable,
even though they are the ones most likely to find problems. This is to
avoid the system backfiring on itself when it's the browsers that are
broken or abusive.

This also goes hand in hand with the fact that in most cases the ones
receiving the reports will need quite a bit of additional guidance on
why it's an error and why they should care about it.

And webmaster@ is already a standards-track method of contacting the
human responsible for a web site. Completely ignored by very many, but
still standards track.


With systematic errors the most important step is to properly identify
the source of the error, not the error as such. The people responsible
for the web sites often do not know the low-level details, or care very
much, as long as it works in the one or two major browsers. Quite often
there is a vendor involved, and by making the vendor aware of the
problem it is possible to get it fixed properly on a longer time scale,
either by having the vendor's product fixed, or by having them explain
the problem in suitable documentation, raising awareness of why it is a
problem and why it should be avoided.

Regards
Henrik




Received on Tuesday, 5 December 2006 06:57:58 GMT
