RE: ERR header (NEW ISSUE: Drop Content-Location)

On Sun 2006-12-03 at 21:51 +0100, Joris Dobbelsteen wrote:

> I believe we are talking about a technical solution for a problem caused
> by people not being aware of the havoc caused (or who are just plain
> ignorant, which is best not to assume).

Same here.

> The implementors screw up, the administrators get the effects. I'm
> wondering whether this would actually produce results and get people to
> start acting, rather than getting so annoyed that they build filters to
> protect against it.

Quite likely to happen. Well, it actually already happens, with filters
being deployed in front of webservers restricting which methods are
allowed to reach the web server.
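
Something along these lines, for example (an untested Python/WSGI sketch,
purely to illustrate the idea; the allowed-method list is made up):

    # Hypothetical sketch: a WSGI middleware that rejects any request
    # method outside an allowed set before it reaches the application.
    ALLOWED_METHODS = {"GET", "HEAD", "POST"}

    def method_filter(app):
        def filtered(environ, start_response):
            if environ.get("REQUEST_METHOD") not in ALLOWED_METHODS:
                start_response("405 Method Not Allowed",
                               [("Allow", ", ".join(sorted(ALLOWED_METHODS))),
                                ("Content-Type", "text/plain")])
                return [b"Method not allowed\n"]
            return app(environ, start_response)
        return filtered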

> My belief is that validation tools (single reference) would produce
> better results, provided that they are used by the development/test/QA
> teams. In such situations the people that develop the product actually
> care about compliance, instead of just advertising it as such.

Agreed.

> This leads to my last thought: how well are browsers able to
> (automatically) detect incorrect behaviour? If a validation tool can do
> it, so can they. If they can't do it, is a validation tool capable of
> doing so?

Depends on the error.

404s and the like they can detect very easily, but so do the server
error logs, without any additional reporting.
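
For instance, a rough sketch (assuming the common NCSA/combined access
log format; nothing server-specific):

    # Rough sketch: count 404 responses per URL from an access log in
    # the common NCSA/combined format (status is the 9th field when
    # splitting the line on whitespace, the request URL the 7th).
    from collections import Counter

    def count_404s(logfile):
        hits = Counter()
        with open(logfile) as log:
            for line in log:
                fields = line.split()
                if len(fields) > 8 and fields[8] == "404":
                    hits[fields[6]] += 1
        return hits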

Significant HTML coding errors are easy to detect, and sometimes worth reporting.

A bad document base (which started this discussion) is not so easy if it
points outside the site. Well, sometimes it's easy, e.g. when getting an
authoritative "host not found" on the hostname of the base URI. In the
worst case a timeout is needed. If it's inside the same site then the
existing error reporting is already sufficient.
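
Something like this could handle the DNS part (an untested Python sketch;
the EAI_* constants are platform-dependent):

    # Rough sketch: classify the hostname of a base URI.  An authoritative
    # "host not found" (EAI_NONAME) is a clear error; a temporary lookup
    # failure (EAI_AGAIN) needs a retry and may end in a timeout.
    import socket
    from urllib.parse import urlsplit

    def check_base_host(base_uri):
        host = urlsplit(base_uri).hostname
        if not host:
            return "no hostname in base URI"
        try:
            socket.getaddrinfo(host, None)
            return "resolves"
        except socket.gaierror as e:
            if e.errno == socket.EAI_NONAME:
                return "host not found (authoritative)"
            return "lookup failed (possibly temporary): %s" % e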

> My reasoning: if they were capable, they would have built mechanisms to
> protect themselves against broken systems and made the functionality
> available to others.
> In fact, is it even possible to detect this problem using automated
> systems, or would it require a human to find out?

It's possible for many errors. But I still do not think it's worth the
effort and the problems involved in getting the scheme to work.

Quite likely automatic reporting from the browsers is not desirable,
even though they are the ones most likely to find problems. This is to
avoid the system backfiring on itself when it's the browsers that are
broken or abusive.

This also goes hand in hand with the fact that in most cases the ones
receiving the reports will need quite a bit of additional guidance on
why it's an error and why they should care about it.

And webmaster@ is already a standards-track method of contacting the
human responsible for a web site. Completely ignored by very many, but
still standards track.


With systematic errors the most important step is to properly identify
the source of the error, not the error as such. The ones responsible for
the web sites often do not know the low-level details, or care very much
as long as it works in the one or two major browsers. Quite often there
is a vendor involved, and by making the vendor aware of the problem it is
possible to get it fixed properly on a longer time scale, either by
having the vendor's product fixed, or by having them explain the problem
in suitable documentation, raising awareness of why it is a problem and
why it should be avoided.

Regards
Henrik

Received on Tuesday, 5 December 2006 06:57:58 UTC