Re: Declarative HTTP Spec Test Suite

On Mon, May 27, 2024 at 11:13:05PM +0000, Mohammed Al Sahaf wrote:
> > It is not really in curl's interest to reach a highscore in a compliance test
> > if that means that it interoperates less good. curl is not a HTTP compliance
> > meter, it is an internet transfer tool and library.
> > 
> > This said, we of course want to comply as far as possible, but whenever there
> > is a fork in the road, the decision might not always be to go with the
> > strictest language in the most recent RFC. I'm also sure we can find downright
> > bugs or just protocol silliness even in curl's implementation.
> >
> > Also, as has been discussed numerous times: the HTTP RFCs mostly describe how
> > things should work and how to behave, not how to act when the other side does
> > the wrong thing or how to fail etc.
> 
> Clear, and that's sensible and pragmatic. The goal is to find where the
> implementations disagree, identify the gaps, and perhaps retrofit the fixes
> into the protocol definition. I believe it'll be valuable to know the
> situations of "This is where X differs from {the rest, RFC}". Maybe it's a
> bug in the implementation. Maybe it's not an oversight, rather the devs have
> a good reason for sidestepping that part of the spec. Maybe it's a bug in the
> spec. What I hope to achieve with this is to shine a spotlight on the various
> implementations to find those dark corners.

In practice, for most of the multi-decade-old implementations, there are
users who take a long time to fix their local components, so non-compliance
is often dictated by usage. A lot of the rules in the specs come from
feedback from such implementations, which are at least seeking a new rule
in the spec so they have something to point to the next time a user insists
that the tool is wrong (and that happens a lot). It can take 5 to 10 years
to deprecate a non-compliant behavior, and I'm sure that most of the
deviations from the spec exist because one user asked for less strictness
to accommodate their old application.

There is indeed the rest: bugs and/or accidental non-compliance. But we're
all extremely cautious when trying to fix those, because whatever works by
accident might be a feature for some users. For example, I'm pretty sure
that if you find a non-compliant behavior in haproxy and one in varnish
where the two disagree, it will be very hard to reconcile them, because
each one will either have a different history justifying its existence or
some suspicion that some user might rely on it.

There are lots of things that should "just work" but that are based on
common sense rather than being explicitly written in the spec, because
they derive from more general rules. Generally, what should "just work"
does work fine for common components that have been exposed to the net
for a decade or more, and there's limited willingness to break something
that is not compliant but used to work. So it seems to me that the value
for the working group would be limited anyway; it's up to each
implementation to decide how to act on non-compliant behaviors.

With that said, having a common collection of tests to run on compatible
implementations can be useful; it will simply not be easy for everyone to
adopt. Also, testing clients is very difficult: unlike servers, which just
have to respond to solicitations, a client requires someone (or something)
to drive it to issue the desired requests, so the approach is different,
and different between the various clients. Because of this, I'm not
convinced that a single test collection would work for all implementations.
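To make the server-side case concrete, here is a minimal, hypothetical
sketch (not the format proposed in this thread): a test case expressed as
raw request bytes plus an expectation on the response, replayed against
whatever implementation listens on a port. The test-case shape, field
names, and the example scenario are assumptions for illustration only.

#!/usr/bin/env python3
# Hypothetical sketch of a declarative server-side test runner.
# The case shape (raw request bytes + expected status codes) is an
# assumption for illustration, not the proposed test-suite format.
import socket

# One declarative case: what to send, what to accept in return.
CASE = {
    "name": "obs-fold in a header field",
    "request": (
        b"GET / HTTP/1.1\r\n"
        b"Host: example.test\r\n"
        b"X-Test: first\r\n"
        b" folded-continuation\r\n"   # obsolete line folding
        b"\r\n"
    ),
    # Either unfolding (2xx) or rejection (400) could be acceptable here;
    # the exact expectation would be up to the suite's authors.
    "expect_status_in": {200, 400},
}

def run_case(host, port, case):
    """Send the raw request and check the status line against the case."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(case["request"])
        data = b""
        # Read just enough of the response to see the status line.
        while b"\r\n" not in data and len(data) < 4096:
            chunk = sock.recv(1024)
            if not chunk:
                break
            data += chunk
    status_line = data.split(b"\r\n", 1)[0].decode("latin-1", "replace")
    parts = status_line.split()
    status = int(parts[1]) if len(parts) >= 2 and parts[1].isdigit() else None
    ok = status in case["expect_status_in"]
    print(f"{'PASS' if ok else 'FAIL'}: {case['name']} (got {status_line!r})")
    return ok

if __name__ == "__main__":
    # Point this at the implementation under test (host/port are examples).
    run_case("127.0.0.1", 8080, CASE)

A client, by contrast, needs a harness on both sides: a mock server to
produce the scenario, plus some per-client mechanism to make the client
issue the right request, which is exactly where the approaches diverge.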

Regards,
Willy
