Rob,
But the server I write is used both for blogs/emails and for banking, so I
don't know what level of flexibility the applications running on top of it
want.
I appreciate that the WHATWG is trying to bring some common ground to how
things fail as well as to how they work. That is a great goal. But
unfortunately their approach is to put the best practices they develop
into a non-versioned living document that is not universally adopted by the
browsers anyway. This may reflect the reality of how they continuously
develop their browsers, but it is a really poor way to communicate with
the rest of the industry in a manner that encourages testable
interoperability. I have no idea how to incorporate that into a sensible
release strategy for a server that is meant to work with a broad range of
browser implementations, and versions of those implementations.
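To make the interoperability gap concrete: as I understand the WHATWG living
standard, browsers treat "\" like "/" for special schemes, so something like
"http:\\example.com\path" still parses with a host. An RFC 3986-leaning
parser sees no authority at all. A quick sketch, using Python's urllib.parse
purely as a convenient stand-in for the stricter side:

```python
from urllib.parse import urlsplit

# WHATWG-conformant parsers (what browsers implement) treat "\" as "/"
# for special schemes, so "http:\\example.com\path" yields host
# "example.com". Python's urlsplit, which leans on RFC 3986, only
# recognizes an authority after a literal "//", so it finds no host:
parts = urlsplit("http:\\\\example.com\\path")
print(parts.netloc)  # "" -- no host recognized
print(parts.path)    # "\\example.com\path" -- backslashes left in the path
```

Two parsers, one input, two different resources — exactly the kind of
divergence a versioned spec would let us test against.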
Why can't they, every year (or two), take the best practices they have
developed in their living document and use them to author an RFC on how to
handle invalid URLs that obsoletes the previous RFC on the subject? That
would clearly communicate the collective behaviour of browsers to the rest
of the industry.
I'm very happy to update our invalid URL handling, but give me a versioned
spec to work from, not a moving target!
cheers
On Wed, 9 Oct 2024 at 08:35, Rob Sayre <sayrer@gmail.com> wrote:
> Carsten Bormann <cabo@tzi.org> wrote:
> >> And slowly things fall apart.
> >
> > That is exactly the phenomenon that is called “Protocol Decay” in RFC
> 9413.
>
> I don't think that's quite what's meant here. I think the idea is that any
> byte sequence has a result, and there are no errors on a syntactic level.
> There can still be 404s and whatnot, but it's meant to be predictable. So,
> using the WHATWG approach, you get a result no matter what, for the most
> part.
>
> It's not an approach I would use for banking, but it's OK for blogs and
> emails.
>
> This choice isn't binary, either. I think most web things try to interpret
> bogus content in the same way (thus it is not bogus...), but they do have
> limits.
>
> thanks,
> Rob
--
Greg Wilkins <gregw@webtide.com> CTO http://webtide.com