- From: Anne van Kesteren <annevk@opera.com>
- Date: Fri, 05 Feb 2010 17:22:12 +0100
- To: "Julian Reschke" <julian.reschke@gmx.de>
- Cc: "HTTP Working Group" <ietf-http-wg@w3.org>, "Mark Nottingham" <mnot@mnot.net>
On Fri, 05 Feb 2010 17:17:03 +0100, Julian Reschke <julian.reschke@gmx.de> wrote:
> Anne van Kesteren wrote:
>> On Fri, 05 Feb 2010 16:59:43 +0100, Julian Reschke
>> <julian.reschke@gmx.de> wrote:
>>> Anne van Kesteren wrote:
>>>> Do many use a generic parser?
>>>
>>> That's a good question.
>>>
>>> Even if it's not the case today it would be cool if it could be done
>>> at least for new stuff.
>>
>> Why exactly?
>
> So that you don't need to come up with a new parser for each of them?
>
> (Am I missing something here?)

Well, I don't really see the drawback in allowing more bytes by default. It seems that you always need a specific parser at some point, except for headers that take fixed token values, and for those being more lenient is not an issue.

Therefore I was wondering whether the concept of a generic parser is even used/needed in implementations today. Or maybe they have such a concept, but it is already far more lenient, so it can also cope with e.g. Link and Cookie-related headers. And maybe Authorization? And custom headers set through setRequestHeader(), perchance? Should setRequestHeader() impose less strict requirements than it does now?

-- 
Anne van Kesteren
http://annevankesteren.nl/
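
[Editor's illustration, not part of the original message: a minimal TypeScript sketch of the "lenient by default" idea Anne describes, i.e. a generic field-value check that only rejects bytes which would break message framing and leaves header-specific parsing (Link, Cookie, Authorization, ...) to dedicated code. The function name and examples are hypothetical.]

```typescript
// Hypothetical lenient "generic" check: accept any character except NUL, CR,
// and LF. Anything stricter is left to the per-header parser, if one exists.
function isLenientFieldValue(value: string): boolean {
  for (let i = 0; i < value.length; i++) {
    const c = value.charCodeAt(i);
    if (c === 0x00 || c === 0x0a || c === 0x0d) {
      return false; // NUL, LF, and CR would break header/message framing
    }
  }
  return true;
}

// A Cookie-style value with ';' and '=' passes the generic check; splitting
// it into name/value pairs still requires a Cookie-specific parser.
console.log(isLenientFieldValue("SID=31d4d96e407aad42; lang=en-US")); // true
console.log(isLenientFieldValue("bad\r\nvalue"));                     // false
```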
Received on Friday, 5 February 2010 16:22:51 UTC