On Tue, Jan 19, 2016 at 12:37 PM, youenn fablet <youennf@gmail.com> wrote:
> One question that is asked here is whether it is fine to remove support
> for multiple non-combinable headers from the fetch Headers API, on both
> the read and write sides.
>
Yes, sorry that wasn't clearer from how I phrased the question. I can't
think of any API reason to support multiple non-combinable headers (i.e.
headers that don't follow the #rule syntax), with the only exception being
the Set-Cookie bit.
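
To make the combining behavior concrete, here is a rough sketch of what
the combined model looks like through the Headers API (illustrative only,
and assuming get() returns the combined value):

  const headers = new Headers();
  headers.append("Accept-Language", "en");
  headers.append("Accept-Language", "fr");
  headers.get("Accept-Language"); // "en, fr" under the combined model
  // Set-Cookie is the odd one out; its values can't safely be joined
  // with commas like this.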
> That would mean that a web app could add multiple headers with the same
> name only if it is fine for them to be combined by the browser. Whether
> or not to put requirements on how browsers should serialize these headers
> is out of scope here.
>
Is the assumption that all new headers adhere to the #rule syntax, unless
otherwise blacklisted? That's certainly the approach Chrome has taken - see
https://code.google.com/p/chromium/codesearch#chromium/src/net/http/http_util.cc&l=393
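
Roughly, the idea is something like the following sketch (the list below
is purely illustrative, not Chrome's actual blacklist):

  // Treat every header as #rule-combinable unless it is on a known
  // non-coalescing list.
  const NON_COALESCING = new Set([
    "set-cookie",
    "www-authenticate",
    "proxy-authenticate",
  ]);

  function canCoalesce(name: string): boolean {
    return !NON_COALESCING.has(name.toLowerCase());
  }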
> Another sub-point is that, as it currently stands, the fetch Headers API
> could potentially make the buggy "Location" header behavior more common.
> Headers::get is indeed returning the value of the first header with a
> given name.
> Based on what Mozilla is doing and on how WebKit processes headers
> internally, having a get-as-combined that returns a string and a
> get-as-uncombined that returns a vector of strings might be handy.
>
Agreed.
As for Chrome, this is what Chrome's internal (C++) representation already
does. That is, the setter automatically coalesces multiple headers with the
same name into a comma-separated list (presuming #rule syntax). The getter
side is more complex - we have two getters, one that returns everything
coalesced and another that allows iterating through each instance of the
header. The recommended form is the all-coalesced version, UNLESS the caller
is specifically attempting to validate that multiple instances have not
occurred (e.g. Strict-Transport-Security).
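
In TypeScript-ish pseudocode, the shape of that is roughly the following
(a sketch with made-up names, not the actual Chrome code):

  class HeaderStore {
    private entries: Array<[string, string]> = [];

    append(name: string, value: string): void {
      this.entries.push([name.toLowerCase(), value]);
    }

    // Combined form, assuming #rule syntax.
    getCombined(name: string): string | null {
      const values = this.getAll(name);
      return values.length ? values.join(", ") : null;
    }

    // Uncombined form: one entry per instance of the header.
    getAll(name: string): string[] {
      const key = name.toLowerCase();
      return this.entries.filter(([n]) => n === key).map(([, v]) => v);
    }
  }

  // e.g. a caller could reject responses where
  // store.getAll("strict-transport-security").length > 1.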