
Header Size? Was: Our Schedule

From: Greg Wilkins <gregw@intalio.com>
Date: Wed, 28 May 2014 11:53:54 +0200
Message-ID: <CAH_y2NGQjAneTMGQdR2bBniA=KOdXdos06TKKO_zhJLv4-T05w@mail.gmail.com>
To: HTTP Working Group <ietf-http-wg@w3.org>
On 27 May 2014 16:49, Tatsuhiro Tsujikawa <tatsuhiro.t@gmail.com> wrote:

>
>> Are there any uses **in practice on the public internet** for headers
>> longer than 64K as serialized by HTTP/1.1?
>>
>>
>  I don't have actual statistics on header lengths.  But server
> software has its own limits on header buffers.
> For example, HAProxy uses an 8K buffer for the entire request headers by
> default, and its manual says it is not recommended to change the value, so
> the limit is somewhere near that?
>

Jetty also imposes an 8K maximum header size on requests and responses.
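For concreteness, the kind of cap HAProxy and Jetty apply is just a bound on the serialized size of the header list. A minimal sketch, assuming the 8K default mentioned above (the function and constant names are illustrative, not any server's actual API):

```python
# Illustrative sketch of a per-request header cap, assuming the 8K default
# discussed above.  Sizing follows the HTTP/1.1 "name: value\r\n" form.
MAX_HEADER_BYTES = 8 * 1024  # hypothetical default, per the discussion

def check_header_size(headers):
    """Return True if the serialized header list fits within the cap.

    `headers` is a list of (name, value) pairs; each contributes its name,
    value, plus 4 bytes for ": " and CRLF.
    """
    total = sum(len(name) + len(value) + 4 for name, value in headers)
    return total <= MAX_HEADER_BYTES
```

A request with ordinary headers passes easily; a single 9000-byte cookie value would already be rejected.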

I've spent the past few days trying really hard to come up with some
constructive proposals to fix some of the aspects of the draft that I'm
concerned about.  However, all the issues I'm concerned about appear to be
well justified if one accepts the need for Infinite Streaming Headers
Encoded With Highly Order Dependent Compression With Shared State
(ISHEWHODCWSS).  So the good news is that the current draft appears to be
very internally consistent and does cover a lot of corner cases.  Credit
due!

But this is also a negative, as the design decisions are all tightly
coupled and there are a lot of corner cases.  Most of the coupling, if you
follow the chain, leads back to ISHEWHODCWSS.

Which is why this is so daft! We are tying ourselves in knots to implement
the complexities represented by CONTINUATION frames (and the payload they
can carry), when in practice most clients and servers will never ever see
a CONTINUATION frame in the wild.  A 16K frame of even moderately
compressed headers far exceeds the memory that most servers will be
prepared to commit to a stream.  While I accept that future applications
are likely to require more and more metadata, there is no reason that we
should make the header channel available for such transfers; such
applications are perfectly able to use streams, priorities or structure
within data frames to carry high-volume metadata.

The only time we are going to see CONTINUATION frames is in testing.
Perhaps some applications that do not have to work over the public network
will be able to stream large headers, but in the general case they won't
exist.

I really think support for ISHEWHODCWSS goes beyond **in practice on the
public internet**, and if we just dropped CONTINUATION frames from the
draft we would be so much better off:

+ Reduced complexity of END_STREAM handling and the resulting state
machine.
+ No infinite data channel that allows user data to evade flow control.
+ No DoS attack possible on shared connections by sending a HEADERS frame
without END_HEADERS and no following CONTINUATION.
+ Without the possibility of stretching an HPACK header set over two
frames, there are no HPACK ordering problems and thus no constraints on
interleaving.
+ No need for servers to wait for the last frame before seeing common
header fields (e.g. method).
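To make the interleaving point concrete: while a header block is open (a HEADERS frame without END_HEADERS), a receiver must reject every frame except a CONTINUATION on that same stream, because HPACK decoding is order-dependent across the whole connection. A minimal sketch of that bookkeeping, with frame and flag names taken from the draft but an otherwise illustrative class:

```python
# Sketch of the connection-level rule a receiver must enforce while a header
# block spans frames.  Frame-type and flag names follow the draft; the class
# itself is illustrative, not any real implementation.
HEADERS, CONTINUATION, DATA = "HEADERS", "CONTINUATION", "DATA"
END_HEADERS = 0x4

class ConnectionState:
    def __init__(self):
        # Stream id of an unfinished header block, or None.
        self.open_header_stream = None

    def on_frame(self, frame_type, stream_id, flags):
        """Return True if the frame is legal, False for a connection error."""
        if self.open_header_stream is not None:
            # Only a CONTINUATION on the same stream may follow.
            if frame_type != CONTINUATION or stream_id != self.open_header_stream:
                return False
            if flags & END_HEADERS:
                self.open_header_stream = None
            return True
        if frame_type == HEADERS and not (flags & END_HEADERS):
            self.open_header_stream = stream_id
        return True
```

Dropping CONTINUATION would delete this whole state: every HEADERS frame would carry END_HEADERS implicitly and interleaving would always be legal.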

I'm even considering making the first Jetty implementation respond with
size errors if it ever sees a CONTINUATION frame.  Even if the incomplete
HEADERS frame is small, it represents a reservation of server resources
that I'm not sure we want to commit to, as the following CONTINUATION
frame may be delayed or may never come!  It would probably fail all
interoperability tests, but work perfectly well in the wild.  That says
something about wasted effort!
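A sketch of that policy, assuming a hypothetical frame-dispatch hook (none of these names are Jetty's real API): the server simply refuses on sight of a CONTINUATION, rather than buffering an open-ended header block.

```python
# Illustrative sketch of the policy floated above: the mere appearance of a
# CONTINUATION frame is treated as a header-size violation, since any header
# block needing one exceeds what this server will buffer for a stream.
# All names here are hypothetical, not Jetty's actual API.

class HeaderTooLarge(Exception):
    pass

def on_continuation_frame(stream_id):
    # Refuse immediately instead of reserving resources for a header block
    # whose remaining frames may be delayed or never arrive.
    raise HeaderTooLarge(
        f"stream {stream_id}: header block exceeds single-frame limit")
```

In real HTTP/2 terms the refusal would surface as a stream or connection error rather than a Python exception; the point is only that no per-stream buffering state survives the frame.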

The requirement for ISHEWHODCWSS is something that needs to be included in
the FAQ and well explained, as it is responsible for a lot of the
dissatisfaction I see with the draft.

regards

-- 
Greg Wilkins <gregw@intalio.com>
http://eclipse.org/jetty HTTP, SPDY, Websocket server and client that scales
http://www.webtide.com  advice and support for jetty and cometd.
Received on Wednesday, 28 May 2014 09:54:23 UTC