- From: Robert Collins <robertc@robertcollins.net>
- Date: Tue, 30 Sep 2014 10:17:14 +1300
- To: HTTP Working Group <ietf-http-wg@w3.org>
When reading the websockets draft, it occurred to me that perhaps I misunderstand SETTINGS. AIUI it is peer-to-peer, not end-to-end. That is:

    Client <-> intermediary <-> server
      A             B             C

Settings are exchanged between A and B, and between B and C, but never between A and C, and settings from C are not propagated to A.

Further, because A doesn't know for any particular stream whether it will traverse B-C or perhaps B-D (where D might be a load-balanced secondary IP for the same origin as C, or a new ALT-SVC being warmed up), there is no way in the HTTP/2 model to describe the B-C/B-D capabilities to A. Case in point: if C is capable of e.g. websockets, and so is B, we don't know whether D is. So B can't hide its websocket readiness until it sees that C is ready too - that's conflating stream capabilities with peer capabilities?

Perhaps in section 6.5, after:

    "SETTINGS parameters are not negotiated; they describe characteristics of the sending peer, which are used by the receiving peer. Different values for the same parameter can be advertised by each peer. For example, a client might set a high initial flow control window, whereas a server might set a lower value to conserve resources."

we could add:

    "SETTINGS frames MUST NOT be forwarded - they are solely used to negotiate the characteristics of a single connection between two endpoints."

It also raises questions in my mind about SETTINGS_ENABLE_PUSH. Given Client1 and Client2 both connected through Proxy to Server: if Client1 has SETTINGS_ENABLE_PUSH=1, Client2 has SETTINGS_ENABLE_PUSH=0, and the Proxy has SETTINGS_ENABLE_PUSH=1, then the server may push resources for both clients, and the proxy will have to cancel the pushed streams when they are for Client2? That is surely doable, but it seems a little wasteful. I wonder if we can do better?
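To make the hop-by-hop point concrete, here's a rough sketch (the names are purely illustrative, not taken from any draft or implementation) of how B has to treat SETTINGS: one table per connection, and nothing learned on B<->C is ever replayed towards A:

    # Illustrative model of SETTINGS state at an intermediary (B).
    # Each connection keeps its own settings tables; a SETTINGS frame
    # received on the B<->C connection only updates that connection,
    # and is never forwarded on A<->B.

    class Http2Connection:
        def __init__(self, name):
            self.name = name
            self.local_settings = {}    # what we advertised to this peer
            self.remote_settings = {}   # what this peer advertised to us

        def receive_settings(self, params):
            self.remote_settings.update(params)

    class Intermediary:
        def __init__(self):
            self.client_conn = Http2Connection("A<->B")   # towards A
            self.origin_conn = Http2Connection("B<->C")   # towards C

    b = Intermediary()
    b.origin_conn.receive_settings({"SETTINGS_ENABLE_PUSH": 1})
    # A has learned nothing about C:
    assert "SETTINGS_ENABLE_PUSH" not in b.client_conn.remote_settings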
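And for the SETTINGS_ENABLE_PUSH case above, a rough sketch of what the proxy ends up doing (again purely illustrative): because it advertised ENABLE_PUSH=1 to the server, pushes can arrive for either client, and the ones destined for Client2 just get reset:

    # Illustrative proxy-side handling of a PUSH_PROMISE from the server.
    # The proxy advertised SETTINGS_ENABLE_PUSH=1 upstream, so the server
    # may push for either client; pushes for a client that sent
    # SETTINGS_ENABLE_PUSH=0 have to be thrown away with RST_STREAM.

    CANCEL = 0x8  # RST_STREAM error code

    client_enable_push = {"client1": 1, "client2": 0}  # per-client SETTINGS seen by the proxy

    def on_push_promise(client, promised_stream_id, forward_push, send_rst_stream):
        if client_enable_push[client]:
            forward_push(client, promised_stream_id)      # Client1: relay the push
        else:
            send_rst_stream(promised_stream_id, CANCEL)   # Client2: cancel; the work is wasted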
-Rob

--
Robert Collins <rbtcollins@hp.com>
Distinguished Technologist
HP Converged Cloud

Received on Monday, 29 September 2014 21:17:43 UTC