
Re: Straw-man for our next charter

From: Mark Nottingham <mnot@mnot.net>
Date: Sun, 29 Jul 2012 12:25:48 -0700
Cc: "ietf-http-wg@w3.org" <ietf-http-wg@w3.org>
Message-Id: <C1CCC1BC-654D-49C4-9E9E-708D0458DBA4@mnot.net>
To: Larry Masinter <masinter@adobe.com>

On 27/07/2012, at 9:34 PM, Larry Masinter <masinter@adobe.com> wrote:

> I'd suggest the HTTP/2.0 work would be best served by first focusing on documenting "substantial and measurable improvement".
> I suggest the charter not be extended to accept proposals and start on development of the protocol itself until this is done.
> 
> There are conflicting characterizations of SPDY performance results (the results themselves aren't in conflict so much as the question of whether they can be summarized as "better").
>    http://www.ietf.org/proceedings/83/slides/slides-83-httpbis-3.pdf said "Google recently reported that SPDY over SSL is now faster than HTTP without SSL" and "BoostEdge paper confirms Google numbers", which I'd take to mean that it's always faster.
> 
>   but http://www.guypo.com/technical/not-as-spdy-as-you-thought/ argues it isn't as SPDY as you (someone) thought, noting many circumstances where it didn't help.
> 
> Unless there's agreement on what "improvement" means, it's hard even to discuss how well individual features "improve" the protocol.

The short answer is that we'll know that it's good enough when we have rough consensus that it's good enough. 

Now, if someone wants to propose a set of measurements that are openly defined with an open implementation that anyone can run repeatably, we can try to get consensus around including those as exit criteria. 

However, no one has yet produced them, and I'm wary of doing that kind of work post-charter; we'll get caught up in an endless debate.

Hence, rough consensus.


> What can site administrators depend on, expect from a SPDY upgrade? 
> 
> I'm still concerned about the head-of-line blocking that comes from multiplexing and doing flow control at two levels of the protocol stack. It seems intrinsically likely to be a problem, and the only real solution is proper management and control of the performance of all of the sources of data ("scheduling"). I think first getting agreement on why that isn't a problem here (or at least, on how to minimize it) would help, too.

Proving a negative is notoriously difficult; can you prove it *is* a problem? I think this needs discussion, but the easiest way to move it forward is to demonstrate the issue.

Regards,
 

--
Mark Nottingham
http://www.mnot.net/
Received on Sunday, 29 July 2012 19:26:11 GMT
