Re: Should Web Services be served by a different HTTP n+1?

Sorry to add to what I was saying:

The worst part is the high latency, especially given TCP's current
congestion avoidance implementations -- the total number of round trips ends
up dominating latency, regardless of how much bandwidth one has.
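
As a rough back-of-the-envelope sketch (the header sizes, initial congestion
window, and RTT below are illustrative assumptions, and the 100 requests'
headers are treated as one upstream byte stream on a fresh connection):

    import math

    def slow_start_rtts(total_bytes, mss=1460, init_cwnd=10):
        # Rough count of round trips TCP slow start needs to deliver
        # total_bytes, assuming the congestion window doubles each RTT.
        segments = math.ceil(total_bytes / mss)
        sent, cwnd, rtts = 0, init_cwnd, 0
        while sent < segments:
            sent += cwnd
            cwnd *= 2
            rtts += 1
        return rtts

    # ~100 resources at ~800 bytes of uncompressed request headers each
    header_bytes = 100 * 800
    rtt_ms = 200  # a plausible high-latency mobile round-trip time

    rtts = slow_start_rtts(header_bytes)
    print(f"{rtts} round trips ~= {rtts * rtt_ms} ms before bandwidth even matters")

With those assumptions the headers alone cost three round trips (~600 ms),
and that cost scales with RTT, not with link bandwidth.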

-=R


On Thu, Jan 24, 2013 at 2:46 PM, Roberto Peon <grmocg@gmail.com> wrote:

> Well, the web has changed a lot since its inception, when pages were mostly
> a single resource. Now we're fetching nearly 100 resources per page, and
> myriad people make money from it.
> I think that the decision to ignore compression in the beginning was a
> correct one. Now, however, the headers are imposing unacceptable latency
> on page loads, especially on the devices which are becoming the most
> prevalent (mobile devices with relatively high latency and low bandwidth).
>
> I'm still confused about how what has been proposed would force Web
> Services to fork.
> Placing myself in your and Nico's imagined shoes, it still sounds like
> forking would take more effort than taking someone's compression library
> and hooking it in as appropriate, especially given that the overall memory
> and CPU cost of doing so would be between negligible and zero.
>
> Are you worried about devices with extremely limited code-space/memory?
>
> -=R
>
>
> On Thu, Jan 24, 2013 at 2:36 PM, Phillip Hallam-Baker <hallam@gmail.com> wrote:
>
>> So don't do header compression; do a tokenization approach that does not
>> cause Web Services to fork.
>>
>> If Web Services can't use 2.0 then you have forked the protocol whether
>> they do that by continuing to use 1.1 or by developing a new protocol.
>>
>> Not that it would make any difference in practice, since if HTTP/2.0 does
>> not support Web Services in a sane fashion, there will eventually be a 2.0
>> for Web Services.
>>
>>
>> We did know about compression libraries back in '92. Latency was a much
>> bigger concern back in those days, when the whole of CERN was hanging off
>> not much more than a T1 and we had 10base Ethernet.
>>
>> The idea of header compression was rejected back then as a silly
>> optimization and I really can't understand why anyone thinks the situation
>> has changed to make it less silly.
>>
>> On Thu, Jan 24, 2013 at 5:23 PM, Roberto Peon <grmocg@gmail.com> wrote:
>>
>>> That is the rub -- this forces complexity into every web application by
>>> forcing developers to write contingency and error-handling code for each
>>> potentially optional parameter.
>>> ... essentially, since people cannot rely upon it, they don't use it.
>>> This happens today with HTTP/1 and it really sucks.
>>>
>>> This doesn't seem like a good tradeoff when people who don't want these
>>> things or the latency benefit can simply fall back to HTTP/1.
>>>
>>> -=R
>>>
>>>
>>> On Thu, Jan 24, 2013 at 2:19 PM, Yoav Nir <ynir@checkpoint.com> wrote:
>>>
>>>>  It might end up smaller than what you need for an HTTP/1 client. But
>>>> that also allows us to implement just one protocol on the server for both
>>>> full-capability and minimal clients. Similarly for full-capability clients
>>>> working with minimal servers.
>>>>
>>>>  On Jan 25, 2013, at 12:08 AM, Roberto Peon <grmocg@gmail.com> wrote:
>>>>
>>>>  So... why would someone who didn't want these things use HTTP/2
>>>> instead of HTTP/1?
>>>>
>>>>  -=R
>>>>
>>>>
>>>> On Thu, Jan 24, 2013 at 2:03 PM, Yoav Nir <ynir@checkpoint.com> wrote:
>>>>
>>>>>
>>>>> On Jan 24, 2013, at 9:01 PM, Nico Williams <nico@cryptonector.com>
>>>>> wrote:
>>>>>
>>>>> > On Thu, Jan 24, 2013 at 12:41 PM, William Chan (陈智昌)
>>>>> > <willchan@chromium.org> wrote:
>>>>> >>> The main one is that the receiver has to have enough memory to
>>>>> >>> store the dictionary.
>>>>> >>
>>>>> >> I think this boils down to the argument on the other thread. Do the
>>>>> >> gains for keeping state outweigh the costs? Note that given Roberto's
>>>>> >> delta compression proposal, the sender can disable compression
>>>>> >> entirely, so the receiver does not need to maintain state. Browsers
>>>>> >> probably would not do this, due to our desire to optimize for web
>>>>> >> browsing speed. For web services where you control the client, you
>>>>> >> indeed would be able to disable compression.
>>>>> >
>>>>> > IMO we need stateful compression to be absolutely optional to
>>>>> > implement.  (If we choose to go with stateful compression in the first
>>>>> > place.  I think we shouldn't.)
>>>>>
>>>>>  I think we need to do a little more. I think we should define a
>>>>> "minimal implementation" and have a way for client and server to signal
>>>>> this. A minimal implementation would be unable to do some or all of these:
>>>>>  - compression
>>>>>  - server-initiated streams
>>>>>  - stream priority
>>>>>  - credentials
>>>>>  - all but a small set of headers.
>>>>>  - multiple concurrent streams
>>>>>
>>>>> Maybe we need a CAPABILITIES control frame that will allow client or
>>>>> server to communicate to the other what capabilities they don't have.
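>>>>>
>>>>> As a purely illustrative sketch of that idea (the flag names and values
>>>>> below are invented here, not proposal text), the frame body could be
>>>>> little more than a bitmask of the features a peer does NOT support:
>>>>>
>>>>>     # Hypothetical CAPABILITIES frame payload: one 32-bit flag word.
>>>>>     NO_COMPRESSION     = 0x01  # cannot decode compressed headers
>>>>>     NO_SERVER_PUSH     = 0x02  # no server-initiated streams
>>>>>     NO_PRIORITY        = 0x04  # ignores stream priority
>>>>>     NO_CREDENTIALS     = 0x08  # no credentials support
>>>>>     SINGLE_STREAM_ONLY = 0x10  # one stream at a time
>>>>>
>>>>>     def minimal_client_flags() -> int:
>>>>>         # What a truly minimal, HTTP/1.0-style client might advertise.
>>>>>         return (NO_COMPRESSION | NO_SERVER_PUSH | NO_PRIORITY
>>>>>                 | NO_CREDENTIALS | SINGLE_STREAM_ONLY)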
>>>>>
>>>>> A truly minimal client would be capable of one stream at a time -
>>>>> really down to HTTP/1.0 functionality with the new syntax.
>>>>>
>>>>> Would this allow Phillip to use HTTP/2 for minimalist web services?
>>>>>
>>>>> Yoav
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>
>>
>> --
>> Website: http://hallambaker.com/
>>
>
>

Received on Thursday, 24 January 2013 22:54:12 UTC