- From: Wesley Oliver <wesley.olis@gmail.com>
- Date: Thu, 4 Jul 2019 09:05:17 +0200
- To: Robin MARX <robin.marx@uhasselt.be>
- Cc: "ietf-http-wg@w3.org Group" <ietf-http-wg@w3.org>
- Message-ID: <CACvHZ2btmiirYsAwe8kMX==BKR0qprdiNs1917Jmj4+QbaPxGw@mail.gmail.com>
Hi,

I now have to play catch-up with all of this. South Africa, great place for employment, where boundaries can be pushed! Cough, cough!! More like unemployment, rubbish and constructive dismissals. There are maybe two things I could highlight, in the hope that the new spec addresses them somewhere. I am going to go through everything very quickly over this weekend, and then I will know for sure. But at least this would highlight the design requirements I would like to see QUIC and HTTP address. However, I see that the draft is nearing closing.

I would separate QUIC/HTTP into two different layers, which I think it currently is:

1. QUIC transport, reliability, and addressing bufferbloat for real-time communications (gaming/voice/robotics).

1.1 Carry a prioritization byte at the datagram and packet level, easily accessible for router hardware to inspect, allowing data in router queues to be interleaved differently. No longer FIFO, which would address bufferbloat.

1.2 This would allow for true real-time communication below QoS that the application can control: for my gaming communication I can send image data payloads, then prioritize voice traffic in comms channels over my data traffic, then prioritize a set of events over the voice traffic. Example: a kill event should have priority over a location event and an environment-change event, so a distributed consensus algorithm can be even more accurate when there is congestion. This would allow real-time communications where wired versus wireless networks can experience sudden and massive impedance mismatches, causing massive packet loss in the last mile of the connection: with a real-time priority byte, copious amounts of data can be buffered at a base station while still being flushed, without needing to retransmit packets over the whole internet from the origin.
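The priority-byte queueing idea in 1.1/1.2 could be sketched roughly as below. This is purely illustrative and not part of any QUIC draft; the `PriorityByteQueue` class, the 0-255 priority scale (0 = most urgent), and the example traffic classes are my own assumptions.

```python
import heapq
import itertools

class PriorityByteQueue:
    """Router-style queue that dequeues by a per-packet priority byte
    (0 = most urgent) instead of strict FIFO. Ties are broken by arrival
    order, so packets of equal priority still leave in FIFO order."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # monotonic arrival counter for stable ties

    def enqueue(self, packet: bytes, priority: int) -> None:
        if not 0 <= priority <= 255:
            raise ValueError("priority must fit in one byte")
        heapq.heappush(self._heap, (priority, next(self._seq), packet))

    def dequeue(self) -> bytes:
        _priority, _seq, packet = heapq.heappop(self._heap)
        return packet

# Kill events (0) jump ahead of voice (10), which jumps ahead of bulk data (200),
# regardless of arrival order.
q = PriorityByteQueue()
q.enqueue(b"bulk-data", 200)
q.enqueue(b"voice-frame", 10)
q.enqueue(b"kill-event", 0)
assert q.dequeue() == b"kill-event"
assert q.dequeue() == b"voice-frame"
assert q.dequeue() == b"bulk-data"
```

The tie-breaking counter matters: without it, packets of equal priority would be compared by payload bytes rather than arrival order.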
LTE should allow a mobile handset to communicate a priority for how it would like all communications to be flushed out to it, based on the handset's current position.

1.3 The protocol must support direct memory access (DMA) writes to NICs, so that we can improve internal hardware bandwidth. This would require two buffers: a main header buffer and a data buffer. The main header buffer holds references from which the NIC should read and flush the data out of the NIC, either from a direct memory reference range or a range in the data buffer. One important thing to remember here is that there needs to be the ability to dynamically change the priority of the flushing as new async resources or events are produced, so basically one then needs another micro priority tree from which to pull and flush all the data.

1.4 Ensure that ACKs and NACKs, which have to be sent over the upstream, cannot get caught up in congestion of the upstream; decouple the two. Maybe a QoS identifier for ACK and NACK packets, so that the traffic can easily be shaped and the downstream doesn't get impeded by its dependence on the upstream.

2. HTTP/3, where the server can flush async resources dynamically as they become available.

2.1 Support for advanced pagination hinting, as in the example below.

The traditional approach to pagination (DefaultPageSize=25):

Request: GET http://...?Page=5
Headers: normal
Response: 25 items, traditional

Repeat, or change the page size in the request, but this prevents good use of caching all round.

New approach (DefaultPageSize=25):

Request: GET http://...?Page=5
Headers: Hint-PageSize=200

The server recognises the page hint and pulls 200 aligned records starting at page 5. It then responds with 25 items, like a traditional response to the GET request. It then pushes pages 6 through 12 (200/PageSize = 8 pages in total, starting at page 5) as separate responses at the minor granularity of 25 items a page.
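The push-range arithmetic described above can be sketched as follows. The `Hint-PageSize` header is the email's proposal; the helper name `pages_to_push` and its signature are my own illustration, not any specified API.

```python
def pages_to_push(requested_page: int, default_page_size: int, hint_page_size: int):
    """Given a Hint-PageSize header value, return the extra pages the server
    would push after answering the requested page itself.
    E.g. Page=5, DefaultPageSize=25, Hint-PageSize=200 -> pages 6..12."""
    if hint_page_size % default_page_size:
        raise ValueError("hint should be a multiple of the default page size")
    n_pages = hint_page_size // default_page_size  # 200 / 25 = 8 pages in total
    first = requested_page + 1                     # requested page answered normally
    last = requested_page + n_pages - 1            # 5 + 8 - 1 = 12
    return list(range(first, last + 1))

assert pages_to_push(5, 25, 200) == [6, 7, 8, 9, 10, 11, 12]
```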
Push: GET http://...?Page=6
Push: GET http://...?Page=7
Push: GET http://...?Page=8
Push: GET http://...?Page=9
Push: GET http://...?Page=10
Push: GET http://...?Page=11
Push: GET http://...?Page=12

To prevent any race condition where the client requests a page that is still in flight: if the server detects it is still pushing that page and has yet to get an ACK for its completion, it will just ignore the request, or transmit an "in flight" response for debugging. This would allow the client to simply match outgoing with incoming requests while in flight. This could be difficult if one were to parallelize all the page requests asynchronously, because one would then need batch request headers so the server can align the requests. So something like this:

Headers:
- BatchRequestGuidGroupID:
- BatchRequestFirstPageSequence:

This would allow multiple parallel HTTP requests to the server, which could then figure out which pages to return as pushes.

3. Cisco and equipment manufacturers.

3.1 Implement more advanced sub-QoS queuing management in QUIC.

3.2 Implement the ability to dynamically reserve different capacities (a.k.a. shaping), but interleave packets at specific reserved bandwidth allocations, to preserve real-time communications and video quality:

Video: 50%. Bandwidth reserved at the client level, based on the subscription: 40 Mbit/s of dedicated bandwidth reserved to the video server across the internet, whether used or unused; the client pays a premium for a bandwidth reserve for their 4K video TV. No more downscaling under congestion.
Calls: 20%. No more jitter and call break-up.
Gaming: 20%.
Data: 10%. This is where loads of congestion can happen.

Basically, per fibre connection endpoint there is a profile: the backbones would have to look up the video bandwidth subscription being paid for, then dynamically reserve that bandwidth when they detect a connection being opened in the video space.
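A very rough sketch of the reserved-share idea in 3.2, using the email's percentages. This is a toy per-round byte-budget allocator, not a real shaper (no deficit carry-over, no true packet interleaving); the `SHARES` table and function name are my own assumptions.

```python
from collections import deque

SHARES = {"video": 50, "calls": 20, "gaming": 20, "data": 10}  # percent of capacity

def interleave_round(queues, budget_bytes):
    """One scheduling round: each traffic class may send up to its reserved
    share of the byte budget, so a congested 'data' class cannot starve the
    reserved video, calls, or gaming bandwidth."""
    sent = []
    for cls, share in SHARES.items():
        quota = budget_bytes * share // 100
        used = 0
        q = queues[cls]
        while q and used + len(q[0]) <= quota:
            pkt = q.popleft()
            used += len(pkt)
            sent.append(cls)
    return sent

# All four classes fully congested: 10 packets of 100 bytes each queued.
queues = {cls: deque([b"x" * 100] * 10) for cls in SHARES}
round1 = interleave_round(queues, 1000)
assert round1.count("video") == 5  # video keeps 50% of the 1000-byte budget
assert round1.count("data") == 1   # data is capped at its 10% reserve
```

A production shaper would carry unused quota across rounds (as in deficit round robin) and interleave packets within the round; this sketch only shows the per-class budget capping.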
3.3 Implement new bandwidth handling for ACK and NACK packets on the upstream, because congestion on the upstream will result in reduced ACKs and increased RTT. Otherwise the downstream gets impeded and will not achieve maximum bandwidth, because it depends on upstream congestion; the two have yet to be decoupled, as far as I know or can see.

4. 6G

4.1 Implement a datagram-level flushing priority mechanism that the handset can communicate to the base station, based on what the current user is doing.

4.2 Address congestion of upstream NACKs and ACKs, using TDMA or something more impressive, so that downstream bandwidth is not constrained by a dependency on the upstream ACK/NACK.

That is everything that I know to be a current problem. I can forward the hackathon suggestion to Vodacom too, for the Cisco and dynamic pipe reservation for traffic on pipes. I hate bad-quality video and downsampling. Sorry, I'll catch up!

Kind Regards,

Wesley Oliver

On Wed, Jul 3, 2019 at 6:11 PM Robin MARX <robin.marx@uhasselt.be> wrote:

> Hello everyone,
>
> As I expect most of you know, there has been quite a bit of talk over at
> the QUIC / HTTP/3 working group recently on what to do with the dependency
> tree / prioritization system from HTTP/2 in H3.
>
> There are two major issues:
> - There are a few subtle HOL-blocking issues in porting the system to H3 due
> to the way streams work in QUIC
> - Quite a few people feel H2's approach is overly complex and can/should
> be simplified
>
> For now, the QUIC wg has taken the stance to try and stay as close to H2
> as possible (e.g., exclusive priorities had been removed before, but are
> now back in the editor's draft of HTTP/3).
> The QUIC wg wishes to see more real implementation experience and
> experimental results for new proposals before considering them. It also
> feels this issue is best discussed in the httpbis wg. And thus we come to
> this email.
>
> We have been recently running some prioritization experiments for a
> variety of schemes and proposals using our own HTTP/3 implementation in the
> Quicker project.
> We have discussed our findings in a paper, which you can find in
> attachment and also on https://h3.edm.uhasselt.be.
>
> The paper attempts to provide a rather extensive discussion of the issues
> with H2's setup, H3's approaches so far and the alternative proposals that
> have been made.
> As I appreciate not everyone has the time to read all of that, our main
> findings are:
>
> 0) The current proposal (which should be "draft-21" soon) for HTTP/3 works
> well in practice, though the semantics of the "orphan placeholder" might
> still need to be tweaked a bit.
>
> 1) Simpler setups are also perfectly viable. The main contender, from
> Patrick Meenan (
> https://github.com/pmeenan/http3-prioritization-proposal/blob/master/README.md)
> would be a good candidate for this.
>
> 2) However, there is no single scheme that produces ideal results for all
> web pages (e.g., the scheme that is best for page A can perform really
> badly for page B). So dropping everything for a single, simpler approach is
> potentially sub-optimal. Similarly, the current approach of browsers of
> just using a single scheme for all pages might need revision.
>
> 3) Ideally, we should thus allow the scheme to be tweaked per-page, either
> via a mechanism where the server indicates the optimal scheme to the client
> (which we propose in the paper), or where the client communicates
> additional metadata to the server (e.g., resource is blocking/non-blocking,
> can be processed progressively, ...) to make server-side prioritization
> easier (Kazuho Oku is working on a proposal for this, but doesn't feel it's
> ready to share here yet).
>
> 4) In order to make progress on H3, it's probably best to stick with the
> draft-21 approach (potentially with a few more small tweaks) and define a
> new approach as an extension or implement it at the higher HTTP layer
> (i.e., as HTTP headers, rather than H3 frames). However, would that then
> find enough adoption fast enough...
>
> While I'll be the first to admit our study isn't terribly extensive or
> fully realistic (we tested 40 pages in lab settings without a real
> browser), I still feel our results are enough to have a basis to continue
> the discussion on. We of course encourage others to share their results as
> well.
> Some more background information can be found here as well:
> https://github.com/quicwg/wg-materials/blob/master/interim-19-05/priorities.pdf
>
> I'm a bit unsure what the best questions are to ask at this point, but
> some attempts:
> - Are implementers willing to implement 2 completely different approaches
> (1 for H3, 1 for H2)?
> - Are (browser) implementers willing to consider supporting multiple
> schemes (specific trees)?
> - Are (server) implementers willing to create/support more complex
> (user-driven) server-side prioritization config/APIs?
> - How important is it to move to a simpler (and thus less flexible) setup?
> - Should this be a blocker for HTTP/3 or not?
>
> Looking forward to your feedback.
> With best regards,
> Robin
>
> --
>
> Robin Marx
> PhD researcher - web performance
> Expertise centre for Digital Media
>
> T +32(0)11 26 84 79 - GSM +32(0)497 72 86 94
>
> www.uhasselt.be <http://www.uhasselt.be>
> Universiteit Hasselt - Campus Diepenbeek
> Agoralaan Gebouw D - B-3590 Diepenbeek
> Kantoor EDM-2.05

--
GitHub: https://github.com/wesleyolis
LinkedIn: https://www.linkedin.com/in/wesley-walter-anton-oliver-85466613b/
Blog/Website: https://sites.google.com/site/wiprogamming/Home
Skype: wezley_oliver
MSN messenger: wesley.olis@gmail.com
Received on Thursday, 4 July 2019 07:05:53 UTC