- From: Wesley Oliver <wesley.olis@gmail.com>
- Date: Thu, 4 Jul 2019 13:48:33 +0200
- To: Robin MARX <robin.marx@uhasselt.be>
- Cc: "ietf-http-wg@w3.org Group" <ietf-http-wg@w3.org>
- Message-ID: <CACvHZ2Yxvy3GyXRXy2-9xjS3NqoeFNEtsNcTwWEGntS5pwEDOg@mail.gmail.com>
Name: Wesley Walter Anton Oliver

1. Give an overview of your idea

Investigate the possibilities of improving the network quality of real-time communication that requires guarantees of capacity, latency and throughput. Evaluate the different routing protocols, or the ones that Vodacom uses in its backbones and base stations (BS), such as OSPF and BGP, and see whether we can dynamically configure and change the ratio in which packet traffic is interleaved over an aggregate pipeline. This would require detecting IP and port combinations for different services, in greater detail than Quality of Service at the IP layer, and also the ability to dynamically change the ratio on detection and lookup of the client's account. I am sure we are all sick of the terrible call quality and real-time video throughput of the current internet.

2. What company strategy does your idea speak to and please explain? Your idea can speak to more than one strategy. (Cost Reduction, Revenue Growth and/or Improve NPS)

A higher-quality network for real-time communications, more product diversity, and guarantees you can provide to the customer so that he better understands what he is getting. In the long term, better planning of network capacity where it is required. It opens the door to different types of product offerings that can differentiate us in the market and make us the market leader.

3. What do you believe is the 5-year value/benefit of your idea to Vodacom? This can be revenue, cost saving, NPS points, etc.

Revenue, the ability to offer a more diverse set of packages, cost saving (better capacity-planning capabilities), more reliability and simpler diagnosis of capacity complaints. Especially with TV going to require 100 Mbit lines for something like 16K video.

4. What makes your idea innovative?

The simple fact that nobody is doing this, and that the industry is dragging its feet when it comes to quality. Nobody will commit to providing guaranteed capacity at this level of aggregation between ISP and end client; everything else is typically best effort. Be the first network able to truly provide real-time communication guarantees beyond what the protocols currently under development offer.

5. Does the commercial implementation of your idea require coding?

I am sure there are various scripts, configuration and integration needed to build the product out across the whole network on a grand scale.

6. Do you know if your idea will impact on any of Vodacom's current systems? If so, how?

Hopefully no more shitty real-time communication and calls with packet jitter and delays. When implemented, it would improve network quality and the types of services offered as it is slowly rolled out and tested across the network. It will potentially require reconfiguration of Cisco routing equipment, and some other equipment, to the higher and better standards that the international community seems to have adopted, as far as I am aware.

7. Are you aware if your idea is currently on a roadmap within Vodacom, if yes please detail

No chance. Nobody in the internet business, apart from the users, gives a hoot about real-time communication quality. Few people like to challenge the way things are done in order to make them better.

8. Detail your idea and outline how you will create a prototype to demonstrate at the hackathon

This would first require knowing what the backend systems are doing, then investigating the current capabilities of the router OS software and looking for a way to write scripts and code to configure those devices in this way. If they can be configured in this way, test the configuration and see that it works, which would require some client/server communication with faked data.
The next step is to integrate dynamic configuration and automatic detection, looking up the product's reserved capacity from a local cache server at the BS site. Start with the basics and build on from there. This would require building a test network using their equipment over a single simulated backbone pipeline in the office, connected to a simulating PC, to confirm the routing and interleaving.

9. Please state why you think your idea should be chosen

Traditionally the internet has been open; however, the data moving over the networks today now requires guarantees for the quality of the end service provided to the end user. Traditionally we have all resisted this type of package creation, but in today's world we want guarantees, which means we need to know from which physical points in the network we are required to provide them. I would not buy a package for video streaming where you can't guarantee me the quality of a 16K picture, because why waste all the money on the TV in the first place? This opens up the market to provide more dynamic packages that are better than the current ones. It improves the ability to plan, because clients are now committing to a package that offers them their desired quality. Mainly it is so that we can disrupt the market with better-quality service offerings and open the door to many more applications in the future that other networks won't be able to provide without undergoing the same changes. They will be on the back foot trying to figure it all out, while we push the boundaries of quality and real-time data reliability.

Hi,

What are the limitations? I was thinking of the ability to set up Cisco routing equipment such that it can route internet traffic over the network better and achieve better overall real-time communication and reliability than we currently seem to have and than QoS can provide.

There are different types of traffic, which can be categorised as follows:

1. Reserved minimum capacity on a pipe (backbone link)
   a. Smart TV video: SD, 720 HD, 1080 HD, QHD, 8K, 16K
   b. Mobile voice calls
   c. Radio streaming services
   d. VoIP
   e. Real-time medical surgery
2. Non-reserved: basically all remaining data traffic, which can experience congestion, packet delays and jitter without that affecting the quality and performance of the service.

Look into the ability to dynamically set up routing and firewall policies for BGP and the likes of OSPF, ideally whichever of these Vodacom actually uses in its telecoms backend. What one is looking to do is determine the ratio of the different reserved capacities above, so that the pipe on a backbone stretch interleaves packets in, ideally, a 1,2,3,4,5,6 round robin from each category's queue. This would ensure that, as long as the real-time transmission of packets remains under its reserved capacity per second, there will be no congestion and thus no delays, jitter, call break-ups, or killing anyone in a real-time operation.

What is needed is for each router to be able to detect the different types of streams and then look up the connection's account subscription information, to determine what the reservations are: whether they are paying for a 5 Mbit or a 50 Mbit reservation to the respective services. So detect the source and category, look up the type of reservation required, and then update the aggregate pipeline's total capacity reservation while those data packets are being sent and still detected. Then, for each mobile call over the link, also register it and dynamically update the ratios (a rough sketch of this follows below).
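Very roughly, the behaviour I have in mind for the interleaving looks something like the Python sketch below. It is illustrative only: the class and method names are made up, nothing here is a real router API, and the idle-detection/burst side is reduced to a simple release call.

from collections import deque

# Rough sketch: a backbone link interleaves queued packets from per-category
# queues in proportion to the reserved capacity currently registered for each
# category. Whatever is not reserved falls through to best-effort "data".

class ReservedLink:
    def __init__(self, capacity_mbit):
        self.capacity_mbit = capacity_mbit
        self.queues = {"data": deque()}   # category -> queued packets
        self.reserved = {}                # category -> Mbit/s currently reserved

    def register_stream(self, category, reservation_mbit):
        # Called when a router detects a new stream and has looked up the
        # client's subscription (e.g. a 5 Mbit vs a 50 Mbit video reservation).
        self.reserved[category] = self.reserved.get(category, 0) + reservation_mbit
        self.queues.setdefault(category, deque())

    def release_stream(self, category, reservation_mbit):
        # Called when the stream is no longer detected (idle timeout), so the
        # unused reservation goes back to general/burst data traffic.
        self.reserved[category] = max(0, self.reserved.get(category, 0) - reservation_mbit)

    def enqueue(self, category, packet):
        self.queues.setdefault(category, deque()).append(packet)

    def weights(self):
        # Reserved categories weigh in proportion to their reservations; the
        # leftover capacity on the pipe becomes the weight of best-effort data.
        leftover = max(0, self.capacity_mbit - sum(self.reserved.values()))
        weights = dict(self.reserved)
        weights["data"] = max(1, leftover)
        return weights

    def drain(self, budget_packets):
        # Weighted round robin over the per-category queues.
        sent = []
        weights = self.weights()
        total = sum(weights.values())
        for category, weight in weights.items():
            share = max(1, round(budget_packets * weight / total))
            queue = self.queues.get(category, deque())
            while share and queue:
                sent.append((category, queue.popleft()))
                share -= 1
        return sent

# Example: a 100 Mbit pipe with a 50 Mbit video and a 10 Mbit voice reservation.
link = ReservedLink(capacity_mbit=100)
link.register_stream("video", 50)
link.register_stream("voice", 10)
for i in range(5):
    link.enqueue("video", f"v{i}")
    link.enqueue("voice", f"c{i}")
    link.enqueue("data", f"d{i}")
print(link.drain(budget_packets=10))   # mostly video, some data, a little voice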
The remaining capacity on the line can then be used for general data that doesn't need real-time communication guarantees to avoid congestion. So when less than the maximum reservable capacity on the line is currently in use, the rest can be used for data, which could lead to the ISP (Vodacom) using it as burst data to provide better data packages: a base of, say, 10 Mbit with a contention ratio of 20:1, like all ISPs used to do back in the day, plus the ability to burst to 100 Mbit or 1 Gbit when the backbone's reservation capacity is under-utilised, letting the data category take advantage of it.

One also needs to look into detecting when the transmission of streaming packets has stopped, to dynamically reduce and update the reserved capacity on the line. In the long run one needs a protocol to do this, so that for medical operations there is no detection; it becomes more of a registration process that guarantees, between source and destination, that the reservation is put in place. The routing system is going to have to notify both ends immediately, with a super-high packet priority, when a reservation has broken down. Otherwise real-time operations could have some bad results, unless the machine can only move to a position on a packet being received and the distance of change per position packet can be guaranteed to be kept to ten-thousandths of a millimetre. We still need both parties to know that things are going wrong as soon as possible, so they can take preventative measures immediately.

I guess this would be the investigation into the backend routing equipment, firewalls and tech that Vodacom uses, so we can see whether we are able to configure the devices to operate in such a way, and if so, see how far we can take it. Ultimately it will change the ISP market: selling voice-package reservations, and SD/HD+ video reservations to video providers with mirrored video servers that can warrant the streaming quality experience, while selling separate traditional ISP packages for data with their contention ratios. The whole industry is going to have to change in the long term.

Kind Regards,

Wesley Oliver

On Thu, Jul 4, 2019 at 9:05 AM Wesley Oliver <wesley.olis@gmail.com> wrote:

> Hi,
>
> I now have to play catch-up with all of this. South Africa, great place for
> employment, where boundaries can be pushed! Cough cough! More like
> unemployment, rubbish and constructive dismissals.
>
> Just two things that I would like to highlight, in the hope that the new
> spec addresses them somewhere. I am going to go through everything very
> quickly over this weekend, and then I will know for sure. But at least this
> highlights the design requirements I would like to see QUIC and HTTP
> address. However, I see that the draft is nearing a close.
>
> I would separate QUIC+HTTP into two different layers, which I think it
> currently is:
>
> 1. QUIC transport: reliability and addressing buffer bloat for real-time
> communications (gaming/voice/robotics).
>
> 1.1 A payload priority at datagram and packet level, using a prioritization
> byte which is easily accessible for router hardware to inspect, allowing
> data in router queues to be interleaved differently, no longer FIFO, which
> would address buffer bloat (a rough sketch of what I mean follows below).
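> To make that concrete, the toy Python sketch below shows the kind of
> non-FIFO draining I picture a router queue doing with such a byte. It is
> purely illustrative: there is no real router or QUIC API here, and the
> priority values are invented.
>
> import heapq
> import itertools
>
> # Toy router egress queue that drains by a per-packet priority byte
> # (0 = most urgent) instead of strict FIFO; ties keep arrival order.
> class PriorityEgressQueue:
>     def __init__(self):
>         self._heap = []
>         self._arrival = itertools.count()
>
>     def enqueue(self, priority_byte, packet):
>         heapq.heappush(self._heap, (priority_byte, next(self._arrival), packet))
>
>     def dequeue(self):
>         if not self._heap:
>             return None
>         _, _, packet = heapq.heappop(self._heap)
>         return packet
>
> q = PriorityEgressQueue()
> q.enqueue(200, "bulk image data")
> q.enqueue(10, "voice frame")    # voice overtakes the bulk data already queued
> q.enqueue(5, "kill event")      # a critical game event overtakes even voice
> print(q.dequeue(), q.dequeue(), q.dequeue(), sep=" | ")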
>
> 1.2 This would allow for true real-time communication below QoS that the
> application can control. For my gaming communication I could send image
> data payloads with a custom declared priority, prioritise voice traffic in
> comms channels over my data traffic, and then prioritise a set of events
> over the voice traffic. Example: a kill event should have priority over a
> location event and an environment-change event, so the distributed
> consensus algorithm can be even more accurate when there is congestion.
> This would allow real-time communication where wired versus wireless
> networks can experience sudden and massive impedance mismatches, causing
> massive packet loss in the last mile of the connection: with a real-time
> priority byte, copious amounts of data can be buffered at a base station
> while still being flushed, without needing to retransmit packets from the
> origin across the whole internet. LTE should allow the mobile to
> communicate a priority for how it would like all communications flushed
> out to it, based on the handset's current position.
>
> 1.3 The protocol must support direct-memory-access writes to NICs, so that
> we can improve internal hardware bandwidth. This would require two
> buffers, a main header buffer and a data buffer. The main header buffer
> holds references from which the NIC should read and flush the data out,
> either from a direct memory reference range or from a range in the data
> buffer. One important thing to remember here is that there needs to be the
> ability to dynamically change the priority of the flushing as new async
> resources or events are produced, so basically another micro priority tree
> from which to pull and flush all the data.
>
> 1.4 Ensure that ACKs and NAKs, which have to be sent on the upstream,
> cannot get caught up in congestion of the upstream: decouple them. Maybe a
> QoS identifier for ACK and NAK packets, so that the traffic can easily be
> shaped and the downstream doesn't get impeded by its dependence on the
> upstream.
>
> 2. HTTP 3.0, where the server can flush async resources dynamically as
> they become available.
>
> 2.1 Support for advanced pagination hinting; example below.
>
> The traditional approach to pagination:
>
> DefaultPageSize=25
> --
>
> Request:
>
> GET http://...?Page=5
>
> Headers: normal
>
> Response: 25 items, as per tradition
> ---
>
> Repeat... or change the page size in the request, but this prevents good
> use of caching all round.
>
> New approach:
>
> DefaultPageSize=25
> --
>
> Request:
>
> GET http://...?Page=5
>
> Headers: Hint-PageSize=200
> ---
>
> The server recognises the page hint and pulls 200 aligned records starting
> at Page 5. It then responds with 25 items, like the traditional response
> to the GET request, and then pushes Page 6 through to 5 + 200/PageSize - 1
> = 12 as separate responses at the finer granularity of 25 items a page:
>
> Push: GET http://...?Page=6
> Push: GET http://...?Page=7
> Push: GET http://...?Page=8
> Push: GET http://...?Page=9
> Push: GET http://...?Page=10
> Push: GET http://...?Page=11
> Push: GET http://...?Page=12
>
> To prevent race conditions where the client requests a page whose push is
> still in flight: if the server detects that it is still pushing that page
> and has yet to get the ACK for its completion, it can just ignore the
> request, or transmit an "in flight" response for debugging. This would
> allow the client to match outgoing requests with incoming responses while
> they are in flight (a rough sketch of the server-side logic follows below).
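> Roughly, the server-side decision I have in mind looks like the little
> Python sketch below. It is illustrative only: "Hint-PageSize" and the
> function name are my own invention, not anything from an existing spec.
>
> DEFAULT_PAGE_SIZE = 25
>
> def plan_pagination(requested_page, headers, page_size=DEFAULT_PAGE_SIZE):
>     # The client asks for one page but hints that it will eventually want
>     # Hint-PageSize records, so the server answers the requested page
>     # normally and pre-pushes the follow-on pages at the same granularity.
>     hint = int(headers.get("Hint-PageSize", page_size))
>     pages_covered = max(1, hint // page_size)        # e.g. 200 // 25 = 8
>     last_page = requested_page + pages_covered - 1   # e.g. 5 + 8 - 1 = 12
>     pushed = list(range(requested_page + 1, last_page + 1))
>     return requested_page, pushed
>
> # Example: GET ...?Page=5 with header "Hint-PageSize: 200"
> answer, pushes = plan_pagination(5, {"Hint-PageSize": "200"})
> print(answer, pushes)   # 5 [6, 7, 8, 9, 10, 11, 12]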
>
> This could be difficult if one were to parallelise all the page requests
> asynchronously, because then one needs a batch request header so that the
> server can align the requests. So something like this:
>
> Headers:
> - BatchRequestGuidGroupID:
> - BatchRequestFirstPageSequence:
>
> This would allow multiple parallel HTTP requests to the server, which can
> then figure out which pages to return as pushes.
>
> 3. Cisco and equipment manufacturers.
>
> 3.1 Implement more advanced sub-QoS queuing management in QUIC.
>
> 3.2 Implement the ability to dynamically reserve different capacities, aka
> shaping, but interleaving packets at specific reserved bandwidth
> allocations to preserve real-time communications and video quality. For
> example:
>
> Video: 50%. Bandwidth reserved at client level, based on a subscription of
> 40 Mbit of dedicated bandwidth to the video server across the internet,
> whether used or unused; they pay a premium for a bandwidth reserve for
> their 4K video TV. No more downscaling during congestion.
> Calls: 20%. No more jitter and call break-up.
> Gaming: 20%
> Data: 10%. What can happen here: loads of congestion.
>
> Basically, per fibre connection endpoint there is a profile; the backbones
> would have to look up the video bandwidth reservation being paid for, and
> then dynamically reserve that bandwidth when they detect a connection
> being opened in the video space.
>
> 3.3 Implement new bandwidth handling for ACK and NAK packets on the
> upstream, because congestion of the upstream will result in reduced ACKs
> and increased RTT. Otherwise the downstream gets impeded and will not
> achieve maximum bandwidth, because it is dependent on upstream congestion;
> the two have yet to be decoupled, as far as I know or can see.
>
> 4. 6G
>
> 4.1 Implement a datagram-level flushing priority mechanism that the
> handset can communicate to the BS, based on what the user is currently
> doing.
>
> 4.2 Address congestion of upstream NAKs and ACKs, using TDMA or something
> more impressive, so that downstream bandwidth is not constrained by a
> dependency on the upstream ACK/NAK.
>
> That is everything that I know to be a current problem...
> I can forward the hackathon suggestion to Vodacom too, for the Cisco and
> dynamic pipe reservation for traffic on pipes.
> I hate bad-quality video and downsampling.
>
> Sorry, I am only catching up now!
>
> Kind Regards,
>
> Wesley Oliver
>
> On Wed, Jul 3, 2019 at 6:11 PM Robin MARX <robin.marx@uhasselt.be> wrote:
>
>> Hello everyone,
>>
>> As I expect most of you know, there has been quite a bit of talk over at
>> the QUIC / HTTP/3 working group recently on what to do with the
>> dependency tree / prioritization system from HTTP/2 in H3.
>>
>> There are two major issues:
>> - There are a few subtle HOL-blocking issues in porting the system to H3
>> due to the way streams work in QUIC
>> - Quite a few people feel H2's approach is overly complex and can/should
>> be simplified
>>
>> For now, the QUIC wg has taken the stance to try and stay as close to H2
>> as possible (e.g., exclusive priorities had been removed before, but are
>> now back in the editor's draft of HTTP/3).
>> The QUIC wg wishes to see more real implementation experience and
>> experimental results for new proposals before considering them. It also
>> feels this issue is best discussed in the httpbis wg. And thus we come to
>> this email.
>>
>> We have been recently running some prioritization experiments for a
>> variety of schemes and proposals using our own HTTP/3 implementation in
>> the Quicker project.
>> We have discussed our findings in a paper, which you can find in
>> attachment and also on https://h3.edm.uhasselt.be.
>>
>> The paper attempts to provide a rather extensive discussion of the issues
>> with H2's setup, H3's approaches so far and the alternative proposals
>> that have been made.
>> As I appreciate not everyone has the time to read all of that, our main
>> findings are:
>>
>> 0) The current proposal (which should be "draft-21" soon) for HTTP/3
>> works well in practice, though the semantics of the "orphan placeholder"
>> might still need to be tweaked a bit.
>>
>> 1) Simpler setups are also perfectly viable. The main contender, from
>> Patrick Meenan (
>> https://github.com/pmeenan/http3-prioritization-proposal/blob/master/README.md)
>> would be a good candidate for this.
>>
>> 2) However, there is no single scheme that produces ideal results for all
>> web pages (e.g., the scheme that is best for page A can perform really
>> badly for page B). So dropping everything for a single, simpler approach
>> is potentially sub-optimal. Similarly, the current approach of browsers
>> of just using a single scheme for all pages might need revision.
>>
>> 3) Ideally, we should thus allow the scheme to be tweaked per-page,
>> either via a mechanism where the server indicates the optimal scheme to
>> the client (which we propose in the paper), or where the client
>> communicates additional metadata to the server (e.g., resource is
>> blocking/non-blocking, can be processed progressively, ...) to make
>> server-side prioritization easier (Kazuho Oku is working on a proposal
>> for this, but doesn't feel it's ready to share here yet).
>>
>> 4) In order to make progress on H3, it's probably best to stick with the
>> draft-21 approach (potentially with a few more small tweaks) and define a
>> new approach as an extension or implement it at the higher HTTP layer
>> (i.e., as HTTP headers, rather than H3 frames). However, would that then
>> find enough adoption fast enough...
>>
>> While I'll be the first to admit our study isn't terribly extensive or
>> fully realistic (we tested 40 pages in lab settings without a real
>> browser), I still feel our results are enough to have a basis to continue
>> the discussion on. We of course encourage others to share their results
>> as well.
>> Some more background information can be found here as well:
>> https://github.com/quicwg/wg-materials/blob/master/interim-19-05/priorities.pdf
>>
>> I'm a bit unsure what the best questions are to ask at this point, but
>> some attempts:
>> - Are implementers willing to implement 2 completely different approaches
>> (1 for H3, 1 for H2)?
>> - Are (browser) implementers willing to consider supporting multiple
>> schemes (specific trees)?
>> - Are (server) implementers willing to create/support more complex
>> (user-driven) server-side prioritization config/APIs?
>> - How important is it to move to a simpler (and thus less flexible)
>> setup?
>> - Should this be a blocker for HTTP/3 or not?
>>
>> Looking forward to your feedback.
>> With best regards,
>> Robin
>>
>> --
>>
>> Robin Marx
>> PhD researcher - web performance
>> Expertise centre for Digital Media
>>
>> T +32(0)11 26 84 79 - GSM +32(0)497 72 86 94
>>
>> www.uhasselt.be <http://www.uhasselt.be>
>> Universiteit Hasselt - Campus Diepenbeek
>> Agoralaan Gebouw D - B-3590 Diepenbeek
>> Kantoor EDM-2.05
>>
>
> --
> ----
> GitHub: https://github.com/wesleyolis
> LinkedIn: https://www.linkedin.com/in/wesley-walter-anton-oliver-85466613b/
> Blog/Website: https://sites.google.com/site/wiprogamming/Home
> Skype: wezley_oliver
> MSN messenger: wesley.olis@gmail.com

--
----
GitHub: https://github.com/wesleyolis
LinkedIn: https://www.linkedin.com/in/wesley-walter-anton-oliver-85466613b/
Blog/Website: https://sites.google.com/site/wiprogamming/Home
Skype: wezley_oliver
MSN messenger: wesley.olis@gmail.com
Received on Thursday, 4 July 2019 11:49:09 UTC