2017-05-11 - Internet Protocol sub-QoS Packet Prioritization for Multiplexing Streams (HTTP/QUIC)

Hi,

I would like to get this idea out into the wild. It is a high-level concept
looking at where and how one can improve on and work around the limitations
of bufferbloat and the TCP congestion window. My current understanding is
that this is an architectural limitation which we cannot move past without
implementing new IP-layer functionality.

I would still like to submit this to the Internet Protocol working group;
as the title suggests, it is not limited to the QUIC/HTTP protocol.

Kind Regards,

Wesley Oliver

https://docs.google.com/document/d/1kjL7R46GMITykS8T61Nv83CX2zsFz4zjxV51WoMQXp8/edit

Internet Protocol sub-QoS Packet Prioritization for Multiplexing Streams
(HTTP/QUIC)

To address the issues of bufferbloat and the TCP congestion window.

https://datatracker.ietf.org/doc/rfc7540/?include_text=1

Abstract

The HTTP application-level communication protocol, which runs on top of the
TCP/IP or UDP/IP transport layer, has been given a lot of attention by
Google in an attempt to improve the speed at which web pages load and to
increase the efficiency of bandwidth usage. These core improvements have
been included in, and form the basis of, HTTP 2.0. Google has also started
working on QUIC, which runs the HTTP 2.0 protocol over UDP to address the
TCP session issues associated with mobile devices, whose IP addresses change
continually so that a new TCP session has to be negotiated, slowing down and
adding overhead to page loads and to interactive communication such as
WebSockets.

This document attempts to tackle bufferbloat and the TCP congestion window,
which prevent changing the ordering of packets while they are in transit:
new packets cannot be prioritized over already-transmitted packets so that
they reach the endpoint before lower-priority ones.

Sub-QoS packet prioritization in transit

At each node in the network, packets with higher priority will be
fast-forwarded to the front of the queue to be retransmitted to the next
node; routers will be free to implement this behaviour however they like, to
make it as efficient as possible.
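
As a rough sketch of the intended per-node behaviour (purely illustrative,
with made-up types; a real router would implement this in its own forwarding
path), the C fragment below inserts an arriving packet ahead of
already-queued lower-priority packets:

   /* Minimal sketch of per-node sub-QoS reordering (hypothetical types). */
   #include <stdio.h>

   #define QUEUE_CAP 64

   struct queued_packet {
       unsigned char priority;   /* the proposed priority byte */
       unsigned int  seq;        /* network-level packet number */
   };

   struct egress_queue {
       struct queued_packet slots[QUEUE_CAP];
       int len;
   };

   /* Insert so that higher-priority packets sit nearer the head of the queue.
    * Returns 0 on success, -1 if the queue is full (caller may then drop). */
   static int enqueue_prioritized(struct egress_queue *q, struct queued_packet p)
   {
       if (q->len == QUEUE_CAP)
           return -1;
       int i = q->len;
       while (i > 0 && q->slots[i - 1].priority < p.priority) {
           q->slots[i] = q->slots[i - 1];   /* shift lower-priority packets back */
           i--;
       }
       q->slots[i] = p;
       q->len++;
       return 0;
   }

   int main(void)
   {
       struct egress_queue q = { .len = 0 };
       enqueue_prioritized(&q, (struct queued_packet){ .priority = 1, .seq = 100 });
       enqueue_prioritized(&q, (struct queued_packet){ .priority = 9, .seq = 101 });
       for (int i = 0; i < q.len; i++)   /* seq 101 now leaves first */
           printf("forward seq=%u prio=%u\n", q.slots[i].seq, q.slots[i].priority);
       return 0;
   }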

If a node does not have enough memory to buffer higher-priority packets,
then it can drop one or more packets and transmit a resource-constrained
packet carrying their sequence numbers, or a range of sequence numbers, back
to the source. Those packets will then have to be retransmitted. This would
be new functionality in the extended IP layer, the details of which would
have to be investigated.
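
To make the idea concrete, a resource-constrained notification might carry a
range of dropped network-level packet numbers back to the source. The layout
below is purely illustrative and not taken from any existing specification:

   /* Illustrative layout for a "resource constrained" notification carrying
    * a range of dropped network-level packet numbers (entirely hypothetical). */
   #include <stdint.h>
   #include <stdio.h>

   struct resource_constrained_msg {
       uint8_t  type;        /* hypothetical message-type identifier */
       uint8_t  reason;      /* e.g. 1 = buffer exhausted at forwarding node */
       uint32_t first_seq;   /* first dropped network-level packet number */
       uint32_t last_seq;    /* last dropped packet number (inclusive range) */
   };

   int main(void)
   {
       struct resource_constrained_msg m = {
           .type = 0x01, .reason = 1, .first_seq = 5120, .last_seq = 5123
       };
       printf("source must retransmit packets %u..%u\n",
              (unsigned)m.first_seq, (unsigned)m.last_seq);
       return 0;
   }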

This resource-constrained packet contains IP (network-level) packet numbers,
which would need to be correlated back to transport-layer packet numbers.
That would require a callback interface from the IP layer which publishes
the IP packet number for each change of the sub-QoS prioritization to an
application- or transport-layer buffer/callback, so that the transport or
application layer could retransmit the packet.
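
One possible shape for that callback interface, sketched here in C with
invented names, would let the layer above record which IP packet number was
sent with which priority, so that it can retransmit when a
resource-constrained notification arrives:

   /* Hypothetical callback interface between the IP layer and the layer above. */
   #include <stdint.h>
   #include <stdio.h>

   /* Called by the IP layer each time it emits a packet whose sub-QoS
    * priority differs from the previous one, so the upper layer can map IP
    * packet numbers back to its own sequence numbers. */
   typedef void (*subqos_notify_fn)(uint32_t ip_packet_num,
                                    uint8_t priority,
                                    void *upper_layer_ctx);

   struct ip_subqos_hooks {
       subqos_notify_fn on_priority_change;
       void *upper_layer_ctx;
   };

   /* Example upper-layer handler: record the mapping for later retransmission. */
   static void tcp_record_mapping(uint32_t ip_packet_num, uint8_t priority,
                                  void *ctx)
   {
       (void)ctx;
       printf("IP packet %u sent with priority %u\n",
              (unsigned)ip_packet_num, (unsigned)priority);
   }

   int main(void)
   {
       struct ip_subqos_hooks hooks = {
           .on_priority_change = tcp_record_mapping,
           .upper_layer_ctx = NULL
       };
       /* The IP layer would invoke this as it assembles packets. */
       hooks.on_priority_change(42, 7, hooks.upper_layer_ctx);
       return 0;
   }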

Typically a new factor would be derived for each connection, adjusted
according to the link quality, bandwidth and buffer sizes, which would once
again control how many packets can be in transit at a time, so as to reduce
the number of packets that nodes/routers drop, since dropping would again
degrade performance; but it is the only way to open the door so that we can
further improve dependent-resource multiplexing.
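
The exact factor is not defined here; one simple way to think of it is as an
in-flight packet limit bounded by the bandwidth-delay product and by the
smallest buffer along the path, for example:

   /* Back-of-envelope in-flight limit (illustrative only; this proposal does
    * not define the exact formula). */
   #include <stdio.h>

   static unsigned in_flight_limit(double bandwidth_bps, double rtt_s,
                                   unsigned packet_bytes,
                                   unsigned path_buffer_packets)
   {
       double bdp_packets = (bandwidth_bps * rtt_s) / (8.0 * packet_bytes);
       unsigned limit = (unsigned)bdp_packets;
       if (limit > path_buffer_packets)
           limit = path_buffer_packets;   /* don't exceed what routers can hold */
       return limit ? limit : 1;
   }

   int main(void)
   {
       /* 10 Mbit/s link, 40 ms RTT, 1500-byte packets, 30-packet path buffer */
       printf("in-flight limit: %u packets\n",
              in_flight_limit(10e6, 0.040, 1500, 30));
       return 0;
   }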

Implementation

There are typically two implementations required for this improvement to be
realised, one for IPv4 and the other for IPv6. Each would add a header byte
to its respective IP payload which will act as a sub-QoS packet priority for
packet routing/forwarding.

IPv4

In IPv4 the proposal is to introduce a new IP protocol version number to
identify this enhanced IP protocol without adding any extra processing
overhead. The protocol will remain identical to the original, except that
the first byte in the payload of the packet will be the packet priority,
which routers can now read in order to sub-QoS reprioritize their packets.
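
Purely as an illustration of where that byte would sit (the version value 7
below is a placeholder, not an assigned number), the layout would be the
familiar IPv4 header followed by the priority as the first payload octet:

   /* Illustrative layout: the IPv4 header is unchanged, except that the
    * version field carries a placeholder value and the first payload byte
    * is the sub-QoS priority (nothing here is an assigned number). */
   #include <stdint.h>
   #include <stdio.h>
   #include <string.h>

   #define SUBQOS_IP_VERSION 7   /* placeholder version number only */

   struct ipv4_hdr {              /* classic 20-byte IPv4 header, no options */
       uint8_t  version_ihl;      /* version (4 bits) + header length (4 bits) */
       uint8_t  tos;              /* existing QoS / DSCP field */
       uint16_t total_length;
       uint16_t identification;
       uint16_t flags_fragment;
       uint8_t  ttl;
       uint8_t  protocol;
       uint16_t checksum;
       uint32_t src_addr;
       uint32_t dst_addr;
   };

   int main(void)
   {
       uint8_t packet[1500] = {0};
       struct ipv4_hdr hdr = {0};
       hdr.version_ihl = (SUBQOS_IP_VERSION << 4) | 5;   /* 5 x 32-bit words */
       memcpy(packet, &hdr, sizeof hdr);

       /* The first payload byte is the priority that routers may reorder on. */
       packet[sizeof hdr] = 200;   /* high priority */

       printf("priority byte at offset %zu = %u\n",
              sizeof hdr, (unsigned)packet[sizeof hdr]);
       return 0;
   }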

IPv6

In IPv6 the proposal is to implement an extension header that adds this
additional priority byte to packets, with the constraint that the extension
header must always come after the QoS extension header.
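
In IPv6 terms this could look like a small extension header; the layout
below is invented for illustration only, and no header type or option
numbers are assigned:

   /* Illustrative sub-QoS priority extension header for IPv6.  The layout
    * is invented; nothing here is an assigned IANA number. */
   #include <stdint.h>
   #include <stdio.h>

   struct subqos_ext_hdr {
       uint8_t next_header;   /* protocol of whatever follows (e.g. TCP = 6) */
       uint8_t hdr_ext_len;   /* length in 8-octet units, excluding first 8 */
       uint8_t priority;      /* the additional sub-QoS priority byte */
       uint8_t padding[5];    /* pad to the mandatory 8-octet boundary */
   };

   int main(void)
   {
       struct subqos_ext_hdr ext = {
           .next_header = 6,    /* TCP follows */
           .hdr_ext_len = 0,    /* 8 octets total */
           .priority    = 200,
       };
       printf("sub-QoS extension header: %zu bytes, priority %u\n",
              sizeof(ext), (unsigned)ext.priority);
       return 0;
   }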

Language and Library usage

Traditionally a higher-level protocol such as QUIC, TCP or UDP runs on top
of the IP layer, which is where the change to support this new feature would
occur. In most of these protocols one writes the payload into a buffer,
which is then packaged up by the transport layer into IP packets.

The details of the source-code implementation and of the specific buffers
are not known, so this discussion will take a somewhat high-level,
theoretical approach until one gets into the network-stack code to solve the
problem.

The IP layer will have to expose an extension hook in its interface, which
must allow any higher-level application to write into a buffer a byte offset
and a priority byte that the IP level will later assemble together with the
transport-layer protocol.
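
One hypothetical shape for such a hook, with invented names, would let the
application register (byte offset, priority) pairs against the stream it is
writing, leaving the IP layer to consult them when it cuts packets:

   /* Hypothetical IP-layer hook: the application registers byte offsets and
    * priorities; the IP layer later consults this list when forming packets. */
   #include <stdint.h>
   #include <stdio.h>

   #define MAX_MARKS 32

   struct priority_mark {
       size_t  stream_offset;   /* offset into the application's byte stream */
       uint8_t priority;        /* sub-QoS priority for bytes from this offset */
   };

   struct priority_map {
       struct priority_mark marks[MAX_MARKS];
       int count;
   };

   static int ip_subqos_mark(struct priority_map *map, size_t offset,
                             uint8_t priority)
   {
       if (map->count == MAX_MARKS)
           return -1;
       map->marks[map->count++] = (struct priority_mark){ offset, priority };
       return 0;
   }

   /* The IP layer would look up the priority in effect at a given offset. */
   static uint8_t ip_subqos_lookup(const struct priority_map *map, size_t offset)
   {
       uint8_t prio = 0;   /* default priority */
       for (int i = 0; i < map->count; i++)
           if (map->marks[i].stream_offset <= offset)
               prio = map->marks[i].priority;
       return prio;
   }

   int main(void)
   {
       struct priority_map map = { .count = 0 };
       ip_subqos_mark(&map, 0, 10);        /* bulk data: low priority */
       ip_subqos_mark(&map, 4096, 200);    /* critical resource: high priority */
       printf("priority at offset 5000: %u\n",
              (unsigned)ip_subqos_lookup(&map, 5000));
       return 0;
   }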

How can this be achieved, given that the transport-layer protocol will end
up padding the application buffer from which the offset was taken? That
would cause the IP-level buffer to have different offsets, which would no
longer correlate.

One could use a learning process whereby x packets are sent and, assuming
the TCP protocol header size is constant, the amount of buffering is learnt.
After that one could simply correct for the fixed offset while interleaving
the two buffer streams. This does require a warm-up period, so one would
experience degraded performance except on persistent connections; however,
one could look at implementing a cache keyed on the application initiating
the connections.
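
Under the constant-header assumption, the correction is just a fixed
per-segment subtraction; the sketch below uses an assumed 20-byte TCP header
purely for illustration:

   /* Illustrative offset correction under the constant-header assumption:
    * the application-buffer offset differs from the IP-level offset only by
    * a fixed amount of header bytes per preceding segment. */
   #include <stdio.h>

   #define ASSUMED_TCP_HEADER_BYTES 20   /* assumption: constant TCP header */

   /* Map an offset in the IP payload stream back to the application stream,
    * given which segment (0-indexed) the byte falls in. */
   static long ip_offset_to_app_offset(long ip_offset, long segment_index)
   {
       return ip_offset - (segment_index + 1) * ASSUMED_TCP_HEADER_BYTES;
   }

   int main(void)
   {
       /* Byte in the third segment (index 2): 60 header bytes precede it. */
       printf("app offset = %ld\n", ip_offset_to_app_offset(3000, 2));
       return 0;
   }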

Assume that there is not a lot of pipelining and buffering of the TCP buffer
into IP packets; in other words, for each set of payload bytes that TCP
encodes it calls directly into the IP layer rather than placing the data on
a buffer to be processed later, where the next function would then encode it
at the IP level. Put differently, for each IP packet placed on the buffer to
be flushed over the wire, exactly one TCP payload is read off the TCP
buffer. In this case we can have the IP layer compare the
prioritization-buffer offset with the current position in the TCP
(application) buffer and so correctly pair them.

If all of the previous methods for pairing fail, then the only way for this
to gain traction would be for each transport-layer protocol to implement a
prioritization buffer itself, correct the offsets for its headers and
trailers, and then write the corrected offsets to the IP-layer buffers.

This would mean waiting much longer for every other protocol to support the
improved transport-layer protocols, as opposed to being able to take
advantage of the IP-layer improvement immediately.

Alternatives considered:

Hacking the TCP protocol is possible, as it has the space, but it would
seriously violate how the protocol was designed to work, because TCP's
reassembly of packet ordering would require modification so as not to block
higher-priority packets. Nobody would really appreciate this either, and it
is not simple.

UDP would be a lot easier, as the change could be implemented in Google's
QUIC version of the protocol. It would overcome the TCP congestion-window
issues and the blocking behaviour of the protocol; nevertheless, it would
not solve the low-level hardware router-buffer problems, as routers would
have to be upgraded with detailed support for QUIC datagram inspection,
which would be a lot of work and overkill, and very few vendors would be
willing to provide free firmware updates for it.

Conclusion

Addressing this problem at the appropriate level of the network stack, in
this case the IP (network) layer, would be the right solution, as it would
require the smallest changes, at a lower level of technical complexity, to
each vendor's routing devices.



-- 
Web Site that I have developed:
http://www.swimdynamics.co.za


Skype: wezley_oliver
MSN messenger: wesley.olis@gmail.com

Received on Thursday, 11 May 2017 14:32:16 UTC