Prioritization and retransmission within MEMUX

	Over the last year I have been working on methods of accelerating web 
access, particularly over modem links, and have found that a protocol similar
to MEMUX would be quite useful.  There are, however, a couple of places where
MEMUX differs that I wanted to mention.  (I have reviewed the mailing archive
and the http-ng documents and have not seen these topics discussed except as
noted later in this post - if they have been, please send me a pointer.)

	1) Lack of prioritization (critical)
	2) Layering on top of reliable protocol (potential latency hit)

In brief, my approach is to optimize the ordering of object delivery to
reduce the time to see document text and on-screen images.   A single
multiplexed connection (similar to MEMUX) with strict priorities and
round-robin within same priorities (dissimilar to MEMUX) is used such that
first the HTML is sent, then all visible images on the page (i.e. not
scrolled off the screen) are simultaneously sent and then finally off-screen 
images.  Link bandwidth can be further directed by placing the mouse cursor
in images of interest giving those images highest priority.  This scheduling
policy, especially when combined with progressive image coding and HTML
compression, can result in a significant reduction in the time to deliver
content (very roughly 2-5x, depending on the web page and metrics).  A more complete 
description can be found in an Infocom'99 paper: 
http://infopad.eecs.berkeley.edu/~gilbertj/JMG_INFOCOM99.pdf
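	To make the delivery order above concrete, here is a small sketch
of the priority assignment a browser might make (the names and tier
values are purely illustrative, not from any MEMUX draft):

```python
# Illustrative priority assignment for page objects (lower number =
# higher priority).  The tiers mirror the delivery order described
# above: HTML first, then the image under the mouse cursor, then other
# visible images, then off-screen images.  All names are hypothetical.

PRI_HTML = 0        # document text first
PRI_HOVERED = 1     # image under the mouse cursor, if any
PRI_ONSCREEN = 2    # visible images, sent concurrently
PRI_OFFSCREEN = 3   # everything scrolled out of view

def assign_priority(obj):
    """obj is a dict with 'kind' ('html' or 'image') plus optional
    'visible' and 'hovered' boolean flags."""
    if obj["kind"] == "html":
        return PRI_HTML
    if obj.get("hovered"):
        return PRI_HOVERED
    if obj.get("visible"):
        return PRI_ONSCREEN
    return PRI_OFFSCREEN

page = [
    {"name": "index.html", "kind": "html"},
    {"name": "logo.gif", "kind": "image", "visible": True},
    {"name": "footer.gif", "kind": "image", "visible": False},
]
order = sorted(page, key=assign_priority)
```

Moving the mouse over footer.gif would set its "hovered" flag and
promote it above the other images on the next scheduling decision.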

	While I realize that there are important complexity concerns that 
motivated the decision to layer on top of a reliable protocol as well as to 
omit prioritization, I wanted to make the case for their consideration.

1) Prioritization
	A web page can contain many objects - images, scripts, CSS, etc. - 
however only the on-screen images affect the loading time perceived by the
user.  (As images are scrolled on-screen, they have to be given higher
priority.)  If HTTP/1.0-style loading is used, off-screen images will
be loaded during the same time period as on-screen images and 
thus extend the time to load the on-screen images.  Using MEMUX where all 
objects were transferred simultaneously would be even worse than HTTP/1.0 
from this perspective (for large pages at least).  A similar case can be made 
for prioritizing all HTML (or at least on-screen HTML, if the browser could 
make the distinction) above all images - the HTML is used to format the page
and contains the references to the images, so it is good to load it first.  
Additionally, it is in a sense the "coarsest" layer of the page - with just 
the text, much of the semantic content is available.  Also, tables often 
require all of their HTML before rendering can occur.  

	While a pipelined approach such as HTTP/1.1 effectively prioritizes
HTML over on-screen images over off-screen images, it, of course, has the
disadvantage that image meta-data is delayed, and most of the benefits of
progressive images are lost as well (my paper goes into depth on this).

	What I would propose would be a strict-priority system.
Although the MEMUX protocol would honor priorities, the assignment
of priorities would, of course, be done by the browser for the web
application, and other applications could prioritize in any manner they
please.  The same generic prioritization scheme could also be used to implement
control/data separation, etc.  I just used a simple strict-priority round-robin 
with ties going to whichever channel had sent less data so far, though I am 
sure that many other good schemes exist.  Priority can be sent upon 
opening the channel and through explicit reprioritization messages, though
priority info is not sent in normal data messages.  
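	The scheduler I used can be sketched in a few lines (my own
illustrative code, not a MEMUX API): among all channels with queued
data, pick the one with the best priority, breaking ties in favor of
whichever channel has sent the least so far.

```python
# Sketch of strict-priority scheduling with least-sent tie-breaking
# among channels at the same priority (illustrative, not a MEMUX API).

class Channel:
    def __init__(self, name, priority):
        self.name = name
        self.priority = priority   # lower number = higher priority
        self.sent = 0              # bytes sent so far
        self.pending = []          # queued chunks (bytes objects)

def next_chunk(channels):
    """Pick the next chunk to transmit, or None if all queues are empty."""
    ready = [c for c in channels if c.pending]
    if not ready:
        return None
    # Strict priority first; ties go to whichever channel has sent less,
    # which yields round-robin behavior among equal-priority channels.
    chan = min(ready, key=lambda c: (c.priority, c.sent))
    chunk = chan.pending.pop(0)
    chan.sent += len(chunk)
    return chan.name, chunk
```

With an HTML channel at priority 0 and two image channels at priority
1, the HTML drains completely first and the images then alternate
chunk by chunk.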

	I noticed that the SMUX section under priorities mentioned a potential
problem of denial-of-service attacks with priorities - if someone could 
elaborate on that, I would appreciate it.  I was assuming that a single SMUX
connection would be used by one application or user/entity.  In the case
of the web, one connection between browser/server or browser/proxy.  If
multiple distinct users were to share, it might make sense to do a hierarchical
system, either explicitly or implicitly, by having a master MEMUX session 
multiplexing users (with whatever high-level inter-user prioritization 
was desired); each of the channels of this master MEMUX session could then
be a MEMUX session itself, with its priorities set by the application.  That
way each user/application could control how they allocated their bandwidth,
although they could not affect other users any more than their share of
connections would allow.  Though I have not given this much thought...

2) Intra-layer retransmission and scheduling

	While this one is dangerously close to the first "Out-Of-Scope" item,
perhaps if people find value in it, an in-scope variant could be devised.

	Basically the charter explicitly states that MEMUX is to be
implemented over another reliable protocol such as TCP.  While this is 
certainly good from a complexity standpoint, I believe it has at least 
two disadvantages:

	2a) Introduction of false dependencies (definitely)
	2b) Increased scheduling latencies (probably)

2a) Introduction of false dependencies
	If retransmission takes place at a layer below the MEMUX layer
and a packet is dropped, then this will be detected at that lower layer, and
the lower layer will have to delay the reconstruction of its data stream
until all missing packets are filled in.  Thus all MEMUX "channels" are delayed.
However, if the dropped packet did not contain data from a given channel, then 
there is no need to delay that channel in order to preserve its end-to-end 
channel byte stream.  Thus the layering upon a reliable byte stream makes the 
streams falsely depend on each other in a retransmission sense.  While this will
not affect the total amount of time to complete a bulk transfer on the
channels, it could increase the latency to receive parts of the data.  This
is particularly important for interactive operation over lossy and/or
high-latency links (a prime example being web access over wireless links).  
Additionally, during connection restart after an idle period - precisely when 
latency matters most, since that is when the HTML is transferred - I believe 
that packet loss due to congestion would occur more often as the link capacity 
is being redetermined.  
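	A toy model of this false-dependency effect (my own sketch, not
MEMUX behavior): four packets are sent, and the first, carrying channel
A data, is lost and only arrives last after retransmission.  Under a
single reliable byte stream, the gap stalls everything behind it; with
per-channel reliability, channel B's data is delivered on arrival.

```python
# Head-of-line blocking illustration.  Packet names are channel letter
# plus per-channel sequence number; "A1" is lost on first transmission
# and arrives last, after retransmission.

packets = ["A1", "B1", "B2", "A2"]   # global send order
arrival = ["B1", "B2", "A2", "A1"]   # A1 retransmitted, arrives last

def shared_stream(arrival, sent):
    """One reliable byte stream: data is released only in global send
    order, so the gap left by A1 stalls B1, B2 and A2 behind it."""
    got, out, i = set(), [], 0
    for pkt in arrival:
        got.add(pkt)
        while i < len(sent) and sent[i] in got:
            out.append(sent[i])
            i += 1
    return out

def per_channel(arrival):
    """Independent per-channel reliability: each channel waits only
    for its own losses, so B1 and B2 are delivered immediately."""
    got, out, nxt = set(), [], {}
    for pkt in arrival:
        chan = pkt[0]
        got.add(pkt)
        n = nxt.get(chan, 1)
        while chan + str(n) in got:     # release contiguous channel data
            out.append(chan + str(n))
            n += 1
        nxt[chan] = n
    return out
```

In the shared-stream case nothing at all is delivered until the
retransmission arrives; in the per-channel case only channel A is
delayed, exactly as argued above.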

	The web application over MEMUX is a great example of an interactive
application (i.e. one that needs low latency) which can very naturally take 
advantage of the substream model, since each object or image is independent of 
the others.  If each channel's reliability were kept independent (i.e. MEMUX 
handles retransmission), then it could tolerate a high packet loss rate while 
still making progress.  

	Additionally, it would seem that the remote invocation layer of the
three-layer http-ng would also benefit from error decoupling, as then 
separate remote invocations would be kept independent.  

	The charter states "(2) the multiplexing achieves the same results as 
state sharing between parallel TCP streams (which is not widely available 
today)" - please correct me if I am wrong, but if MEMUX is layered on a 
reliable link, packet losses would cause delay of all streams in the MEMUX
case but not in the parallel state-shared TCP stream case.  However, if 
MEMUX performed retransmission then this would not be a problem.  Additionally, 
MEMUX could gain a performance advantage (as stated in the charter) in terms 
of eliminating new-channel round-trip time, the ability to merge multiple small 
chunks of data on separate substreams into a single IP packet, and possibly 
prioritization (discussed above).  Thus if MEMUX performed its own 
retransmission, it would seem to be a win over multiple TCP connections, while 
if it does not then it would be superior in some aspects yet inferior in others.  

	One could imagine the MEMUX API being independent of whether MEMUX
was layered on top of a reliable transport or performed its own retransmission 
(and, unfortunately, congestion control as well).  Initial implementations
could be layered on TCP while later ones would not be, and thus the same 
applications could be reused, simply obtaining higher performance with the
second version of MEMUX.  

	I should point out that while I have implemented the modified delivery
system with prioritization, I have currently only layered it on top of TCP/IP
and have not run experiments to determine the gains of performing retransmission
(and congestion control, etc.) explicitly.  The added complexity of 
reimplementing much of TCP is certainly something to contend with.

2b) Increased scheduling latencies (probably)

	One problem that I have encountered in implementing interactive
web delivery over TCP is that the latency from when the user requests a change
in prioritization to when they see the new data is the sum of the data in the
network and the data in the sender's (web server's/proxy's) TCP kernel buffers. 
(Actually, the data in the web browser side's kernel buffers would also be an
issue, but typically those are fairly empty.)  This is because the multiplexing
occurs before the enqueuing of the data, and the highest-priority data at the
time that the data is transferred into the TCP buffers might not be the
highest-priority data at the time that the data is sent from the TCP buffers
over the network.  If the multiplexing were deferred into the kernel and 
happened shortly before transmission onto the network, then the delay from 
user action to response would be determined only by the amount of data in the 
network and not by the kernel buffers.  However, I would imagine that there 
could be performance issues associated with performing the multiplexing in 
the kernel.  (Could someone with experience in kernel network code comment?)
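	Short of moving the multiplexing into the kernel, one partial
mitigation (a sketch of a standard socket technique, not anything a
MEMUX draft specifies) is simply to shrink the kernel send buffer, so
that less data sits between the application's priority decision and
the wire, and to disable Nagle so small high-priority writes go out
immediately:

```python
# Keep the priority decision close to the wire by limiting how much
# already-scheduled data the kernel can hold.  With a small SO_SNDBUF,
# send() blocks sooner, so a reprioritization takes effect after only
# a small amount of committed data.  Function name is illustrative.

import socket

def make_low_latency_socket(sndbuf=4096):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Request a small kernel send buffer; the kernel may round it up.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, sndbuf)
    # Disable Nagle so small high-priority writes are not coalesced.
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    return s
```

The trade-off is throughput: a send buffer smaller than the
bandwidth-delay product will keep the pipe from staying full, so this
only shifts the latency/throughput balance rather than removing the
buffering problem outright.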

	Those are just some thoughts - I know that many issues have to be 
traded off - any comments would be welcomed!  Thanks,

	Jeff Gilbert
	UC Berkeley

Received on Thursday, 18 February 1999 12:39:08 UTC