RE: [Fwd: I-D ACTION:draft-decroy-http-progress-00.txt]

S. Mike Dierken wrote:
 >
 > Couple questions about the draft:
 >
 >"Frequently problems occur where users of client agents give up 
waiting for
 >visible progress."
 >When I read this, I first thought the issue was not displaying 
progress of
 >the upload itself - but it seems that the issue is with waiting for an
 >indication of a response, or an indication that a response is being 
worked
 >on. Is that correct?

Yes, that's correct.  The problem here is largely a human one - people don't
like waiting, especially if they don't know why or have any idea of how long
the wait will be.  So this is purely to provide some visible feedback to the
person at the screen about what is going on, and why there is a delay.

It's quite a different issue to the one of flow-control also discussed in
the draft.

 >"There is a clear need for upstream agents to be able to provide timely
 >progress notifications to downstream users [...]"
 >Later in the document, an example of multiple back-and-forth messages is
 >given - could the user agent indicate that this back-and-forth messaging
 >is actually a sign of progress?

That example was an example of the flow-control problem rather than the
lack-of-visible-progress problem.  The progress notifications header was
intended to prevent user timeouts (users giving up) in scenarios where a
response to the user will be a long time coming, e.g.:

a. logging into a slow IMAP webmail server with many folders, which can
take a long time before any response is visible (my webmail takes 4
minutes);
b. generation of large reports, where generation takes a long time before
any data can be sent;
c. proxies scanning content;
d. SOAP requests that take a long time to process;
e. and so on.
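
For illustration, something like the following exchange is what this is
aiming at (the "Progress" header here is purely hypothetical syntax, not
necessarily what the draft specifies; 102 Processing is borrowed from
WebDAV, RFC 2518):

   C: GET /reports/annual HTTP/1.1
   C: Host: example.com
   C:
   S: HTTP/1.1 102 Processing          (interim response, sent early)
   S: Progress: generating report, 10% complete
   S:
   S: HTTP/1.1 102 Processing          (sent again as work proceeds)
   S: Progress: generating report, 80% complete
   S:
   S: HTTP/1.1 200 OK                  (final response once ready)
   S: Content-Type: text/html
   S: Content-Length: 1048576
   S:
   S: ...body...

The user agent can then surface each interim notification to the person at
the screen instead of showing a stalled connection.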

 >
 >Much of the document describes a single use-case dealing with
 >authentication and NTLM specifically. Are there other specific situations
 >other than authentication that can be described? Are there other styles of
 >authentication other than NTLM that would benefit from a solution?

It's easy to demonstrate with Basic auth as well.  The number of retries is
slightly lower, but still bad.
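
To make the failure mode concrete, here is a minimal sketch with Basic
(host, sizes and credentials invented).  The whole body is transmitted
before the 401 arrives, and must then be transmitted again:

   C: POST /upload HTTP/1.1
   C: Host: example.com
   C: Content-Length: 104857600        (100MB body follows)
   C:
   C: ...100MB of body...
   S: HTTP/1.1 401 Unauthorized
   S: WWW-Authenticate: Basic realm="upload"

   C: POST /upload HTTP/1.1            (retry, with credentials this time)
   C: Host: example.com
   C: Authorization: Basic dXNlcjpwYXNz
   C: Content-Length: 104857600        (the same 100MB body, resent)
   C:
   C: ...100MB of body...
   S: HTTP/1.1 200 OK

With Basic that is one wasted upload; with NTLM's multi-leg handshake it is
two or more.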

 >
 >What other approaches have been considered?

Progress indication
-------------------

The approaches depend on the exact situation.  For example, for AV scanning
at a gateway, there are two common approaches taken by proxy vendors:

1. Drip-feed a proportion of the resource through to the client whilst it
is being downloaded, and abort the rest of the transfer if the scanning
shows a virus.

2. Redirect the browser to a "patience page", with a refresh set, which
shows updates and progress, and finally a link to download the resource.
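
On the wire, the second approach typically looks something like this (URLs
invented; the Refresh header is a de-facto browser convention rather than a
standard):

   C: GET /big-download.zip HTTP/1.1
   C: Host: example.com
   C:
   S: HTTP/1.1 302 Found                (proxy intercepts the request)
   S: Location: http://proxy.local/patience?id=1234

   C: GET /patience?id=1234 HTTP/1.1    (browser follows the redirect)
   S: HTTP/1.1 200 OK
   S: Refresh: 5                        (page re-polls every 5 seconds)
   S:
   S: ...HTML showing scan progress, and eventually a download link...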

Problems with the first approach are:

a. you can't guarantee it is safe to send even, say, 75% of the resource
through to the client;
b. clients have extreme difficulty diagnosing what went wrong with their
download (they need to view the resource in a binary editor to see any
appended error message), so many clients simply retry many times before
giving up.

Problems with the second approach are:

a. it works OK for downloads which aren't to be presented in a browser, but
not well for scanning of large and complex HTML pages;
b. synchronisation issues - the proxy can't guarantee the redirect will
come down the same connection as the original request;
c. difficulties for automated clients doing downloads, since this handles
the issue outside the protocol level.

For flow-control
----------------

100 Continue is the intended solution to the problem, and whilst I can see
how it could be effective in direct client-server communications, there are
issues with it when there are intermediaries or delays.
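
For reference, the flow RFC 2616 (section 8.2.3) intends is shown below;
the difficulty is that an HTTP/1.0 intermediary may never forward or
generate the interim response, so the client ends up waiting on a timeout
before sending the body anyway:

   C: POST /upload HTTP/1.1
   C: Host: example.com
   C: Expect: 100-continue
   C: Content-Length: 104857600
   C:                                   (client pauses here, body unsent)
   S: HTTP/1.1 100 Continue             (or a final 401/413 to reject early)
   C: ...body...
   S: HTTP/1.1 200 OK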

The alternative proposal, of the client sending a chunked request body, has
other problems, notably:

a. issues when a proxy is connecting to an HTTP/1.0 server: unless it knows
a priori that the server is HTTP/1.1 compliant, it can't send a chunked
body anyway (see the sketch after this list).  Clients are in the same
boat, which is why I think there aren't any (that I have found) that send
chunked request data - they would need to maintain a database of internet
webservers to keep track of which servers support chunking, a fairly low
return on investment.

b. loss of information on which to base policy decisions - unless you can
set the Content-Length field as well?

c. implementation complexity, leading to compatibility issues with
non-compliant clients, servers and intermediaries.  An additional status
code for a client to see is fairly low-impact, compared to servers and
proxies suddenly seeing chunked bodies from clients.
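
To make point (a) concrete, a chunked request body looks like this on the
wire; an HTTP/1.0 server has no notion of Transfer-Encoding, so without a
Content-Length it has no way to tell where the body ends:

   C: POST /upload HTTP/1.1
   C: Host: example.com
   C: Transfer-Encoding: chunked
   C:
   C: 400                               (chunk size in hex: 1024 bytes)
   C: ...1024 bytes of data...
   C: 400
   C: ...1024 bytes of data...
   C: 0                                 (last-chunk: the body is complete)
   C: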

Protocols have had two-signal flow control since year dot: RS-232 had
RTS/CTS, and serial links had XON/XOFF.  The one notable exception is
Ethernet, which uses carrier sense and has associated issues not dissimilar
to those of HTTP: e.g. when you get a collision, you have to resort to
back-off-and-wait tactics.  Ethernet only really works well like this
because it has high bandwidth and low latency, so the collision retry wait
times are very small - something which the internet unfortunately cannot
guarantee.

 >Regarding a connection oriented authentication approach, if the client
 >submitted a request with no body (like a GET) would that be sufficient to
 >establish the credentials that could be used with a larger request later
 >on?

It's not guaranteed to work.
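
The hope would be something like the following (hypothetical trace), but
whether the credentials carry over depends on the scheme being
connection-oriented and on every hop keeping both requests on the same
connection - a pooling proxy can silently break that association:

   C: GET /probe HTTP/1.1               (small, bodyless request)
   S: HTTP/1.1 401 Unauthorized         (NTLM handshake happens over
   ...                                   several legs on this connection)
   S: HTTP/1.1 200 OK                   (connection now authenticated)

   C: POST /upload HTTP/1.1             (large request, SAME connection)
   C: Content-Length: 104857600
   C: ...body...
   S: HTTP/1.1 200 OK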

 >Regarding large message bodies, would using chunked transfer-encoding
 >with a small initial chunk be useful to quickly 'probe' for these chained
 >authentication routes?
 >

Regards

Adrien de Croy

Received on Monday, 12 February 2007 19:51:56 UTC