
Re: Upload negotiation

From: Henrik Nordstrom <hno@squid-cache.org>
Date: Tue, 08 Apr 2008 16:06:38 +0200
To: Adrien de Croy <adrien@qbik.com>
Cc: HTTP Working Group <ietf-http-wg@w3.org>
Message-Id: <1207663598.31831.204.camel@HenrikLaptop>

On Wed 2008-04-09 at 00:54 +1200, Adrien de Croy wrote:
> > But in 30 years I expect that HTTP has been pretty much replaced by
> > something new, more targeted for interactive transfer of large amounts
> > of data.

> heh - sounds like FTP...

My thinking was more along the lines of one of the streaming protocols,
allowing seeking etc. without losing the session, and with low
initiation overhead to deal with the problem of increasing bandwidth but
pretty much constant RTT.

Current HTTP beats FTP in all respects, and FTP is likely to decline
considerably over time.

> > connection. Digest is one source of inspiration on how a such session
> > oriented authentication scheme may look like without tying it to the
> > transport.
   
> I've been looking more into this... it does have some difficulties, 
> seemingly keying on the URI, and requiring the server to maintain an 
> independent cache of credential handles.

Yes, it needs a cache of the session keying material just as Digest
does.

> All doable, but quite different from session-oriented.  It's easy to see 
> why session-oriented was chosen.  Makes association of interim / 
> temporary credential handles with a user trivial in most cases.

What do you refer to here by "session-oriented"?

> I guess that's the thing.  100-continue -> timeout ->start sending is 
> the optimistic option.

Yes. You start out by assuming 100 Continue is supported, and then fall
back gracefully to HTTP/1.0 behaviour if you suspect it is not.

The less we see of HTTP/1.0 in the relevant traffic, the less you need
to fall back on timeout..
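The optimistic strategy above can be sketched as a small decision loop. This is an illustrative stand-in, not a real client: `read_status` is a hypothetical non-blocking read of whatever status line the server has sent so far.

```python
import time

def await_continue(read_status, timeout=2.0, poll=0.1):
    """Optimistically wait for an interim 100 Continue.

    `read_status` returns the status code received so far, or None if
    nothing has arrived yet (a stand-in for a non-blocking read on the
    connection).  Returns "send-body" when 100 arrives or when the
    timer expires (the graceful HTTP/1.0 fallback), or "abort" if a
    final 4xx/5xx arrives before the body was sent.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = read_status()
        if status is None:
            time.sleep(poll)        # nothing yet; keep waiting
            continue
        if status == 100:
            return "send-body"      # server explicitly said Continue
        if status >= 400:
            return "abort"          # final error; don't send the body
    return "send-body"              # timeout: assume a 1.0 hop ate the 100
```

The fallback branch is what makes the scheme safe against HTTP/1.0 intermediaries that silently swallow the Expect handshake.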

> Browser authors needs to work on heuristics then.  Basing timing from 
> RTT to a local proxy when a remote server may be poorly connected means 
> timeouts will happen more often.

Indeed.

If the client has seen 100 Continue for the server then it SHOULD be
very aggressive about wanting to see it in future.

If it hasn't seen 100 Continue from the server then it should still
start out pretty aggressive about wanting to see 100 Continue, and tune
future requests to adapt if it suspects 100 Continue is not supported.
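One way to realise that tuning is a small per-server heuristic cache: servers known to send 100 Continue get a long wait, unknown servers a moderate one, and servers that keep timing out progressively shorter ones. This is a sketch of the idea only; the class, method names, and back-off factors are invented for illustration.

```python
class ContinueHeuristic:
    """Per-server tuning of the Expect: 100-continue wait (illustrative)."""

    def __init__(self, base=2.0, floor=0.25):
        self.base = base            # starting wait in seconds
        self.floor = floor          # never wait less than this
        self.seen_100 = set()       # servers known to send 100 Continue
        self.timeouts = {}          # consecutive timeouts per server

    def wait_for(self, server):
        """How long to wait for 100 Continue before optimistically sending."""
        if server in self.seen_100:
            return self.base * 4    # be very aggressive about waiting
        misses = self.timeouts.get(server, 0)
        return max(self.floor, self.base / (2 ** misses))

    def record(self, server, got_100):
        """Feed back the outcome of a request to tune future ones."""
        if got_100:
            self.seen_100.add(server)
            self.timeouts.pop(server, None)
        else:
            self.timeouts[server] = self.timeouts.get(server, 0) + 1
```

The asymmetry matches the text: one observed 100 Continue locks the server in as supporting it, while timeouts only gradually shorten the wait.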

> I still feel uncomfortable about aborting without explicitly marking it 
> as such.  It may be implicit, but if not explicitly marked, that 
> precludes any possible future use-case where a 4xx response could 
> possibly not invalidate the data.

That should be implemented with a 1xx message, followed by a 4xx when
the data has been received.

> Could make a SHOULD level requirement that user agents sending chunked 
> requests that choose to abort them SHOULD include a "0 ; aborted" chunk 
> extension.

I would propose making it a MAY for 1.1, and a SHOULD or even MUST for
1.2.
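On the wire, the proposal amounts to attaching a chunk extension to the zero-size last chunk of the request body. Note that "aborted" here is the extension under discussion in this thread, not a registered one; the framing itself follows the chunked transfer coding grammar.

```python
def last_chunk(aborted=False):
    """Build the zero-size last chunk ending a chunked request body.

    With aborted=True, the proposed "aborted" chunk extension is
    attached, letting the recipient distinguish a deliberate abort
    from a normal end of body.  (Simplified: no trailer fields.)
    """
    ext = b"; aborted" if aborted else b""
    return b"0" + ext + b"\r\n\r\n"
```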
   
> OK, I'd be happy with anything that can tell me the length.  I'm not 
> aware of why it can't be Content-Length for client requests (it's more 
> obvious to me when it comes to server responses), but if it could be 
> converted to a Content-Length header for relaying upstream that solves 
> spooling and flow-control issues.

It can't be Content-Length due to interactions with current
implementations that look for Content-Length to determine the message
length, especially if there are HTTP/1.0 hops in the chain.

When using chunked encoding, Content-Length != message length.
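The inequality is easy to see by framing a body with the chunked coding: the chunk-size lines, CRLFs, and last chunk mean the bytes on the wire are always longer than the body itself, so a Content-Length equal to the body length would mis-delimit the message for any hop that trusted it. A minimal encoder (simplified, no trailers):

```python
def chunk_body(data, chunk_size=8):
    """Frame `data` with the chunked transfer coding (no trailers)."""
    out = bytearray()
    for i in range(0, len(data), chunk_size):
        piece = data[i:i + chunk_size]
        # chunk-size in hex, CRLF, chunk data, CRLF
        out += b"%x\r\n" % len(piece) + piece + b"\r\n"
    out += b"0\r\n\r\n"             # zero-size last chunk
    return bytes(out)

body = b"hello, chunked world"
wire = chunk_body(body)
# The framed message is longer than the body, so len(body) as a
# Content-Length would not describe the message on the wire.
assert len(wire) > len(body)
```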

> so we still have a long way to go with browser behaviour then.

as always..

> I guess so, or at least to retry without chunking.  I guess the 411 
> would come from the HTTP/1.0 server?  How to tell that the server is 
> HTTP/1.0 if it doesn't recognise Transfer-Encoding, and just sees chunk 
> wrappers as part of the content anyway?  Would it be purely lack of 
> Content-Length?  Or would you have to rely on cached knowledge?

Most 1.0 servers probably respond with a 400, or maybe a 500 or some
other error, on such requests, depending on how they fail.
   
> I guess this means that proxies have to start caching details of servers 
> then.

Yes.

> Problem is that caching things like HTTP versions etc unlike 
> normal caching in HTTP doesn't have the benefit of explicit cache 
> support - specified expiries, dates, etc.  There's nothing governing 
> that caching.

And it's not very likely that servers suddenly downgrade. How long, or
how many servers, to keep track of is an implementation detail, trusting
implementers to use common sense.
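Since there is no protocol-level expiry for this kind of knowledge, a cache of observed server versions has to pick its own ageing policy. A minimal sketch, with an arbitrary TTL and the no-downgrade assumption from above baked in (the class and its parameters are invented for illustration):

```python
import time

class ServerVersionCache:
    """Remember the highest HTTP version seen per server (illustrative).

    Nothing in the protocol governs this caching, so entries simply
    age out after `ttl` seconds -- an arbitrary implementation choice.
    `now` is injectable to make the ageing testable.
    """

    def __init__(self, ttl=3600, now=time.monotonic):
        self.ttl = ttl
        self.now = now
        self.entries = {}           # server -> (version, timestamp)

    def record(self, server, version):
        old = self.entries.get(server)
        # Servers are unlikely to downgrade, so only record upgrades.
        if old is None or version > old[0]:
            self.entries[server] = (version, self.now())

    def version(self, server):
        entry = self.entries.get(server)
        if entry is None or self.now() - entry[1] > self.ttl:
            return None             # unknown or stale: probe optimistically
        return entry[0]
```

Returning None for stale entries pushes the proxy back to the optimistic probe, so a wrong cache entry can only cost one extra round trip.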

Regards
Henrik
Received on Tuesday, 8 April 2008 14:15:18 GMT
