RE: HTTP/1.1 : Chunking

-> -----Original Message-----
-> From: Adrien de Croy [mailto:adrien@qbik.com]
-> Sent: Thursday, January 29, 1998 11:37 PM
-> To: Ben Laurie
-> Cc: http-wg@cuckoo.hpl.hp.com
-> Subject: Re: HTTP/1.1 : Chunking
-> 
-> 
-> 4.2 Message headers
-> 
-> The allowing of multiple headers if their values form a single list
-> seems a redundant complexity.  Unless you are catering for people
-> debugging by hand
[...snip...]
-> 
-> Personally I would remove allowing this from the spec.  I don't
-> believe any client software would implement it (unless they are
-> programming in Pascal or something with a string length
-> limitation - UGH!).
-> 
Does the spec allow you to collapse multiple headers into
a single line? If so, you can put them in your name/value
list just the same.
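
For what it's worth, a minimal sketch of that kind of folding
(header names and values here are only examples):

    # Fold repeated headers whose values form a comma-separated list
    # into one line, preserving the order in which names first appear.
    def fold_headers(headers):
        """headers: list of (name, value) pairs in received order."""
        folded = []
        index = {}
        for name, value in headers:
            key = name.lower()
            if key in index:
                pos = index[key]
                folded[pos] = (folded[pos][0], folded[pos][1] + ", " + value)
            else:
                index[key] = len(folded)
                folded.append((name, value))
        return folded

    print(fold_headers([("Accept", "text/html"), ("Accept", "text/plain")]))
    # [('Accept', 'text/html, text/plain')]
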
-> 
-> Proxy Authentication.
-> 
-> There are some problems with the proposed method where it relates
-> to chained proxies, or a "use proxy" response.  Basically it may be
-> necessary for a client to include many Proxy-Authorization fields
-> in a request, since it is possible that only the client holds the
-> credentials of the proxies involved.  Otherwise ALL agents in the
-> chain (including the end server) MUST support persistent
-> connections.
-> 


-> If it needs multiple authorisation fields, how can each proxy tell
-> which one is intended for it?  It would have to try them all for a
-> match, strip it out, and pass the request on.
-> 
Huh? (Put 305 aside for the moment.)
If the client is going through multiple chained (auth-requesting)
proxies, then it must include credentials for each of them.
As a side effect, each credential is readable by all
proxies in front of it.
Each proxy knows which credential set is for it
by looking for the realm value that it generated.
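
Roughly, that matching might look like this (just a sketch; it assumes
a scheme such as Digest whose credentials carry a realm parameter, and
the realm string is made up):

    import re

    # Pick out the Proxy-Authorization credentials intended for this
    # proxy by matching the realm it advertised in its challenge.
    MY_REALM = "firewall.example.com"    # hypothetical realm value

    def credentials_for_me(proxy_auth_values, my_realm=MY_REALM):
        """proxy_auth_values: values of the Proxy-Authorization fields."""
        for value in proxy_auth_values:
            m = re.search(r'realm="([^"]*)"', value)
            if m and m.group(1) == my_realm:
                return value            # this credential set is ours
        return None                     # nothing for us; challenge again
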

As a former proxy implementor, I am happy to agree that
the proxy-auth system is a bit ugly and could use some work.
A proposal to include more specific realm info (for the chained
case) was rejected because it could cause backward compat problems.

-> Multiple byte-range requests?
->
-> Normally I guess these would only be made by caches, as clients
-> would generally always need the end of a file, rather than chunks
-> out of the middle of it.
Not really.  If a client is doing pipelined range requests,
it may very well retrieve bits of an entity at a time,
i.e. bytes
1) 1-100 of entity A
2) 1-100 of entity B
3) 1-100 of entity C
4) 101-200 of entity A
5) 101-200 of entity B
6) 101-200 of entity C
to effect a sort of multiplex.  This is critical for the
user experience to perform well with only one
or two connections.
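
As a rough sketch of what I mean (host and paths are made up; Range
offsets are zero-based on the wire, so "bytes 1-100" becomes 0-99):

    # Interleave byte-range requests across several entities so each
    # resource arrives a piece at a time over one pipelined connection.
    entities = ["/a.gif", "/b.gif", "/c.gif"]        # hypothetical paths
    ranges = [(0, 99), (100, 199)]                   # bytes 1-100, 101-200

    pipeline = []
    for first, last in ranges:
        for path in entities:
            pipeline.append(
                "GET {} HTTP/1.1\r\n"
                "Host: www.example.com\r\n"
                "Range: bytes={}-{}\r\n"
                "\r\n".format(path, first, last)
            )

    # The requests are written back-to-back on a single persistent
    # connection; the responses come back in the same order.
    print("".join(pipeline))
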

-> Are overlapping byte ranges meant to be condensed by the server,
-> or supplied as requested?
-> 
-> Since the protocol is changing, why not make it REAL easy to
-> validate cache files by assigning every cachable resource a
-> globally unique identifier which changes with modifications to the
-> resource.  Then a validation would simply involve sending the
-> identifier, say in the form of URL: File Version, or system file
-> time.  This would completely remove the complexity of validation
-> with servers, and caches could still use freshness concepts for
-> efficiency.  It would also remove the dependency on synchronised
-> clocks etc.
->
Don't ETags meet this requirement?
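
Revalidating with an entity tag is just echoing it back (a sketch with
a made-up host, path and tag value):

    import http.client

    # Replay the stored entity tag; a 304 means the cached copy is
    # still valid, with no synchronised clocks involved.
    conn = http.client.HTTPConnection("www.example.com")
    conn.request("GET", "/page.html",
                 headers={"If-None-Match": '"xyzzy"'})
    resp = conn.getresponse()
    if resp.status == 304:
        print("cached copy is still fresh")
    else:
        body = resp.read()   # new entity; store it with resp.getheader("ETag")
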

Further, be careful saying "since the protocol is changing"...
No, the protocol isn't necessarily changing.
We are bound that all changes must be backward compatible.
Additionally, we are past the time for new features to be added
to the protocol (at least in v1.1).  You might look to the extensions
working group for a mechanism to plug in new features...
That list is ietf-http-ext@w3.org

---
Josh Cohen <josh@microsoft.com>
Program Manager - Internet Technologies 

Received on Friday, 30 January 1998 03:26:55 UTC