- From: Steinar Bang <sb@dod.no>
- Date: Mon, 12 Nov 2001 16:27:59 -0500 (EST)
- To: Joel Young <jdy@cs.brown.edu>
- Cc: www-lib@w3.org
>>>>> Joel Young <jdy@cs.brown.edu>:

> I have no doubt that wwwlib is a more powerful system. The problem
> for me is it was too complicated and the stream flow was ill
> documented (IMHO).

I agree. At least, how things actually work in real code is badly
documented. However, the _idea_ of stream flows is well documented,
and could serve as a master plan for a libwww rewrite.

> I would insert (or remove) a stream handler and there would be side
> effects all over the place.

I haven't had the side effects, but then what I did was to build the
idea of the stream flow in C++, and hook my own stream handlers up to
MIME types in libwww. What I have seen, though, is things going down
the wrong streams, because of the incomprehensible way streams are
chosen.

> I also got very tired of dealing with the massive global state in
> libwww.

I agree here as well.

> BTW, curl/libxml allows you to parse pages in chunks as they arrive
> also.

Ah, OK.

[snip!]

>> Hmm... I'm wondering how they do such things as keeping an HTTP 1.1
>> connection open across requests?

> I don't know how (cause I haven't read that part of the source), but
> curl does as long as you use the same curl object.

Hm... that means that one would have to build some kind of support to
track the lifetimes of HTTP 1.1 connections across calls to the
libcurl API.

> By the way, when I think about what a www library should be capable
> of, I think of libwww. It is the mark against which the others are
> measured.

I agree. And the overall design and ideas aren't bad. It's just hard
to work around some of the limitations, such as having to hack
central header files to add new HTTP commands and status codes for
WebDAV.
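
To make the stream handler point above a bit more concrete: hooking a
converter up to a MIME type in libwww looks roughly like the sketch
below. I'm writing this from memory, so treat the details with some
suspicion; the "text/xml" binding and the pass-through behaviour are
just for illustration.

    #include "WWWLib.h"

    /* Called by libwww when a response with the matching MIME type
       starts to arrive; the stream it returns receives the body
       data.  A real converter would allocate its own HTStream
       wrapping the target, rather than passing it through. */
    HTStream * MyPresenter (HTRequest * request, void * param,
                            HTFormat input_format,
                            HTFormat output_format,
                            HTStream * target)
    {
        return target;   /* pass-through, for the sake of the sketch */
    }

    /* Somewhere in setup, after library initialization: bind the
       converter into the global conversion list. */
    HTConversion_add(HTFormat_conversion(), "text/xml", "www/present",
                     MyPresenter, 1.0, 0.0, 0.0);

The point of this design is nice: the converter is only a
constructor, and libwww pushes the data through whatever stream it
returns. The problem I describe above is working out _which_
conversion actually gets picked for a given response.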
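
As for the curl/libxml chunk parsing Joel mentions, I would expect
the combination to look something like this: libcurl's write callback
handing each arriving buffer to libxml2's push parser. An untested
sketch (example.com is a placeholder, error checking omitted):

    #include <curl/curl.h>
    #include <libxml/parser.h>

    /* libcurl calls this for each buffer of body data; we feed the
       buffer straight into the push parser. */
    static size_t on_data(void *buf, size_t size, size_t nmemb,
                          void *userp)
    {
        xmlParserCtxtPtr ctxt = (xmlParserCtxtPtr) userp;
        xmlParseChunk(ctxt, buf, (int)(size * nmemb), 0);
        return size * nmemb;
    }

    int main(void)
    {
        curl_global_init(CURL_GLOBAL_ALL);
        CURL *curl = curl_easy_init();
        xmlParserCtxtPtr ctxt =
            xmlCreatePushParserCtxt(NULL, NULL, NULL, 0,
                                    "http://example.com/");

        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, on_data);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, ctxt);
        curl_easy_perform(curl);

        xmlParseChunk(ctxt, NULL, 0, 1);  /* signal end of document */
        /* ctxt->myDoc now holds the parsed tree */
        if (ctxt->myDoc)
            xmlFreeDoc(ctxt->myDoc);
        xmlFreeParserCtxt(ctxt);
        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return 0;
    }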
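
Finally, on connection reuse: if curl keeps the HTTP 1.1 connection
open for the lifetime of the curl object, then the tracking I mention
above amounts to keeping the same easy handle alive across requests.
Something like this (again an untested sketch):

    #include <curl/curl.h>

    int main(void)
    {
        CURL *curl = curl_easy_init();

        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/one");
        curl_easy_perform(curl);   /* opens the connection */

        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/two");
        curl_easy_perform(curl);   /* same host: connection reused */

        curl_easy_cleanup(curl);   /* connection closed here */
        return 0;
    }

So a wrapper library would have to map its own notion of a session
onto the lifetime of the CURL handle, rather than creating and
destroying a handle per request.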
Received on Tuesday, 13 November 2001 03:00:00 UTC