- From: Joel Young <jdy@cs.brown.edu>
- Date: Mon, 05 Nov 2001 10:32:35 -0500
- To: www-lib@w3.org
- cc: "Bang, Steinar" <Steinar.bang@tandbergtv.com>, jdy@cs.brown.edu
> From: "Bang, Steinar" <Steinar.bang@tandbergtv.com> > Date: Mon, 5 Nov 2001 09:36:56 +0100 > I just gave libcurl a look. If I understand it > correctly, libcurl is blocking? Ie. the request > for an URL doesn't return until the data or an > error response is received. Is this correct? Yup. I believe so. > My primary reasons for using libwww, was > 1. its event-driven single thread operation. > 2. the stream building response, which let the > application parse and use data before they are > completely received I have no doubt that wwwlib is a more powerful system. The problem for me is it was too complicated and the stream flow was ill documented (IMHO). I would insert (or remove ) a stream handler and there would be side effects all over the place. I also got very tired of dealing with the massive global state in libwww. BTW, curl/libxml allows you to parse pages in chunks as they arrive also. Everytime I had a question it would go unanswered on the list. Active libwww development has died and I don't think it is fit for new projects. It needs to be shattered and rebuilt as a large set of maximally independent components rather than a monolithic/incestuous application framework. I had a problem with the URL parsing and escaping code in libxml and within 36 hours I had a patch into CVS and in 3 days a new fixed version was released. Do you see this happening with libwww? > Hmm... I'm wondering how they do such things as > keeping an HTTP 1.1 connection open across requests? I don't know how (cause I haven't read that part of the source), but curl does as long as you use the same curl object. By the way, when I think about what a www library should be capable of I think of libwww. It is the mark which the others are measured against. Joel