RE: [ISSUE-32] Implications of updates on protocol, regarding HTTP methods



> -----Original Message-----
> From: public-rdf-dawg-request@w3.org [mailto:public-rdf-dawg-request@w3.org]
> On Behalf Of Steve Harris
> Sent: 29 July 2009 23:34
> To: Paul Gearon
> Cc: public-rdf-dawg@w3.org
> Subject: Re: [ISSUE-32] Implications of updates on protocol, regarding HTTP
> methods
> 
> On 29 Jul 2009, at 20:48, Paul Gearon wrote:
> >> I personally feel that it would be a serious mistake to encourage
> >> SPARQL/Query requests to be sent as POST requests, it confuses
> >> caches (a
> >> selling point of SPARQL in enterprise environments) and gives a false
> >> impression of the scope of a SPARQL/Query operation.
> >
> > This refers back to the original SPARQL/Protocol, which has already
> > defined this behavior. It may be poor practice, but it's in the spec,
> > and I doubt there is the desire to change this. Looking at when to use
> > GET vs POST [1], then I'd like to see SPARQL/Query all done as GET,
> > and SPARQL/Update done as POST. But I don't get to change
> > SPARQL/Query, so I'm trying to work around it.
> 
> Well, the original spec is worded pretty strongly, so I think we're
> covered there. It seems clear to me that it's only OK to use POST if
> the client or server doesn't have an adequate GET implementation.
> 
> > As for GET requests that are too long, I'd prefer to see GET with a
> > message body. HTTP 1.1 does not prohibit this, though it is not common
> > practice. There aren't many conversations around it, but [2] seemed to
> > cover some of the issues.
> 
> There are HTTP servers that can't handle arbitrary GET parameters, but
> can handle GET with a body? That seems a bit of an odd implementation
> priority.
> 
> The only client I've encountered that has GET length limitations is
> XMLHTTPRequest in IE6, which I think is limited to 4kb. I've not
> really used a huge number though.

http://support.microsoft.com/kb/208427

2Kbytes (2,083 characters, to be exact).

> 
> For the record, we regularly issue 50+kb SPARQL GET queries using
> libcurl/fopen() from PHP, and whatever's available in Perl. There
> seems to be a widely held opinion that GET is often length limited,
> but I don't think that's been true for a long time.

Jetty has a limit on the length of the URL; it's about 30K (I had a report of this recently).

More seriously, proxies can have limits too, and because a proxy is invisible to the client, the failure only shows up when going long haul.
Squid caching proxies have an internal limit on the length of the URL:

  defines.h:#define MAX_URL  8192

in the latest code (3.1.0.12).

Given the concerns about buffer-overrun attacks, fixed upper limits are going to be around for the foreseeable future.

One thing to consider is that the web server in question may have many other applications running on it, so we can't just assume that a SPARQL endpoint can be configured in a specific way, such as increasing the URL length limit across the whole web server.
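
To make this concrete, here is a minimal client-side sketch (illustrative only, not from any particular implementation) of the pattern being discussed: use GET for SPARQL/Query, and fall back to POST only when the encoded URL would exceed a conservative limit. The endpoint URL and the 2,000-character threshold are assumptions, chosen to stay under the IE and Squid limits mentioned above.

  # Minimal sketch; ENDPOINT and URL_LIMIT are illustrative assumptions.
  import urllib.parse
  import urllib.request

  ENDPOINT = "http://example.org/sparql"  # hypothetical endpoint
  URL_LIMIT = 2000  # conservative: under IE's 2,083 and Squid's 8,192

  def run_query(query):
      params = urllib.parse.urlencode({"query": query})
      url = ENDPOINT + "?" + params
      if len(url) <= URL_LIMIT:
          # GET: cacheable, and advertises a read-only operation
          req = urllib.request.Request(url)
      else:
          # POST: loses cacheability, but avoids URL-length limits en route
          req = urllib.request.Request(
              ENDPOINT,
              data=params.encode("ascii"),
              headers={"Content-Type": "application/x-www-form-urlencoded"},
          )
      with urllib.request.urlopen(req) as resp:
          return resp.read()

Whether a single threshold is the right policy is debatable; per the numbers above, whichever limit the endpoint or any intermediary enforces is the one that bites first.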

 Andy



> 
> - Steve

Received on Monday, 3 August 2009 07:35:24 UTC