
Re: Possible Bug in SPARQL 1.1 Protocol Validator

From: Gregory Williams <greg@evilfunhouse.com>
Date: Wed, 19 Dec 2012 14:44:03 -0500
Cc: <public-rdf-dawg-comments@w3.org>
Message-Id: <4A5CE5AE-42B8-4061-8ED5-39841A165EB3@evilfunhouse.com>
To: Rob Vesse <rvesse@dotnetrdf.org>
On Dec 19, 2012, at 2:28 PM, Rob Vesse wrote:

>> Where are you seeing the double encoding? I'm able to take that POST
>> line, run it, and see this on the server side:
>> ------------
>> POST /?using-graph-uri=http%3A%2F%2Fkasei.us%2F2009%2F09%2Fsparql%2F%20data%2Fdata1.rdf&using-named-graph-uri=http%3A%2F%2Fkasei.us%2F2009%2F09%2F%20sparql%2Fdata%2Fdata2.rdf HTTP/1.1
>> TE: deflate,gzip;q=0.3
>> Connection: TE, close
>> Host: localhost:8881
>> User-Agent: libwww-perl/5.834
>> Content-Length: 27
>> Content-Type: application/x-www-form-urlencoded
>>
>> update=sparql+update+string
>> ------------
>> Do you believe this is wrongly encoded? Given that there are several
>> implementations passing the protocol tests using this validator (I know
>> of ones in perl, java, and c++), I believe the problem may lie elsewhere.
> It's my best guess at what the problem might be, given that I have
> eliminated all other obvious explanations to the best of my ability.  To
> clarify, I have done the following:
> 1 - Running the command sequences manually through my web UI - All Pass
> 2 - Running the command sequences in those tests using CURL - All Pass
> (See
> https://bitbucket.org/dotnetrdf/sparql11-protocol-validator/src/tip/protocol.sh?at=default)
> 3 - Running my Java ports of those tests - All Pass
> 4 - Running unit test versions of the command sequences, i.e. eliminating
> any protocol interaction and adjusting the commands to add the USING/USING
> NAMED statements that the protocol should be adding - All Pass
> While I could have ported the tests incorrectly once, four times starts to
> seem a little unlikely; once I could do something dumb, but believe me,
> I've spent a lot of time staring at these tests already.  So either the
> test harness is bad or my implementation is bad (or I really suck at copy
> and paste); given that I can get the tests to run successfully in four
> other ways, I tend to lean towards some oddity in the test harness.
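For comparison, here is a minimal sketch (Python standard library only; the URIs and update string are the test values from the dump quoted above) of what a single round of form encoding produces for that request:

```python
from urllib.parse import urlencode

# Build the query string and body for a SPARQL 1.1 Update protocol
# request, applying percent-encoding exactly once to each graph URI.
query = urlencode({
    "using-graph-uri": "http://kasei.us/2009/09/sparql/data/data1.rdf",
    "using-named-graph-uri": "http://kasei.us/2009/09/sparql/data/data2.rdf",
})
body = urlencode({"update": "sparql update string"})

print(query)  # using-graph-uri=http%3A%2F%2Fkasei.us%2F2009%2F09%2Fsparql%2Fdata%2Fdata1.rdf&...
print(body)   # update=sparql+update+string
```

The body matches the `update=sparql+update+string` payload in the capture, which is what a harness doing a single, correct round of encoding would send.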

I agree that seems strange, but that leaves us with several ways in which your system is passing, and several other implementations that all work just fine with the harness.
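If double encoding were happening, it would be easy to spot in a capture like the one quoted above; a quick sketch of the difference, using one of the test graph URIs:

```python
from urllib.parse import quote, unquote

uri = "http://kasei.us/2009/09/sparql/data/data1.rdf"

once = quote(uri, safe="")    # one round: ':' -> %3A, '/' -> %2F
twice = quote(once, safe="")  # double encoding: every '%' becomes '%25'

# A correctly encoded parameter decodes straight back to the URI;
# a double-encoded one decodes only to the single-encoded form.
assert unquote(once) == uri
assert unquote(twice) == once
```

A double-encoded request line would show `%253A` and `%252F` in place of `%3A` and `%2F`, which the capture above does not.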

> It may be double encoding, or perhaps the tests that the harness runs
> aren't exactly the same as the tests documented in the ReadMe (though as
> far as I can see that is not the case).
> Debugging this with the official harness is a PITA for me because I can't
> debug my live implementation using the public instance of the test
> harness, and since I can't get the test harness to install and run
> locally yet, I am rather stuck.
> I am not ruling out a bug in my implementation but it's hard to know where
> to look given all my ported versions of the tests pass and the difficulty
> of quickly running the tests in a usable debugging environment for my
> implementation.

Well, the implementation report is based entirely on self-reported results. It sounds to me like you've done due diligence on ensuring that your implementation is in conformance with the spec and the tests as written, and works with your client code. At this point, I think I'd suggest simply submitting new EARL results indicating all passes and marking the tests at issue with [ earl:mode earl:manual ].
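An assertion of that shape might look like the following Turtle sketch (the subject and test URIs are placeholders, not from this thread; only the `earl:mode earl:manual` triple is what was suggested above):

```turtle
@prefix earl: <http://www.w3.org/ns/earl#> .

# Hypothetical subject/test URIs for illustration.
[] a earl:Assertion ;
   earl:subject <http://example.org/my-sparql-implementation> ;
   earl:test <http://example.org/protocol-tests#update-using-graph-uri> ;
   earl:mode earl:manual ;
   earl:result [ a earl:TestResult ; earl:outcome earl:passed ] .
```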

Received on Wednesday, 19 December 2012 19:44:26 UTC
