
Re: Possible Bug in SPARQL 1.1 Protocol Validator

From: Rob Vesse <rvesse@dotnetrdf.org>
Date: Wed, 19 Dec 2012 11:46:10 -0800
To: Gregory Williams <greg@evilfunhouse.com>
CC: <public-rdf-dawg-comments@w3.org>
Message-ID: <CCF75940.1A9D9%rvesse@dotnetrdf.org>
OK, I will go ahead and do that.

I will still try to get the harness running in my environment to see if I
can track down what the issue is, and will let you know if and when I find
out whether the cause was the test harness or some bug in my implementation.


On 12/19/12 11:44 AM, "Gregory Williams" <greg@evilfunhouse.com> wrote:

>On Dec 19, 2012, at 2:28 PM, Rob Vesse wrote:
>>> Where are you seeing the double encoding? I'm able to take that POST
>>> line, run it, and see this on the server side:
>>> ------------
>>> POST 
>>> ql%2Fdata%2Fdata2.rdf HTTP/1.1
>>> TE: deflate,gzip;q=0.3
>>> Connection: TE, close
>>> Host: localhost:8881
>>> User-Agent: libwww-perl/5.834
>>> Content-Length: 27
>>> Content-Type: application/x-www-form-urlencoded
>>> update=sparql+update+string
>>> ------------
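[Editorial aside: the single-vs-double encoding question can be checked with a short stdlib sketch; the update text below is the placeholder from the capture above, not a real update.]

```python
from urllib.parse import urlencode, parse_qs

# A correctly form-urlencoded POST body encodes the update text exactly
# once: spaces become '+', matching the captured body above.
body = urlencode({"update": "sparql update string"})
print(body)  # update=sparql+update+string

# Decoding once recovers the original text.
print(parse_qs(body)["update"][0])  # sparql update string

# For contrast, a double-encoded body still contains escapes after one
# decode ('=' and '+' survive as literal characters of the inner encoding).
double = urlencode({"update": body})
print(parse_qs(double)["update"][0])  # update=sparql+update+string
```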
>>> Do you believe this is wrongly encoded? Given that there are several
>>> implementations passing the protocol tests using this validator (I know
>>> of ones in perl, java, and c++), I believe the problem may lie
>> It's my best guess at what the problem might be, given that I have
>> eliminated all other obvious explanations to the best of my ability.  To
>> clarify, I have done the following:
>> 1 - Running the command sequences manually through my web UI - All Pass
>> 2 - Running the command sequences in those tests using CURL - All Pass
>> (See 
>> l.sh?at=default)
>> 3 - Running my Java ports of those tests - All Pass
>> 4 - Running unit test versions of the command sequences, i.e. eliminating
>> any protocol interaction and adjusting the commands to add the
>> NAMED statements that the protocol should be adding - All Pass
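[Editorial aside: point 4 refers to the dataset clauses the protocol layer is expected to splice into an update. A minimal sketch of that rewriting, assuming `using-graph-uri`/`using-named-graph-uri` parameters and a naive search for the WHERE keyword; the function name and update text are illustrative, not from the test suite.]

```python
def add_dataset_clauses(update, graph_uris=(), named_graph_uris=()):
    """Inline protocol dataset parameters as USING / USING NAMED clauses.

    Naive sketch: inserts before the first 'WHERE' keyword, which is only
    safe for simple updates like the ones discussed here.
    """
    clauses = "".join(f"USING <{g}>\n" for g in graph_uris)
    clauses += "".join(f"USING NAMED <{g}>\n" for g in named_graph_uris)
    idx = update.index("WHERE")
    return update[:idx] + clauses + update[idx:]

print(add_dataset_clauses(
    "DELETE { ?s ?p ?o } WHERE { ?s ?p ?o }",
    graph_uris=["http://example/g1"],
    named_graph_uris=["http://example/g2"],
))
```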
>> While I could have ported the tests incorrectly once, four times starts
>> to seem a little unlikely; once I could have done something dumb, but
>> believe me, I have spent a lot of time staring at these tests already.
>> So either the test harness is bad or my implementation is bad (or I
>> really suck at copy and paste); given that I can get the tests to run
>> successfully in four other ways, I tend to lean towards some oddity in
>> the test harness.
>I agree that seems strange, but that leaves us with several ways in which
>your system is passing, and several other implementations that all work
>just fine with the harness.
>> It may be double encoding, or perhaps the tests that the harness runs
>> aren't exactly the same as the tests as documented in the ReadMe (which,
>> as far as I can see, is not the case)?
>> Debugging this with the official harness is a PITA for me because I
>> would have to debug my live implementation using the public instance of
>> the test harness, and since I can't get the test harness to install and
>> run locally yet, I am rather stuck.
>> I am not ruling out a bug in my implementation, but it's hard to know
>> where to look, given that all my ported versions of the tests pass and
>> given the difficulty of quickly running the tests in a usable debugging
>> environment for my implementation.
>Well, the implementation report is based entirely on self-reported
>results. It sounds to me like you've done due diligence on ensuring that
>your implementation is in conformance with the spec and the tests as
>written, and works with your client code. At this point, I think I'd
>suggest simply submitting new EARL results indicating all passes and
>marking the tests at issue with [ earl:mode earl:manual ].
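[Editorial aside: a sketch of generating such an EARL assertion as Turtle text; the subject and test IRIs below are placeholders, not actual results, and prefix declarations are omitted for brevity.]

```python
def earl_manual_pass(subject_iri, test_iri):
    """Return a minimal EARL assertion marked earl:mode earl:manual."""
    return (
        "[] a earl:Assertion ;\n"
        f"   earl:subject <{subject_iri}> ;\n"
        f"   earl:test <{test_iri}> ;\n"
        "   earl:mode earl:manual ;\n"
        "   earl:result [ a earl:TestResult ; earl:outcome earl:passed ] .\n"
    )

print(earl_manual_pass("http://example/my-impl", "http://example/tests#post-update"))
```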
Received on Wednesday, 19 December 2012 19:47:27 UTC
