- From: Gregg Kellogg <gregg@greggkellogg.com>
- Date: Fri, 6 Sep 2013 08:35:10 -0700
- To: Markus Lanthaler <markus.lanthaler@gmx.net>
- Cc: <public-linked-json@w3.org>
On Sep 6, 2013, at 5:58 AM, Markus Lanthaler <markus.lanthaler@gmx.net> wrote:

> On Thursday, September 05, 2013 4:48 PM, Dave Longley wrote:
>>> I think the whole purpose of these tests is to test that an
>>> implementation is capable of making real network requests.
>>
>> I think it's perfectly fine for there to be an abstraction layer between
>> the network and what remote documents are returned. I don't think we're
>> trying to test the network stack here, rather we're trying to ensure
>
> Well not the network stack per se but the whole end-to-end scenario. Most
> algorithmic tests could be described as unit tests whereas these here are
> really functional tests or even integration tests IMO.
>
> It's similar to testing a browser. You can test its rendering engine but
> you certainly also want to be sure that it can issue the right network
> requests to fetch data from a remote server.

This could be done by mocking the appropriate HTTP library to provide the appropriate responses. For everything but the remote-data tests, this is pretty straightforward.

>> that remote documents that are loaded with particular parameters produce
>> appropriate output. Think of it as if a different "HTTP adapter" is used
>> for testing. The test is not for the adapter, it's for what it returns.
>
> Right, we are not testing the adapter in isolation but the whole system
> including the adapter. Every conformant JSON-LD processor must include an
> adapter that is able to make real network requests in order to be called
> conformant. Of course there may be some restrictions (CORS) but at least
> under optimal conditions it must work.

Yes, if the processor outside of a test scaffold can't access remote documents at all, then it can't be conformant.

>>> As such, I think mocking them
>>> wouldn't be a good idea. In practice, you probably run the code in an
>>> environment where it doesn't have network access, but nevertheless it MUST
>>> be able to perform those requests.
>>
>> I don't think that we should say that a processor with no access to
>> json-ld.org is incapable of being considered compliant. It certainly
>> wouldn't be able to prove its compliance if such access were required.
>> The processor might do everything just fine w/respect to loading remote
>> documents, but be unable to access *particular* remote documents.
>
> Well, we could also test against localhost if you prefer but that doesn't
> mean that it should be allowed to replace the complete HTTP stack with a
> simple mock for these tests. These tests validate a very important aspect
> of the functionality of a JSON-LD processor and thus we need to test the
> HTTP stack as well IMO.

Localhost would not be too useful, and would require substantial changes to the tests for relative IRI resolution.

I think that if a developer wants to mock HTTP to pass the tests in their environment, and is certain enough that this satisfies the requirements of the test suite, then they can make that assertion in an EARL report. These things are entirely self-reporting, so we trust that they are doing the best they can. Other than for the remote-document tests, all mine are done through mocks at an appropriate level.

If we add more information to the remote-data manifest to objectively allow a test runner to simulate the HTTP responses, then I think that would be useful.

As a developer, I really like to be able to control my connections, so that I can continue to operate if my network is down, or I'm someplace where it's not accessible. I've used a Ruby HTTP caching gem in many cases, which has the advantage of actually performing each HTTP request, but locally caching the response to avoid a subsequent GET.

Gregg

> --
> Markus Lanthaler
> @markuslanthaler
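[Editor's note: the "HTTP adapter" idea discussed above can be sketched in plain Ruby. This is a minimal, hypothetical illustration (all class and fixture names are invented, and it uses no real JSON-LD processor API): the processor asks a document loader for remote documents, and a test harness substitutes a stub loader that serves canned responses instead of touching the network.]

```ruby
require 'json'

# What the (hypothetical) processor receives from its loader: the URL,
# the Content-Type the server reported, and the parsed JSON body.
RemoteDocument = Struct.new(:url, :content_type, :document)

# A production loader would use Net::HTTP; this stub serves fixtures
# from an in-memory Hash, so tests run without network access.
class StubDocumentLoader
  def initialize(fixtures)
    # fixtures maps url => [content_type, raw JSON string]
    @fixtures = fixtures
  end

  def load(url)
    type, body = @fixtures.fetch(url) { raise "no fixture for #{url}" }
    RemoteDocument.new(url, type, JSON.parse(body))
  end
end

loader = StubDocumentLoader.new(
  "https://example.org/context.jsonld" =>
    ["application/ld+json", '{"@context": {"name": "http://schema.org/name"}}']
)

doc = loader.load("https://example.org/context.jsonld")
puts doc.content_type  # prints "application/ld+json"
```

A test suite built this way exercises everything above the adapter; per the thread, the remote-document tests would still need a real loader (or manifest-described responses) to claim conformance.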
Received on Friday, 6 September 2013 15:35:40 UTC