Re: Opera's SSE Tests on GitHub [Was: <different topic>]

On 11/27/2012 10:21 AM, Robin Berjon wrote:
> On 26/11/2012 23:23 , James Graham wrote:
>> On Mon, 26 Nov 2012, SULLIVAN, BRYAN L wrote:

>>> And PHP is only one of the server options (and a poor man's one at
>>> that, AFAICT). Shouldn't we be testing this with different server
>>> environments?
>> Not obviously. I mean as long as we have one server that correctly
>> fulfills its side of the spec bargain that is sufficient for our
>> purposes; the goal is to ensure that the clients implement the spec
>> correctly, not determine the relative merits of PHP vs node.js vs
>> twisted vs whatever for the server side.
> Agreed, we're not trying to assess the correctness of a given
> server-side implementation of SSE (though we could possibly have a TS
> for that, too). So just one server implementation should be enough (so
> long as it behaves predictably and in the way we want).
> That said, there may be a case to be made that Apache+PHP is not the
> best testing environment. The problem with both of these is that they
> add a fair bit of magic and automatic behaviour, and that can get in the
> way of testing and might lead to false positives/negatives. For
> instance, we can't seem to figure out how to get Apache to let one of
> our PHP scripts handle OPTIONS requests, and that's rather annoying.
> It's also difficult to get Apache to return a broken HTTP response,
> which can also be useful in testing. A bare-bones, roll everything up
> yourself test server could help here.

Well, since we seem to be having this conversation here anyway, I may as 
well continue soliciting requirements for such a server. So far I know 
about the following:

* Must be possible to deploy on individual test machines on a variety of 
platforms

* Must be possible to deploy on a central server handling requests from 
multiple test machines

* Must have sane default behaviour for e.g. serving files

* Must be able to produce arbitrary responses (any headers/body, 
including non-conforming combinations) specific to individual tests.

* Must be able to have long-lived server processes for e.g. SSE tests

* Must be able to have per-test per-client state on the server for e.g. 
progress events tests

There are also some nice to haves e.g.

* Make it easier to control response headers than via the generic 
mechanism for full control.
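To make the "arbitrary responses" requirement concrete, here is a minimal 
sketch (the function name and design are purely illustrative, not a 
proposal) of how a roll-your-own Python server can write a deliberately 
non-conforming response verbatim to the socket, which is exactly the sort 
of thing Apache normally refuses to pass through:

```python
import socket
import threading

def serve_broken_response(port=0):
    """Illustrative only: accept one connection and write a
    non-conforming HTTP response verbatim to the socket."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    port = srv.getsockname()[1]

    def run():
        conn, _ = srv.accept()
        conn.recv(65536)  # read (and ignore) the request
        # Missing reason phrase and a bare LF after the status line:
        # intentionally invalid, written byte-for-byte as the test asked.
        conn.sendall(b"HTTP/1.1 200\nContent-Length: 2\r\n\r\nhi")
        conn.close()
        srv.close()

    threading.Thread(target=run).start()
    return port
```

Nothing between the test and the wire rewrites or normalises the bytes, 
which is the whole point of rolling everything up yourself.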

This is a pretty tough list already, but it would be good to know what 
is missing.

My thinking so far is that the solution could look like a custom server 
written in Python (this is a blessed language at Mozilla aiui, seems to 
already be a requirement for Google to run tests, something that Opera 
are happy with, and is as close as anything to meeting the 
cross-platform requirement). Requests could specify Python handlers via 
the path (somewhat like pywebsockets), e.g. 
/tests/foo/bar would use the handler registered for foo, with bar acting 
as extra input. The API would be a set of handler functions like 
write_response(request) (full control over the response), 
write_headers(request) (full control over the headers) and 
write_body(request, response_headers) (full control over the body). Some 
useful built-in functions could be provided so that for simple cases 
writing everything isn't needed. There would also need to be some way to 
do incremental writing, so giving access to a handle to the output 
stream is going to be needed.

To be clear, this is very early thinking, and I need to study what 
existing solutions do, e.g. the JavaScript-based server that Mozilla use. 
So far I haven't worked out a really good way of handling persisting 
state in the face of multiple clients. I also don't know what the best 
high level design for the server is (threads, processes, evented; 
although I am somewhat against evented on the basis that it would 
require non-blocking versions of functions like sleep()). So feedback 
is welcome.

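On the state question, one naive sketch (all names here are assumed, not 
a design) is to key state on a (test id, client id) pair behind a lock, 
which at least works for a threaded server and lets e.g. a progress 
events test count one client's requests without interference from other 
machines hitting the same central server:

```python
import threading

# Per-test, per-client state keyed on (test_id, client_id); the client
# id would presumably come from a query parameter or cookie.
_state_lock = threading.Lock()
_state = {}

def get_state(test_id, client_id, default=None):
    with _state_lock:
        return _state.get((test_id, client_id), default)

def set_state(test_id, client_id, value):
    with _state_lock:
        _state[(test_id, client_id)] = value

def bump_request_count(test_id, client_id):
    # E.g. a handler records how many times this client has fetched
    # the resource, atomically under the lock.
    with _state_lock:
        count = _state.get((test_id, client_id), 0) + 1
        _state[(test_id, client_id)] = count
        return count
```

This obviously doesn't answer the harder question of when such state 
gets cleaned up, or how it would be shared across processes.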
Received on Tuesday, 27 November 2012 10:23:31 UTC