Re: Knowing which tests are in the repository

On Thursday, August 22, 2013 at 11:41 PM, Dirk Pranke wrote:
> On Thu, Aug 22, 2013 at 2:27 PM, Kevin Kershaw <K.Kershaw@cablelabs.com> wrote:

> > 2) Do we want to require the use of a manifest as a source-level object in the repo (rather than something that could be generated via a build process)?
> > 
> > Speaking of metadata, our experience is that keeping metadata inside the test files (e.g., the html files) themselves is the best way to keep that test and metadata in sync. If a separate manifest file is needed to support test runtime, then that should be generated from the test files with an automated tool. Candidate metadata we've seen mentioned in this thread or would like to propose are: Test ID (file name); test timeout; test type (harness/ref/manual/etc); nontest files (e.g., helper files); spec references.
I strongly agree with Dirk here: manifests get out of sync all the time.

The list of metadata you mention can pretty much be inferred from a combination of the test content and the filename, which is what we should do.

Let's avoid the situation where we have a reftest whose user-supplied metadata claims it is a manual test.
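
To make that concrete, here's a rough sketch of the kind of inference I have in mind. The filename suffix and content markers below are just assumptions for illustration, not a description of any existing tooling:

    import os
    import re

    def infer_test_type(path):
        """Guess a test's type from its filename and content (illustrative only)."""
        name = os.path.basename(path)
        if name.endswith("-manual.html"):       # assumed naming convention for manual tests
            return "manual"
        with open(path, encoding="utf-8") as f:
            content = f.read()
        if re.search(r'rel=["\']?match["\']?', content):  # assumed reftest link marker
            return "reftest"
        if "testharness.js" in content:          # script-based test includes the harness
            return "testharness"
        return "unknown"

The point being that the test file itself is the source of truth, and anything a runner needs can be derived from it mechanically.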
> > 3) Do we want to allow tests that are specified only in a manifest (e.g., tests with query parameters) rather than being initiated from a non-manifest file?
> > 
> > I may be misunderstanding the intent of this question - sorry if so. I took it to mean providing an environment where a test developer can write and run a test w/o manifest info. I think that option should be supported. Manifest constructs are often better suited for running lots of tests in an automated fashion and get in the way during individual test development.
> I was referring specifically to James' examples early in the thread, where a manifest would specify things like "test2.html?foo=bar" indicating that you should run test2.html and pass the query string in.

It would be tremendously beneficial to this conversation to discuss this in relation to the real examples we have in the repo. Could someone in the know point to those tests and to the query strings that are supposed to be run with them?
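
In the meantime, to make sure we're all talking about the same construct, here is a purely hypothetical illustration of what such a manifest entry could look like; the field names and the second variant are made up, only "test2.html?foo=bar" comes from James' example:

    # Hypothetical manifest entries: one file, run more than once with
    # different query strings appended.
    manifest_entries = [
        {"path": "test2.html", "query": "?foo=bar"},
        {"path": "test2.html", "query": "?foo=baz"},  # invented second variant
    ]

    for entry in manifest_entries:
        print("would run:", entry["path"] + entry["query"])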
> 
> However, I think you raise a valid question: if we did support manifests, would we require that all tests be in a manifest? Here there's two possible interpretations: you can't run a test at all if it's not in the manifest, or you can run the test interactively, but a test harness/runner might ignore it (and thus we might want a commit check to enforce this). Or, of course, being in the manifest could be strictly optional. 

We want to avoid tests never getting run because they weren't added to the manifest (or they were, but the URL has a typo).
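
For what it's worth, the commit check Dirk mentions doesn't need to be complicated. A minimal sketch, assuming a plain-text manifest with one relative test path (plus optional query string) per line; the format is my assumption, not the real thing:

    import os

    def manifest_mismatches(test_root, manifest_path):
        """Compare test files on disk against what the manifest lists."""
        with open(manifest_path, encoding="utf-8") as f:
            # Drop any query string before comparing against files on disk.
            listed = {line.strip().split("?")[0] for line in f if line.strip()}
        on_disk = set()
        for dirpath, _, filenames in os.walk(test_root):
            for name in filenames:
                if name.endswith(".html"):
                    rel = os.path.relpath(os.path.join(dirpath, name), test_root)
                    on_disk.add(rel.replace(os.sep, "/"))
        missing_from_manifest = sorted(on_disk - listed)   # tests that would never run
        typoed_or_deleted = sorted(listed - on_disk)       # entries pointing at nothing
        return missing_from_manifest, typoed_or_deleted

A pre-commit hook could then refuse the commit whenever either list is non-empty, which covers both failure modes above.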

If we end up wanting to avoid running specific tests, I suggest tying that to the versioning system or the issue tracker (e.g. provide an option to not run files that have an issue opened against them).

--tobie

Received on Thursday, 22 August 2013 22:10:55 UTC