Re: OpsGL QA-commitment-group

At 09:42 AM 5/9/2003 -0600, Alex Rousskov wrote:
>On Fri, 9 May 2003, Mark Skall wrote:
>
> > I object.  The reason is that I don't accept Alex's premise.  Every
> > requirement should (MUST) be testable. (In fact, I thought this
> > statement was included somewhere in our guidelines) If a requirement
> > is not testable, it should be reworded to be testable or be
> > eliminated from the specification.  If it can't be tested, it can't
> > be verified that it was done correctly and is, thus, of no use.
> > Adding the suggested qualifier would sanction having non-testable
> > requirements.
>
>IMO, you are putting the cart before the horse. The ultimate
>goal of every specification is to facilitate compliant
>implementations, NOT testable implementations.


And how would we know they're compliant if we can't test to determine
this?  If we're back to relying on vendors' claims, we've come full circle.

>  Of course, we do want
>implementations to be testable because it helps us to ensure
>compliance, but it is often the case that an implementation is
>compliant but not fully testable. In other words, compliance is the
>goal, and testability is just one possible way of getting there.
>
>Requirements SHOULD be testable (not "MUST be testable"). Real
>protocol specs are full of perfectly valid requirements that can be
>implemented correctly, but cannot be tested. Here are two specific
>(but very different) examples:
>
>"Every requirement SHOULD be testable"
>         This requirement is not testable because there is no
>         pragmatic way to test for testability (in general).
>         Yet, it is a perfectly valid checkpoint for SpecGL.


Actually, I don't think so.  I was just thinking about putting this
requirement into a checkpoint and came to the same conclusion as you: it
can't be tested for.  Thus, I believe, it should be stated in the verbiage
as a goal to be reached.  Of course, the requirement that all requirements
MUST include a test case description (or a test assertion) CAN be
tested.  That would accomplish the same purpose; however, it's a different
requirement.
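The distinction above can be made concrete: "every requirement MUST be testable" cannot itself be checked mechanically, but "every requirement MUST include a test assertion" can. Here is a minimal sketch in Python; the `Checkpoint` record and its field names are hypothetical illustrations, not anything defined by SpecGL.

```python
# Illustrative sketch: the meta-requirement "every requirement MUST
# include a test assertion" is mechanically checkable, while "every
# requirement MUST be testable" is not. The Checkpoint structure and
# field names below are invented for this example.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Checkpoint:
    req_id: str
    text: str
    test_assertion: Optional[str] = None  # None means no assertion supplied


def missing_assertions(checkpoints: List[Checkpoint]) -> List[str]:
    """Return ids of checkpoints that violate the meta-requirement."""
    return [c.req_id for c in checkpoints if not c.test_assertion]
```

A spec editor could run such a check over a draft and reject it until the returned list is empty, which is exactly the sense in which the reworded requirement "accomplishes the same purpose" while remaining testable.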


>"Implementations MUST ignore extension headers they do not understand"
>         This requirement is not testable using black-box
>         techniques because it is impossible to know that
>         something was ignored by a black box (in general).


I disagree.  Remember, we're talking about "testable", not about proving
that something has been implemented correctly.  Even the best tests only
improve our confidence that the implementation is correct.  We certainly
can't prove that it's correct, but, it seems to me, we can probe to find
out whether certain states have changed.  If we can't determine state
changes through standard functions (I presume that's going to be your
counter-argument), we can determine how those state changes have impacted
other functions.  This would obviously be less efficient and may only
result in "success" (determining non-conformance) in a small percentage of
tries - but that's what falsification testing is all about.  In any case,
if your example can't be tested, it should not be a requirement.
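The probing strategy described here - compare observable behavior with and without the unknown extension header, and treat any difference as a demonstration of non-conformance - can be sketched as a falsification test. This is a toy illustration with stand-in servers; the function names and header are invented for the example and are not from any real test suite.

```python
# Sketch of black-box falsification testing for the requirement
# "implementations MUST ignore extension headers they do not understand".
# The black box is modeled as a callable from a header dict to a
# response. All names here are illustrative, not from a real suite.

def probe_ignores_unknown_header(send, base_headers, unknown_header):
    """Send the same request with and without an unknown extension
    header. Equal responses do NOT prove conformance (the header may
    have changed internal state we cannot observe); a differing
    response, however, falsifies the claim of conformance."""
    baseline = send(dict(base_headers))
    with_ext = send({**base_headers, **unknown_header})
    return baseline == with_ext


# Two toy "implementations" standing in for black boxes under test:

def compliant_server(headers):
    # Ignores extension headers (modeled as the X- prefix here).
    known = {k: v for k, v in headers.items() if not k.startswith("X-")}
    return ("200 OK", sorted(known))


def noncompliant_server(headers):
    # Rejects requests carrying the unknown header instead of ignoring it.
    if "X-Unknown-Ext" in headers:
        return ("400 Bad Request", [])
    return ("200 OK", sorted(headers))
```

A probe like this may catch only some violations (a server could ignore the header in its response yet act on it internally), which is the point of the argument: low-yield tests still give the falsification machinery something to work with.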

>         Yet, it is a useful requirement that is possible
>         (and often easy) to implement correctly.


So vendors will say they conform but we can't determine if they're telling 
the truth?  What good is that?


>Here is a list of currently untestable proxy-related MUSTs in HTTP/1.1
>(RFC 2616). You will see that many (not all!) of the listed
>requirements are not testable at all, cannot be rephrased to become
>testable, but it is still possible to implement them correctly.


Again, it may be possible to implement it correctly, but it is still
impossible to know whether it has been.

>http://coad.measurement-factory.com/cgi-bin/coad/GraseInfoCgi?info_id=test_group/any/requirement-testable-not
>
>Until all specs and software are written using formal and verifiable
>languages, we have to do what the real world does: presume innocent
>until proven guilty. If you cannot prove an implementation guilty, you
>cannot claim it is not compliant just because some requirements are
>not testable.

Again, we can usually come up with some test, no matter how 
inefficient.  If we can't come up with any test, the requirement is a bad 
requirement.  It's not a question of innocent or guilty.  If the grand jury 
doesn't indict, innocence or guilt is moot.  We should never have to guess 
at innocence or guilt.  However, if you are charged, there will be tests 
(questions, cross-examination, re-direct) to determine the outcome.


>You have to assume it is compliant (for all practical
>purposes anyway). Since compliance is the ultimate goal, testability
>becomes a SHOULD not a MUST.

Since compliance is the ultimate goal, every requirement needs to be 
testable to ensure compliance.

>(Of course, as recent real world events
>demonstrate, it is often convenient to presume guilt in order to
>conduct the "tests" on weak subjects, but that loophole does not help
>in the software world, because if something is not testable, our
>presumptions will not change that status).
>
>Alex.
>
>--
>                             | HTTP performance - Web Polygraph benchmark
>www.measurement-factory.com | HTTP compliance+ - Co-Advisor test suite
>                             | all of the above - PolyBox appliance

****************************************************************
Mark Skall
Chief, Software Diagnostics and Conformance Testing Division
Information Technology Laboratory
National Institute of Standards and Technology (NIST)
100 Bureau Drive, Stop 8970
Gaithersburg, MD 20899-8970

Voice: 301-975-3262
Fax:   301-590-9174
Email: skall@nist.gov
****************************************************************

Received on Friday, 9 May 2003 14:29:19 UTC