"objective" clarified

Hi everyone

Glad to see such lively discussion around "objective".

As mentioned in a previous post: please don't assume that when the
group on the telecon comes to some position or makes a suggestion,
they are all bonkers.

If it doesn’t make sense to you and you weren't there for the
discussion, please ask questions.     


 
For reference, here is what I posted as the "definition" of objective:

"If 80% or more of people who have knowledge of the relevant tech and
test methods would agree in their judgment " 




A couple of notes to (hopefully) make things clearer:

1)   The 80%-or-better agreement is not a definition of objective but a
test of it.  In my posting I called it a definition -- my mistake.  I
meant it to be how we identify or screen for items that are objective
or not.

2)  If a measure is objective, then people who understand how to use
the measure should come up with the same answer.  If not, it is not
objective.  I don't think we should measure objectivity any other way.

3)  It is important to qualify the people used in this test (or any
test).  If people do not understand the language used to describe the
measure, or have no idea what the Web is, then the answers they give
will be random.  That is not a test of the measure.  Hence the people
we use to test for objectivity should have knowledge of the relevant
tech and test methods.

If the instrument is to be used by ordinary people, then the test of
usefulness must be done with ordinary people -- not experts.  What I
posted did not include the word "experts".  Again, if one has a
question or concern about what the phrase "people who have knowledge of
the relevant tech and test methods" means, then that is what should be
asked.

The answer would have been:
When any testing or experiment is done, the testers must be clearly
identified and not selected arbitrarily or subjectively.  If the tool
is to be used by ordinary people, then the test should not be done with
experts, since that would give you a tool that was useful only to
experts.  This mixes usefulness with objectivity, but it is important;
otherwise you end up with a tool that yields useful and repeatable
results only with experts and not with your intended audience.  So we
would have to be careful to select people who are representative of our
target audiences and then provide them with information (such as our
techniques doc) that gives them enough background to understand the
question.





4)  There is never 100% agreement when measurements are taken.  There
is ALWAYS error.  So we can't require 100% agreement.  When I proposed
this it was originally 90%.  To make it a little easier to get things
included, I reduced it to 80% when I proposed it most recently.  I was
concerned that we not get too tight here, or we may guarantee failure.
If people think it should be higher, that is fine.  This was just the
best we came up with and posted to the list for discussion.


Please note:  The 80% has nothing to do with the number of people who
think a guideline should be in or out of the document, or any other
decision.  It is just the proportion of people who take a test (and who
know the test material) that should come up with the same answer to the
test questions before we consider those questions good test questions.
In this case, the test questions are the success criteria which are to
be used to evaluate conformance with a guideline.

There seemed to be some confusion around this on the list.   
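
To make the screening test concrete, here is a minimal sketch of how
the 80% check could be computed (in Python).  The function name, the
pass/fail answer format, and the example numbers are illustrative
assumptions on my part -- the only figure taken from the discussion
above is the 0.80 threshold.

    from collections import Counter

    AGREEMENT_THRESHOLD = 0.80  # the 80% figure discussed above

    def passes_objectivity_screen(judgments, threshold=AGREEMENT_THRESHOLD):
        """Return True if the most common judgment was given by at
        least `threshold` of the qualified raters.

        `judgments` is a list of answers (e.g. "pass"/"fail") from
        people who know the relevant tech and test methods.
        """
        if not judgments:
            raise ValueError("need at least one judgment")
        # count how many raters gave the single most common answer
        top_count = Counter(judgments).most_common(1)[0][1]
        return top_count / len(judgments) >= threshold

    # 9 of 10 qualified raters agree -> 90% agreement, screen passed
    print(passes_objectivity_screen(["pass"] * 9 + ["fail"]))       # True
    # 7 of 10 agree -> 70% agreement, screen failed
    print(passes_objectivity_screen(["pass"] * 7 + ["fail"] * 3))   # False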

Thanks

Looking forward to your comments -- and to your joining us on Thursday

Gregg

Gregg, writing from Belgium  (for any of you who look at time stamps
and wonder why I'm writing at all times of the night: my computer clock
is all wacky)

-- ------------------------------ 
Gregg C Vanderheiden Ph.D. 
Professor - Human Factors 
Dept of Ind. Engr. - U of Wis. 
Director - Trace R & D Center 
Gv@trace.wisc.edu, http://trace.wisc.edu/ 
FAX 608/262-8848  
For a list of our listserves send "lists" to listproc@trace.wisc.edu
