[Owasp-cert] Philosophies on Exam Creation

Gary Palmer owasp at getmymail.org
Sun Jul 20 02:19:50 EDT 2008


First, like J Oquendo, I tend towards lengthy posts.  For those who prefer
short notes, sorry, I have a lot to add.  -- Gary
 
I do not have strong feelings that I am aware of, so to help debate the
topic, surface the options, and hopefully make a better choice, let me play
out some alternatives to those James suggests.
 
1- Questions that ask "which of the following do NOT..." are good because
they help identify whether a test taker understands a grouping of items.  For
example, if you had to pass a test to prove to me that you could be an auto
mechanic, a reasonable question might be "which of the following parts are
not found on a commercial car?  1) actuator, 2) fuel injector, 3) throttle,
4) 30mm cannons"  But the last choice, being ridiculous, makes it a simple
solution and a useless question.  If you trade #4 for "sump pump" or "bilge
pump", you might catch a few who do not know the parts.  Then the argument is
"cars are so complicated, how can someone be expected to know all the
parts?", but we are asking them to prove knowledge, not the ability to
Google.  When all the choices sound good, it is a tough test to find the
incorrect one.  That is why many intelligence tests ask you to pick the one
that does not belong with the others.
 
On the other hand, this type of question is harder to make work and easier
to mess up in its creation.  Having a bad question is not helpful either.
This suggests that a "plea" system (like CISSP or SANS use) could help: if
you feel a question is unfair or confusing, you have a venue to argue your
case.
 
2- Fine distinctions are helpful when they are necessary.  If we were testing
for a power generator operator position and the system is designed to work
properly unless the output exceeds 5 gigawatts, then a reasonable test
question would ask "under what output will the system work properly? 1) 1
gigawatt, 2) 3 gigawatts, 3) 5 gigawatts, 4) 7 gigawatts, 5) 75% of the
maximum value displayed on the plate mounted on the control panel."  I agree
that questions that merely require parsing the English language are not
testing topic knowledge and would have no place here.
 
3- Are you saying that all questions, and the answers to each question,
should be randomized?  Or maybe groups of related questions should stay
together?  I like the idea of groupings: all questions about network
intrusion, say, are grouped together, randomly sequenced within the group,
with the answers to each question randomly ordered.  We can even randomize
the order in which the groups appear, but I do think groupings are useful to
help people allocate time for the test (see the sketch after point 4).  That
raises another question: will test takers be allowed to go back to prior
questions?  Will the test be administered on paper or electronically?
 
4- I think you are saying that the questions on the test should be drawn
from a pool three times larger.  I would agree, but are you thinking of
multiple tests or differing versions?  If there are groupings, does the 3X
rule apply to each group?
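To make points 3 and 4 concrete, here is a minimal sketch (in Python) of how
an electronic exam engine could assemble a test: each topic group draws its
questions from a pool roughly three times larger, questions stay grouped by
topic, and both the question order within a group and the answer order within
a question are randomized.  The group names, per-group count, and pool layout
are hypothetical placeholders, not anything agreed on this list.

import random

def assemble_exam(pool_by_group, per_group, seed=None):
    """Draw `per_group` questions from each topic group's pool (kept ~3x
    larger), keep the groups together, randomize question order within each
    group, and randomize the answer order of every question.

    Each pool entry is assumed to be (question_text, [answers]) with the
    correct answer listed first; each returned record notes where the
    correct answer landed after shuffling."""
    rng = random.Random(seed)
    exam = []
    for group, pool in pool_by_group.items():
        drawn = rng.sample(pool, per_group)        # subset of the larger pool
        rng.shuffle(drawn)                         # random question order inside the group
        records = []
        for text, answers in drawn:
            order = list(range(len(answers)))
            rng.shuffle(order)                     # random answer order
            records.append({
                "question": text,
                "answers": [answers[i] for i in order],
                "correct": order.index(0),         # new index of the correct answer
            })
        exam.append((group, records))
    return exam

# Hypothetical usage: 10 questions per group from pools of ~30 each.
# exam = assemble_exam({"network intrusion": ni_pool, "crypto": crypto_pool},
#                      per_group=10)

The groups themselves could also be shuffled before delivery; whether test
takers may revisit earlier groups is a delivery question this sketch does not
answer.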
 
Now with all that said, there are a couple of other things to ferment:
There should be a "normal distribution" between easy and difficult
questions.  This bell curve slides right or left until we get the desired
percentage passing the test.  If everyone passes, the test is too easy and
the credential loses credibility.  If the test is too hard, fewer people
take it and the test becomes elitist and disdained.
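As a rough illustration of sliding that curve, here is a small simulation
using a one-parameter (Rasch-style) response model.  The model, the ability
and difficulty distributions, and the 70% passing score are all assumptions
made for the example, not anything proposed above; the point is only that
shifting question difficulty to the right lowers the expected pass rate.

import math
import random

def p_correct(ability, difficulty):
    """Rasch-style probability that a candidate of a given ability answers
    a question of a given difficulty correctly (assumed model)."""
    return 1.0 / (1.0 + math.exp(difficulty - ability))

def simulated_pass_rate(difficulty_shift, n_candidates=5000, n_questions=100,
                        pass_score=0.70, seed=1):
    """Estimate the pass rate when every question's difficulty is shifted by
    `difficulty_shift` (positive shift = harder exam)."""
    rng = random.Random(seed)
    difficulties = [rng.gauss(0.0, 1.0) + difficulty_shift
                    for _ in range(n_questions)]
    passed = 0
    for _ in range(n_candidates):
        ability = rng.gauss(0.0, 1.0)              # assumed candidate ability spread
        score = sum(rng.random() < p_correct(ability, d) for d in difficulties)
        if score / n_questions >= pass_score:
            passed += 1
    return passed / n_candidates

# Sliding the bell curve: compare pass rates for easier vs. harder exams.
# for shift in (-1.0, 0.0, 1.0):
#     print(shift, simulated_pass_rate(shift))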
 
What would be a good "failure" rate?
What would be a good duration?  (4-6 hours absolute max; people burn out and
get intimidated!)
What is a good number of questions?
What topic areas?
 
Remember that for each test, volumes of test preparation and test-taking
material get published.  That material tends to break topics apart, which is
another argument for topic groupings.  I believe the topic areas should be
defined and agreed upon first.
 
So with all this said, I think my response comes down to one of process.
Before developing a test, I recommend we:
1- define the topic areas to be tested.
2- define test format guidelines (multiple choice with 4 or 5 answers, essay,
matching, true-false, etc.)
3- define the test administration process
4- define the question petition process and format
5- define teams to develop questions and perform secondary review/revision
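As a strawman for steps 1 and 2, the agreed topic areas and format guidelines
could be captured in a small machine-readable blueprint before any questions
are written.  The topic names, weights, and field names below are
hypothetical placeholders, not a proposal for the actual content.

# Hypothetical blueprint covering steps 1 and 2; every value is a placeholder.
EXAM_BLUEPRINT = {
    "topic_areas": [                      # step 1: topics and relative weight
        {"name": "input validation",   "weight": 0.25},
        {"name": "authentication",     "weight": 0.25},
        {"name": "session management", "weight": 0.20},
        {"name": "cryptography",       "weight": 0.15},
        {"name": "secure deployment",  "weight": 0.15},
    ],
    "format": {                           # step 2: format guidelines
        "question_types": ["multiple-choice"],   # or essay, matching, true-false
        "choices_per_question": 4,
        "questions_total": 100,
        "pool_multiplier": 3,             # pool ~3x the questions actually asked
        "duration_hours": 4,
    },
}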
 
I would prefer to see some hands-on, but I despise (yes, that strongly) those
tests that say "select the command" where the only difference is syntax.  To
me, that is the purpose of /?.  But that does make it very hard: if you ask
for a Cisco command, what about all the other firewalls, switches, and
routers?  I would love to see a cert based on experience, but experience is
so broad that we have to quantify what to test.  Unfortunately, that
translates to being able to define what we will test, and that distills into
a book (Hmmm, now I go in circles).  So while testing should get rid of those
who just memorize, we cannot afford to.  I have worked in information
security for over 13 years (add another 13 prior to that working as a system
designer and OS developer) and I have some good skills, but a test has to be
fair unless we say "must have experience", and that becomes subjective.
 
So the operative question is now "how subjective should the test be?"  The
more objective the test, the easier it is to write an "answer" book.  The
more subjective, the harder it is to pass and the more difficult it is to
justify passing someone who has very unusual experience.
 
I guess I am looking at a top-down approach.  Again, I am not trying to
counter what James states, just to offer another perspective and ask for
consideration of several other very relevant topics.  Thank you for your
time.
 
Cheers,
Gary Palmer
 
-- no pithy saying at the moment...

  _____  

From: owasp-cert-bounces at lists.owasp.org
[mailto:owasp-cert-bounces at lists.owasp.org] On Behalf Of
james at architectbook.com
Sent: Saturday, July 19, 2008 7:01 AM
To: Owasp-cert at lists.owasp.org
Subject: [Owasp-cert] Philosophies on Exam Creation


Figured that it may be useful for each of us to share our own philosophies
on exams. I'll start this one off...

1. I really hate questions that are asked in the negative such as: which of
the following are NOT used to secure applications.

2. I get equally annoyed with questions where the answers are too close, as
they tend to imply either indoctrination or are more a test of understanding
the English language than the subject matter.

3. I do believe that the exam engine should randomize not only the order of
questions but also the order of answer choices.

4. I also believe that, as a goal, we should strive to have a pool of
questions three times as large as the number of questions actually asked.
