Test-Taking Tips

March 8th, 2010

Back online after redoing both my websites. Check out The Examiner and Examiner Express. But I digress…

You would think that as an assessment tool developer I wouldn’t be the kind of guy to help someone “beat the system”.  You know… those little tricks that let you (or your students) “game” the system.  I prefer to think of it as payback to those developers who don’t take the time to do it right.  (How to Do It Right:  How to Write a Test Item)

Last week I was visiting Underwrite Labs down in Chicagoland.  While I was training them on the latest release of The Examiner, we fell to talking about assessment development.  One of the folks there had this great link:

Tips For Taking A Test

(That’s a link…click on it!)

Check out all the different suggestions under the bullets at the top of the page.  I’m going to be printing this out for my beloved, whose fourth-grade class is going to be taking some of those miserable, interminable, why-are-they-doing-this-to-us tests next week.

What you should be doing after you develop your assessment is to look at it in light of these tips.  If using them gives away the answers, it’s back to the drawing board for you!  Remember that a great check is to give the assessment to someone who doesn’t know the subject and see how they do.  If they do better than random chance (say, 25% for a 4-alternative multiple-choice assessment), you had best fix things up.
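If you want to put a number on “better than random chance,” here’s a minimal sketch of that check in plain Python (not part of The Examiner; the 20-item, 14-correct numbers are made up for illustration):

```
# Probability that pure guessing would score at least this well on a
# multiple-choice assessment. If a naive examinee beats chance by a wide
# margin, the items are probably leaking answers.
from math import comb

def chance_of_scoring_at_least(correct, n_items, n_alternatives=4):
    """Binomial tail: P(guessing gets `correct` or more items right)."""
    p = 1.0 / n_alternatives
    return sum(comb(n_items, k) * p**k * (1 - p)**(n_items - k)
               for k in range(correct, n_items + 1))

# Hypothetical: a naive examinee got 14 of 20 four-alternative items right.
p_value = chance_of_scoring_at_least(14, 20)
print(f"Guessing alone would score this high only {p_value:.4%} of the time")
if p_value < 0.05:
    print("Well above chance -- back to the drawing board.")
```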

Trust, But Verify

January 20th, 2010

I may have mentioned it before, but a modification of an old adage applies quite well to developing assessments:

No assessment ever survived first contact with an examinee.

You may think you’ve tested things out, but until you complete two very important steps you’re leaving yourself open to all sorts of problems.  Here’s the first thing you need to do:

Verify Your Key

Take your answer key and fill out the test three times, with 100%, 50%, and 25% of the answers marked in correctly (no, I didn’t say “Incorrectly”).  Score each one.  Look at your test results and make sure that the records system you’re using shows 100%, 50%, and 25%.  If it doesn’t, you need to check your key because something is obviously amiss.
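If your scoring is scripted, this check is only a few lines.  Here’s a minimal sketch assuming a simple list-of-letters key (which is not The Examiner’s actual format):

```
# Build answer sheets with 100%, 50%, and 25% of answers marked in correctly,
# then confirm the scorer reports exactly those percentages.
def score(responses, key):
    """Percent of responses that match the key."""
    right = sum(r == k for r, k in zip(responses, key))
    return 100.0 * right / len(key)

key = ["B", "D", "A", "A", "C", "B", "D", "C"]   # hypothetical 8-item key

def sheet_with(fraction_correct):
    """An answer sheet with the given fraction of answers marked in correctly."""
    n_right = round(fraction_correct * len(key))
    wrong = {"A": "B", "B": "C", "C": "D", "D": "A"}   # any non-key choice
    return key[:n_right] + [wrong[k] for k in key[n_right:]]

for target in (1.00, 0.50, 0.25):
    result = score(sheet_with(target), key)
    assert abs(result - 100 * target) < 1e-9, f"Expected {100*target}%, got {result}%"
    print(f"{100*target:.0f}% sheet scored {result:.0f}% -- key checks out")
```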

Most folks do this, fix up the problems they’ve found, and call it a day.  That’s not enough.  Skipping the next step can even cost you big bucks.

Verify Your Assessment

This is different from verifying your key.  All that does is take what you think is the correct answer and make sure the assessment is scored to match it.  That ignores one very important issue: what if what you have marked as the correct answer isn’t actually the correct answer?

Here’s what to do: give a copy of the assessment to someone who should know all the answers.  Give a 4th grade arithmetic standards assessment to a 7th grade teacher.  Score the assessment and look at the results.  Unless the 7th grade teacher is a complete dolt, that assessment had better come back with a score of 100%.  If it doesn’t, you had best go back and figure out what went wrong.  Either the teacher just made a mistake and marked something wrong or, more critically, you don’t have the correct answer marked.
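In code terms, the expert pass boils down to a diff between the expert’s answers and your key.  A minimal sketch, with hypothetical item IDs and answers:

```
# Every item the expert "misses" is either their slip or your bad key.
key            = {"item_1": "B", "item_2": "D", "item_3": "A", "item_4": "C"}
expert_answers = {"item_1": "B", "item_2": "C", "item_3": "A", "item_4": "C"}

mismatches = [item for item, right in key.items()
              if expert_answers.get(item) != right]

if not mismatches:
    print("Expert scored 100% -- the key and the assessment agree.")
else:
    for item in mismatches:
        print(f"{item}: key says {key[item]!r}, expert answered "
              f"{expert_answers[item]!r} -- check whether the key or the item is wrong.")
```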

OK, I hear you say…  I know I made a decent assessment…why do I have to go through that step?  Let me tell you a little story about a company that paid Big Time for not doing that final, simple, step.

First off, check out this article in the NY Times:   http://www.nytimes.com/2006/03/10/education/10sat.html

The company doing the assessment didn’t do that last step and it cost them a lot.  They could have spent two or three hours running a knowledgeable person through the assessment to make sure the answers were OK, but they didn’t.  And, as you can see in the article, they paid the price.

So, bottom line: make sure your answer key is OK, but also make sure that the assessment that cranked out that answer key is OK too.  Failing to do that one simple step can come back to bite you!

Look! It’s an Iceberg!

January 16th, 2010

A break from assessment discussions over to programming.  I’m in the middle of a classic “iceberg” programming task.  Ever heard of that?  It’s where something that looks pretty simple on the surface has a ton of obnoxious code underneath it.

Case in point:  I’m adding a feature to the main Examiner system where you can create an assessment and only score the top “N” items.  You’ve all seen those:  “There are 20 items in this test.  You only need to answer 15.  You can answer more, but you’ll only be scored on the highest-scoring 15.”  Sounds simple, right?

First off… I’ve got to fold this into my existing scoring system without breaking anything.  That’s not too hard, as all this code is pretty much isolated from my regular scoring.  Now it gets messy.  Do I figure mastery based on percentages?  Do I count ALL the items in the percentage, or just the ones that were answered?  Do I simply set an absolute mastery point?

Next… how to report things.  I want to show all the items, so how do I differentiate the ones that counted toward the score from the ones that didn’t?

Case studies (scenario questions): do I treat each item inside one as its own item, or treat the whole case study as a single entity for scoring purposes?

Finally… what about multi-part tests?  What if some of the sub-tests use the top-N method and others don’t?

*woof*  Each of these decisions leads to piles of code.   AND piles of QA testing.
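To be fair, the visible tip of this particular iceberg is tiny.  Here’s a minimal sketch (hypothetical scores, not Examiner code) of the top-N scoring itself, along with the mastery-denominator question that makes it messy:

```
# Score only the N highest-scoring items, then show how the "percent mastery"
# changes depending on which denominator you pick.
def top_n_score(item_scores, n):
    """Sum of the n highest-scoring items."""
    return sum(sorted(item_scores, reverse=True)[:n])

# Hypothetical: 20 one-point items, examinee attempted 17, scored on the best 15.
item_scores = [1] * 12 + [0] * 5 + [None] * 3     # None = not answered
attempted = [s for s in item_scores if s is not None]
earned = top_n_score(attempted, n=15)

print(f"Out of the top 15:        {100 * earned / 15:.0f}%")
print(f"Out of all 20 items:      {100 * earned / 20:.0f}%")
print(f"Out of the 17 attempted:  {100 * earned / len(attempted):.0f}%")
```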

So, when you see some Neat Little Feature in a program, any program, you’re often seeing the tip of a giant iceberg that’s below the surface where all the magic really happens.

OK.  Back to programming with a break to see Avatar this afternoon!

Creating Multiple-Choice Items

January 11th, 2010

Multiple-choice test items are without a doubt the most popular type of test item.  While it is very easy to write a bad item, it’s a lot more difficult to write a good one.  Coming up with the correct alternative is usually pretty easy.  Coming up with decent distractors is a lot more difficult.  First off, let’s look at what a multiple-choice item is supposed to do.

“It’s supposed to show me what the examinee knows, right?” I hear you say.  True enough.  But if that’s all your item is doing you are losing out on the power of all your distractors.

“I just make up something confusing, add ‘None of the above’, and call it done”.  As my step-son says on occasion “I’m going to have to hurt you now…”

Never, ever, ever use “None of the above” or “All of the above”.  They tell you zip about what your examinee knows.  All they do is give you a lazy way out of having to think about the item.

An item should not only show you what your examinee knows, it should show you what they don’t know.  I’d actually argue that knowing that is in many ways more important.

So, how do you make a good distractor?  There are a few good rules that I learned from my old buddy Stan Trollip (he’s a mystery writer now):

  • all your alternatives should be about the same length
  • make sure your alternatives grammatically match your stem
  • never use “give-away” alternatives just to fill out the item
  • make your alternatives logically possible

What about that last one?  “Logically possible”?  The way I read it is that the alternative should look plausible to a person who doesn’t know the correct answer, but obviously wrong to one who does.  It’s probably the most important guideline, and one of the hardest to meet.  There is, however, an easy way to create those “logical” distractors: have your examinees write them for you!

Here’s what you do:

  1. Create an open-ended question that is pretty much identical to the stem of the item you want to use.
  2. Put it in your assessment.
  3. Gather and rank by frequency all the responses your examinees wrote in.
  4. Take the top 4 (for a 5-alternative multiple-choice item) wrong responses and make them your distractors.

You can, of course, create an alternative by synthesizing something from a couple of examinee responses.  In any case, you will end up with something that represents common misconceptions about whatever it is you are asking about.
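If you have the write-in responses in electronic form, steps 3 and 4 come down to a frequency count.  A minimal sketch with made-up responses (not Examiner data):

```
# Rank the open-ended responses by frequency and keep the most common
# wrong answers as distractors for a 5-alternative item.
from collections import Counter

correct = "9.8 m/s^2"
responses = ["9.8 m/s^2", "9.8 m/s", "32 m/s^2", "9.8 m/s^2", "9.8 m/s",
             "6.7e-11", "9.8 m/s^2", "32 m/s^2", "9.8 m/s", "9.81"]

counts = Counter(r for r in responses if r != correct)
distractors = [answer for answer, _ in counts.most_common(4)]

print("Correct alternative:", correct)
print("Distractors (common misconceptions):", distractors)
```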

We came up with this idea back in the late ’70s on the old PLATO system (see: PLATO on Wikipedia and Cyber1).  It worked like a champ back then, and it’ll work now for you.

Process vs. Product

January 7th, 2010

I’ve given a lot of training sessions in my 20+ years of supporting The Examiner.  Some folks are right on top of things, others are trainable, others just seem to spin their wheels and get nowhere fast.  The wheel-spinners all seem to have one main thing in common:  they get so involved in the process of assessment that they forget the product of the assessment is what they are really interested in.

What do I mean by “product of an assessment”?  In my book that’s a couple of things:

  • Finding out what an examinee knows, and what they don’t know.
  • Seeing if the items in the assessment are valid.
  • Seeing if the assessment itself is valid.

I’ll often tell folks that it doesn’t matter how they get to the results.  Heck, they could give tests on a Ouija board if they get valid results.  But it is the results that count.  The problem is that people get so wrapped up in the process of getting to the results that they spend all their time worrying about that and not about what they really need to concern themselves with.

What are some symptoms of “process over product”?

  • Someone is always saying “that’s not the way we’ve done it before”.
  • People are designing test items and test designs before asking themselves what they want to find out.
  • Committees spend all their time selecting a software product and leave little time for assessment development.

Schools (recipients of No Child Left Behind and state mandates) and businesses (“we need to certify our employees!”) both get hung up on the administrative side of assessments when they should be paying attention to the educational and training side.  Yes, you need to worry about the assessment platform.  But you shouldn’t even think about that until you have it absolutely clear in your mind what you want to get out of the system.

With that in mind, a few tips:

  • Before you do anything else, write down the goals.  These shouldn’t be “I want the assessment to…” kinds of statements.  They should be “I want the examinee to know…” or “The student should be able to…” objective statements.
  • Figure out what things go into meeting these goals (yes… do a task analysis!).
  • Determine the best method (not product!) for achieving this.  Is it an assessment?  An on-the-job evaluation?  An interview?
  • Then, and only then, figure out a “thing” for meeting the need.

I’d be remiss if I didn’t suggest that the Examiner Express (or our full Examiner System) would meet these needs.  However, we aren’t the be-all and end-all of the assessment process.  Go through the steps and take a good look at your process.  Only then should you pick the methodology that meets your needs.

Welcome to Examiner Express!

January 4th, 2010

I’m going to be using this blog to discuss:

  • Thoughts on the assessment process.
  • How to use Examiner Express.
  • Tips and tricks.
  • Random ramblings about the universe.

So…  where to start?

Check out the first “real” blog entry: “Process vs. Product”.