Monday, February 15, 2010

Writing Questions, or Stop me before I query again--Please!

I got swamped by work and haven't posted for two weeks (or read much, either). In between work, Jan and I also moved my youngest son home from college and celebrated Valentine's Day (I know: whine, whine, whine). Over those two weeks, I did do some software development, but most of my time was taken up with some instructional design.

Actually, with a small part of instructional design: Writing multiple choice and short answer questions.

AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAARGGGGGGGGGGGGH!!!

Don't get me wrong: it's not that I don't enjoy crafting multiple choice questions. But after generating three or four hundred questions, my brain starts to go numb. Fortunately, my client got the original deadline moved and I got a break over the weekend (hence, this post). But there are still about 100 questions left to generate and I'll be back at it tomorrow. You may be able to hear the screams.

But all of this work, of course, got me to thinking about effective testing. The first question to ask of any test is whether its questions address the objectives of the training. It turns out that (at least for me) this is a considerably harder goal to achieve when generating questions for content that someone else has already created.

In the ideal instructional design process, you create your objectives for the training, create the tests that will prove that the participant has achieved those objectives, and (finally) write the content/create the experience that will allow the participant to pass the test.

In real life, I don't do it that way. When developing training material, I get smarter about what I'm writing as I generate the content. As a result, I often go back and modify, extend, or even drop some objectives. So rather than writing the questions/tests up front and re-doing them after the content is generated, I do most of the work on the questions/tests at the end of the process, based on the final set of objectives. I still do a pretty good job of generating tests that way.

However, I don't do NEARLY as well when I come in at the end of the process after someone else has generated the content. Part of the problem is that, in generating questions for someone else's content, I feel obligated to provide questions for all of the content, even if some of it isn't tied to one of the objectives. My assumption is that students will feel obliged to study every page in the textbook, so it's my obligation to provide a question to justify the student's effort. This is wrong-headed, of course: just because it's on the page doesn't mean that I have to test for it unless it addresses the objectives of the course. I think I identify too closely with the student.

The nice thing about working for other people is that I get feedback. My client's ultimate customer (the person my client is producing the instructional material for), it turns out, has a style guide. Some of the restrictions that the customer had in their style guide were:
  • No negative questions ("which of the following is not")
  • No "None of the above/All of the above"
  • No questions where the answer finishes the question by providing the end of the sentence
The only one of these dictums that I had a problem adjusting to was the ban on "None/All of the above." My concern with "All/None of the above" is with those tests where the only time this answer appears is when it's the right answer (i.e. where "All of the above" is always the right answer). Generally speaking, if I use "All of the above" in a test, I ensure that the answer appears two or three times more often as a wrong answer than as the right answer. Other than that concern, a blanket objection to "None/All of the above" seemed odd to me. But, hey, it's a style guide issue: I doubt that the test takers care one way or another.
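Out of curiosity, I roughed out what that "two or three times more often" check might look like in code. This is just a minimal sketch: the question-bank structure (a list of dicts with "choices" and "answer" keys) and the function name are things I made up for the example.

    SPECIAL = {"All of the above", "None of the above"}

    # Count how often a special answer appears as the right answer
    # versus as a distractor, and flag banks where the ratio of
    # wrong-to-right appearances falls below the target.
    def audit_special_answers(questions, min_wrong_to_right=2):
        right = wrong = 0
        for q in questions:
            for choice in q["choices"]:
                if choice in SPECIAL:
                    if choice == q["answer"]:
                        right += 1
                    else:
                        wrong += 1
        ok = right == 0 or wrong >= min_wrong_to_right * right
        return ok, right, wrong

Run against a real question bank, a check like this would catch the pattern I worry about: "All of the above" showing up only when it's the correct answer.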

I noticed that the customer's review of my questions sometimes fell into what I call the "knowledge fallacy." Often, when we review multiple choice questions whose answers we already know, we critique them by simultaneously assuming that the student both knows and doesn't know the answer--that the student knows as much as we do while still being a student.

For instance, on one question with four potential answers, I had two of the answers in one format (e.g. "verb-noun") and two in a second format (just a noun). The correct answer was in the first group (answer "b"). The reviewer commented that, because the second set of answers ("c" and "d") were in a format different from the correct answer, the student could ignore the second set of answers.

This is true: but only if you already know that answer b is correct. If you don't know which answer is the correct one, the realization that the answers come in two different formats does you no good; if you do know the right answer...well, then, it doesn't matter if the answers fall into two groups.


Last note: In the same way that I think the most important person on a writing team is a representative of the audience, I often think that the real test of a multiple choice test would be to have someone completely ignorant of the subject take it. If that person does better than just guessing (e.g. if, on a multiple choice test with 4 answers for every question, the test-taker gets better than 25%), then you have a problem. But that leads to another question: if, under those circumstances, the test-taker does worse than 25%, does that mean you have a really well-designed test or a really unfair one?
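For what it's worth, that 25% baseline is noisier than it looks. Here's a minimal sketch that simulates pure guessers on a four-choice test; the 100-question length and the trial count are my own assumptions:

    import random

    # A test-taker who guesses at random: each answer is right
    # with probability 1/num_choices.
    def guess_score(num_questions=100, num_choices=4):
        return sum(1 for _ in range(num_questions)
                   if random.randrange(num_choices) == 0)

    trials = 10_000
    above = sum(1 for _ in range(trials) if guess_score() > 25)
    print(f"guessers scoring above 25/100: {above / trials:.1%}")

On a 100-question test, something like 45% of pure guessers land above 25 just by luck, so one ignorant test-taker scoring 28% proves little. It's a score well above chance, or a pattern across several naive takers, that signals the distractors are giving the game away.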

