Writing Distractors Is Not Easy
The purpose of a test is to determine a participant’s mastery of the performance objectives and to find out whether a learning experience was successful. Effective distractors (the incorrect answers in selected-response tests) play an important role in getting the measurement data you’re seeking. Well-written distractors provide information that no other aspect of the assessment can. But writing effective distractors is one of the most difficult tasks in learning design, which makes improving this skill all the more important.
For this article, I’ve scanned the academic literature to present some of the most valuable research findings about, yes, distractors. Distractors should be free from flaws to make a test as valid (it measures what you intend to measure) and reliable (it measures consistently) as possible. Even if you’re familiar with the standard guidelines, it’s worth considering the role distractors play in getting the diagnostic information you’re seeking.
Why Distractors Are a Valuable Source of Information
The purpose of a distractor is to see whether the learner can discriminate between correct and incorrect answer choices. This discrimination demonstrates what a participant does and doesn’t know, and it indicates whether someone can perform a task correctly. Thus, distractors can be a valuable source of information. Effective distractors have the potential to tell us whether:
- a test taker has achieved the performance objective
- the individual has misconceptions, uses faulty reasoning or will make errors in the “real world”
- the particular test item needs improvement
- a corresponding portion of a learning experience is ineffective
Developing Distractors
Writing good distractors takes a lot of time and effort. Sometimes it can even be frustrating (right?). Here are a few approaches that could speed up your work.
Partner with content experts. Rather than struggling with unfamiliar content, partner with subject matter experts to write distractors whenever possible. Educate them about evaluation and measurement, and provide them with sound test-writing guidelines.
Consider these questions. To develop effective distractors, researchers suggest considering the types of questions below.
- What do people usually confuse this concept with?
- What mistakes do people make when performing this task?
- What is a common error people make when solving this problem?
- What are common misconceptions in this field?
Try ‘Think Aloud’ techniques. Consider asking an expert, as well as people with different levels of competence, to complete a task while thinking out loud. This is one way to get at an expert’s tacit knowledge, and it can also help you discover the flawed reasoning and errors that novices make. Use those errors as your distractors.
Use similarity to create attractive distractors. Another approach is to start the distractor writing process by identifying the key feature(s) of the correct answer. Then systematically replace one of those key features with another to create the distractor.
This approach seems easier in technical and medical content. For example, if there are three indicators of an illness, change one or two of the symptoms. If someone must know how to solve a mathematical problem, provide a distractor generated by solving with a similar formula.
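To make that concrete with an illustration of my own: suppose an item asks for the area of a circle with a radius of 4 cm. The correct answer comes from the area formula, A = πr², which gives about 50.3 cm². A similar-formula distractor comes from the circumference, C = 2πr, which gives about 25.1 cm² and will attract learners who confuse the two formulas. A random number, by contrast, would reveal nothing about that confusion.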
Distractors Should Be Functional
Academics describe functional distractors as plausible choices that are frequently selected. If the purpose of the test is to provide you with information, then a credible distractor based on common errors and misconceptions will help you understand what an individual or group misunderstands. In contrast, a non-functional distractor is one that is infrequently selected. An implausible distractor provides no value toward understanding what a learner knows and does not know.
Suppose in a graphic design course, the performance objective requires the participant to use the best graphic format for varied situations. Compare the test items below—one with non-functional distractors and one with functional distractors.
Example of Non-functional Distractors
You need a graphic with a transparent background that supports 24-bit color for a slide you are designing. What graphic format will you select?
a. PNG (correct)
b. MP3 (distractor)
c. MP4 (distractor)
The distractors in this test item are non-functional because they only test whether a test taker can discriminate between a graphic file (choice a.), an audio file (choice b.) and a video file (choice c.). They will not provide feedback as to whether a person achieved the performance objective or whether the learning experience was effective. In addition, these incorrect choices will rarely be selected because they are not credible.
Example of Functional Distractors
You need a graphic with a transparent background that supports 24-bit color for a slide you are designing. What graphic format will you select?
a. PNG (correct)
b. JPG (distractor)
c. GIF (distractor)
The distractors in this test item are functional because they test whether a test taker can discriminate between three types of graphic formats. Only one, the PNG format, supports both 24-bit color and transparency. These distractors will provide diagnostic feedback about the type of misconception a learner may have.
Distractors Should Not Trick
There is full agreement that a distractor should not attempt to trick a learner; tricking only yields poor information for the person who wrote the assessment. But if a distractor is based on common errors and faulty reasoning, it will be attractive (a term some researchers use).
Is there a blurry line between tricking a learner and attracting a learner? Perhaps in many cases it is only the intent that differs. If you find this line blurry, ask yourself, “Which distractors will provide the information I’m looking for?”
Three Choices Are Usually Sufficient
In a comprehensive meta-analysis of research on multiple-choice testing, Michael Rodriguez (2005) concluded that three answer choices are optimal, rather than four or five. He found no loss in item quality when the number of answer choices was reduced from five to three. That analysis also found a slight improvement in test score reliability when the number of choices was reduced from four to three. (Listen to my interview with Dr. Rodriguez. One of my favorites!)
Here is his rationale:
- It is difficult to write test items with functional distractors. Adding non-functional distractors weakens the test; removing non-functional distractors can strengthen the effectiveness of a test.
- It is less challenging and takes less time to develop three good answer choices (one correct and two incorrect).
- Having more incorrect choices serves no purpose if test takers do not select them.
- Test takers can move through a test more quickly with three options, allowing for more questions.
- When a non-functional distractor is removed, it may increase the item difficulty.
In a later project, Haladyna and Rodriguez (2013) reviewed an additional decade of research and offered a revised guideline: “Use only options that are plausible and discriminating; three options are usually sufficient.” Put another way, every distractor should be justified. Surprisingly, in my interview with Dr. Rodriguez, he stated that there is no harmful effect from varying the number of answer choices within one test.
Similar Distractors Are the Strongest
Should distractors be similar in structure to the correct answer? Many studies, typically conducted with university students or younger, found that distractors similar to the correct answer increased the difficulty of the test item. It’s logical to assume this holds for any age group. That is, it is more difficult for test takers to discriminate between answer choices that are parallel than between choices that are dissimilar. The researchers reason that answer choices that are dissimilar from the correct answer may be easier to eliminate as implausible, which suggests that “test performance could result from test-wiseness rather than knowledge of the test material” (Ascalon et al., 2007).
Content similarity can be based on many different aspects of a distractor, including:
- phrasing
- content (similar concept or idea)
- structure (length, complexity, formatting and grammar)
- similarity in numeric options (for calculations)
Perform a Distractor Analysis
Researchers recommend conducting a distractor analysis as a way to improve tests. In workplace learning, this may only be necessary for high-stakes tests dealing with critical knowledge and skills, such as safety, medical and security topics. For a basic analysis, examine the percentage of learners who choose each distractor. When you find a low-frequency distractor, consider removing it from the test item or improving it. This type of analysis improves discrimination between answer choices (Gierl, Bulut, Guo, & Zhang, 2017). A simple version of that count is sketched below.
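Here is a minimal sketch of that frequency count in Python. The response data and the 5% cutoff are illustrative assumptions of mine, not a standard from the research; in practice, you would export real responses from your LMS or assessment tool and pick a cutoff that fits your situation.

```python
from collections import Counter

def distractor_analysis(responses, correct, threshold=0.05):
    """Report the share of test takers choosing each option and flag
    distractors selected by fewer than `threshold` (here, 5%) of them."""
    counts = Counter(responses)
    total = len(responses)
    for option, count in sorted(counts.items()):
        share = count / total
        note = ""
        if option != correct and share < threshold:
            note = "  <- low-frequency distractor: improve or remove"
        print(f"Option {option}: {share:.0%}{note}")

# Hypothetical responses to a three-option item (option "a" is correct).
responses = ["a"] * 70 + ["b"] * 25 + ["c"] * 2
distractor_analysis(responses, correct="a")
```

A fuller analysis would also compare how high and low scorers respond to each option, which gets at the discrimination question the Gierl et al. (2017) review covers in depth.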
A distractor analysis won’t work for those low-stakes compliance tests often thrown together at the end of an eLearning course; if a test is very easy, the distractors won’t be selected with much frequency. Another way to use the data from a distractor analysis, in my opinion, is to close the feedback loop: the results can help you understand why learners make errors and how to improve a learning intervention.
Related Articles about Testing
- Writing Multiple Choice Questions for Higher-order Thinking
- 10 Rules for Writing Multiple Choice Questions
- Are Your Online Tests Reliable?
- Are Your Online Tests Valid?
- How to Plan, Write and Design Tests: An interview with a professor of psychometrics.
References:
- Ascalon, M. E., Meyers, L. S., Davis, B. W., & Smits, N. (2007). Distractor similarity and item-stem structure: Effects on item difficulty. Applied Measurement in Education, 20(2), 153–170.
- Gierl, M. J., Bulut, O., Guo, Q., & Zhang, X. (2017). Developing, analyzing, and using distractors for multiple-choice tests in education: A comprehensive review. Review of Educational Research, 87(6), 1082–1116.
- Guttman, L., & Schlesinger, L. M. (1967). Systematic construction of distractors for ability and achievement test items. Educational and Psychological Measurement, 27, 569–580.
- Haladyna, T. M., & Rodriguez, M. C. (2013). Developing and validating test items. New York, NY: Routledge.
- Parkes, J., & Zimmaro, D. (2016). Learning and assessing with multiple-choice questions in college classrooms. New York, NY: Taylor & Francis.
- Rodriguez, M. C. (2005). Three options are optimal for multiple-choice items: A meta-analysis of 80 years of research. Educational Measurement: Issues and Practice, 24(2).