After going through all the effort of developing an online test, you want it to be an accurate measure. That’s why it’s so important to plan for online test reliability.
In Are Your Online Tests Valid?, we examined test validity, or how you can be sure a test measures what it claims to measure. Reliability and validity work hand in hand: a test must be reliable before it can be valid in any meaningful way. You may want to read the previous article first.
In this article, we’ll look at test reliability. A test with a high degree of reliability will be a more accurate measure of the learner’s knowledge and skills than one with low reliability. If you have trouble keeping all of these terms straight, think of it this way: reliability = consistency.
Test Reliability Is Consistency
Test reliability is about minimizing the random error that occurs in every test. The way to reduce random error is to make a test consistent. A reliable, or consistent, test has few variations within itself and produces similar results over time. This is often compared to a scale. If you weigh yourself every day and your weight is reasonably consistent, you consider the scale reliable. If the scale displays wildly different weights from day to day (even during the holidays), you would not consider it a reliable measure.
Test reliability answers the question:
TO WHAT DEGREE IS A TEST CONSISTENT IN WHAT IT MEASURES?
What Makes A Test Consistent?
A test that is reliable will have a degree of consistency evidenced by these characteristics:
- The test items seem similar or highly related. The test comes together as one whole.
- There are no great leaps in difficulty, wording, or tone. It might seem like one person wrote the entire test.
- If the test were administered to similar groups, you would see similarities in the scores across the groups.
- The test is long enough to assess the learner’s knowledge. Very short tests are more affected by the “luck factor.”
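The article describes internal consistency qualitatively and doesn't prescribe a formula, but in practice it is usually quantified with Cronbach's alpha, a standard statistic for exactly the "items hang together as one whole" idea above. Here is a minimal sketch in plain Python; the learner scores are made up for illustration:

```python
def variance(values):
    """Sample variance."""
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / (len(values) - 1)

def cronbach_alpha(scores):
    """Cronbach's alpha; scores has one row per learner, one column per item."""
    k = len(scores[0])  # number of test items
    item_vars = [variance([row[i] for row in scores]) for i in range(k)]
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical data: four learners, three items scored 0 (wrong) or 1 (right)
scores = [
    [1, 1, 1],
    [1, 1, 0],
    [0, 1, 0],
    [0, 0, 0],
]
print(f"alpha = {cronbach_alpha(scores):.2f}")  # -> alpha = 0.75
```

Alpha ranges up to 1.0; higher values mean the items vary together, which is one sign of a consistent test. (Rules of thumb vary, but values around 0.7 or above are commonly treated as acceptable.)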
How To Improve Online Test Reliability
- Ensure that the test measures related content. Avoid creating one test for several different courses.
- Ensure that testing conditions are similar for each learner. For example, if your testing software displays well in a particular browser, then make using the best browser a requirement.
- Add more questions to the test. All else being equal, a longer test is more reliable.
- Word test questions clearly enough that only one interpretation is possible.
- Write test instructions so that they are easily understood.
- Make sure the answer choices are clearly different from each other and that distractors (wrong answers) are 100% wrong.
- Create test items of similar difficulty, when possible.
- Test members of the same audience group twice, ideally a month apart. If the distribution of scores is similar, the test is likely to be reliable. If the scores are very different, improve the questions that had a discrepancy. Take into account that scores on the second test may be a bit higher. (Because of deadlines and budgets, administering two tests is probably unrealistic. Still, we can dream, can’t we?)
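The two-administration check in the last tip above can be quantified with a test-retest correlation: the Pearson correlation between each learner's two scores. This is a hedged sketch with hypothetical scores, not part of the original article:

```python
def pearson(xs, ys):
    """Pearson correlation between two paired lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical: the same five learners tested a month apart
first  = [72, 85, 90, 64, 78]
second = [75, 88, 91, 66, 80]

r = pearson(first, second)
print(f"test-retest correlation: {r:.2f}")
```

A correlation close to 1.0 means learners kept roughly the same rank order across sittings, which supports reliability even if every score drifted up a little on the retake.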
Relationship Of Reliability To Validity
A reliable test is not necessarily a valid test. A test can be internally consistent (reliable) yet still fail to accurately measure what it claims to measure (valid).