Buying into the myth that going back to change test answers will lower their scores, many students decide not to revise answers they’re unsure about. In my experience, many people rely on a ‘gut feeling’ when taking tests, and whether that instinct is actually helpful remains up for debate. Common sense suggests it would be beneficial to go back and change an answer when a student is confident a mistake was made. Statistics can begin to show whether any answer-revision strategy is genuinely advantageous.
Success Rate for Changing Answers
McMorris et al. (1987), in a study of master’s-level students in educational and psychological measurement classes, found that the majority of answer changes were from wrong to right. This does not mean that changing an answer will, on its own, likely make the new answer correct. On a questionnaire given at the end of the exams, students cited ‘rethought’ and ‘reread’ as their prevailing reasons for changing answers (McMorris et al., 1987). This supports the notion that, so long as a change is informed by deliberate processes like rethinking or rereading, it may be in the student’s favor to revise. There remains the possibility that a student is simply overthinking a question upon rereading it, which could lead to a wrong answer after the change. The study cannot definitively match students’ reasons for answer-changing with their success rates, and statistics alone cannot account for the many ways students approach test questions. Looking only at the correlation, it remains unclear whether intelligence and creativity are associated with success rates for answer changes; with the ability to think critically and in novel ways, perhaps more intelligent and creative people would be more successful when changing their test answers (Hudson & Whisenhunt, 2019).
Trusting First Instincts and Confidence
Research has identified a ‘first-instinct fallacy,’ in which people place too much trust in their initial answers (Couchman et al., 2015). One reason for this may be belief bias: folk wisdom telling students to ‘trust their gut’ interferes with the possibility that revision is the right choice (Hudson & Whisenhunt, 2019). Other explanations for the first-instinct fallacy involve how students remember their test-taking experiences. Students place more weight on the times an answer was changed from right to wrong, a pattern consistent with the negativity bias, which predisposes people to remember negative experiences better. As a result, the occasions on which a student incorrectly changed an answer are recalled more easily and, per the availability heuristic, perceived as more common (Tversky & Kahneman, 1973). In a study assessing students’ confidence in their answer-changing decisions, general confidence in the correctness of an answer was positively correlated with it actually being correct. Following from this result, it would make sense to revise low-confidence answers while leaving high-confidence answers alone (Couchman et al., 2015).
Many students may already employ this strategy of revising only their low-confidence answers. This is consistent with prospect theory, which describes risk assessment during decision making. When a student is highly confident in an answer, there is a greater perceived loss in changing it, and the risk is averted by keeping the original answer. Likewise, when confidence is low, there seems to be little to lose and more to gain from switching (Hudson & Whisenhunt, 2019). To draw a more complete picture of these decisions, confidence needs to be broken down further. The motivators behind students’ confidence in their answers are relevant to determining whether answer-switching is appropriate: underconfidence and overconfidence could both prevent students from arriving at the correct answer.
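The expected-value logic behind revising only low-confidence answers can be illustrated with a small simulation. This is a minimal sketch under hypothetical assumptions, not a model from any of the cited studies: it assumes a student’s confidence equals the true probability that the first answer is correct, and that an informed revision succeeds at a fixed, assumed rate.

```python
import random

def simulate(n_questions=10_000, threshold=0.5, seed=42):
    """Compare a 'never revise' strategy with 'revise only low-confidence
    answers' under assumed probabilities (illustrative only)."""
    rng = random.Random(seed)
    revised_success = 0.55  # assumed success rate of an informed revision
    keep_score = revise_score = 0
    for _ in range(n_questions):
        confidence = rng.random()             # assumed equal to P(first answer correct)
        first_correct = rng.random() < confidence
        keep_score += first_correct           # strategy 1: always keep the first answer
        if confidence < threshold:            # strategy 2: revise only when unsure
            revise_score += rng.random() < revised_success
        else:
            revise_score += first_correct
    return keep_score / n_questions, revise_score / n_questions

keep, revise = simulate()
```

With these assumptions the selective-revision strategy scores higher on average, but the point is only that the advantage hinges entirely on the assumed revision success rate and on confidence tracking accuracy, not that real exams behave this way.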
Analyzing Confidence During Tests
Shynkaruk and Thompson (2006) found that test-takers based their confidence mainly on their beliefs and prior knowledge. Accordingly, problems about which people had more background knowledge were rated with higher confidence. People were also more confident in their answers when given more time to analyze a problem, and when some participants used that extra time to reexamine problems and switch their responses, there was a small overall increase in accuracy.
Despite people’s increased confidence in areas of greater prior knowledge, Shynkaruk and Thompson (2006) found a poor correlation between confidence and reasoning accuracy. This stands in stark contrast to Couchman et al. (2015), who found a positive correlation between confidence and accuracy. This is not to say that confidence cannot be a valuable tool for deciding whether to change an answer on an exam; rather, these contrasting results raise the question of whether confidence alone should be the primary measure when deciding to revisit a question.
The weak correlation between overall confidence and accuracy may stem from other variables that affect the two differently (Busey et al., 2000). According to context-dependent memory, matching study and testing conditions may improve recall accuracy (Hudson & Whisenhunt, 2019). However, matching contexts may not raise confidence on its own unless the student does so deliberately, and the resulting underconfidence would make it disadvantageous for a student to go back and change a potentially correct answer. The amount of time spent studying before a test can work in the opposite direction, boosting confidence without improving accuracy, depending on the study method: a student may feel more confident after hours of rote memorization, yet the actual memory improvement can be insignificant (Hudson & Whisenhunt, 2019). At that point, overconfidence can prevent a student from making the right choice.
One problem with criticizing confidence measures is that there are few other metrics a student can use during a test. In my experience, confidence is the only gauge of performance really available in the middle of an exam. What’s required is metacognition that goes deeper than simply asking, ‘Am I confident in my answer?’
Students can be more specific and honest about where their confidence, or lack thereof, comes from. By being genuinely aware of their understanding of a subject or a specific question, they can better judge whether an answer is worth changing. Statistics alone seem to advocate a ‘revise more often’ mentality, but that draws an incomplete picture of what motivates revision in the first place. Further research can illuminate the mechanisms underlying students’ choices to change answers during tests. In the meantime, on a student-to-student basis, it would be wise to study effectively and efficiently so that such a difficult choice comes up as rarely as possible.
- Busey, T. A., Tunnicliff, J., Loftus, G. R., & Loftus, E. F. (2000). Accounts of the confidence-accuracy relation in recognition memory. Psychonomic Bulletin & Review, 7(1), 26–48. doi: 10.3758/bf03210724
- Couchman, J. J., Miller, N. E., Zmuda, S. J., Feather, K., & Schwartzmeyer, T. (2015). The instinct fallacy: the metacognition of answering and revising during college exams. Metacognition and Learning, 11(2), 171–185. doi: 10.1007/s11409-015-9140-8
- Hudson, D. L., & Whisenhunt, B. L. (2019). Psychology.
- McMorris, R. F., Demers, L. P., & Schwarz, S. P. (1987). Attitudes, behaviors, and reasons for changing responses following answer-changing instruction. Journal of Educational Measurement, 24(2), 131–143. doi: 10.1111/j.1745-3984.1987.tb00269.x
- Shynkaruk, J. M., & Thompson, V. A. (2006). Confidence and accuracy in deductive reasoning. Memory & Cognition, 34(3), 619–632. doi: 10.3758/bf03193584
- Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5(2), 207–232. doi: 10.1016/0010-0285(73)90033-9