Marking Strategies in Metacognition-Evaluated Computer-Based Testing
Graduate Institute of Information and Computer Education, National Taiwan Normal University, Taipei, Taiwan // email@example.com
Graduate Institute of Information and Computer Education, National Taiwan Normal University, Taipei, Taiwan // firstname.lastname@example.org
Graduate Institute of Information and Computer Education, National Taiwan Normal University, Taipei, Taiwan // email@example.com
ABSTRACT: This study aimed to explore the effects of marking and metacognition-evaluated feedback (MEF) in computer-based testing (CBT) on student performance and review behavior. Marking is a strategy in which students place a question mark next to a test item to indicate an uncertain answer. MEF provided students with feedback on test results classified into four categories: correct answers with and without marking, and incorrect answers with and without marking. The study analyzed 454 ninth graders randomly assigned to three groups: Gmm (marking + MEF), Gmu (marking only), and Guu (neither). Each group was further divided into three subgroups based on English ability. Results showed that marking improved medium-ability examinees’ test scores, a promising finding because medium-ability students were precisely the target group with the most potential for improvement. Additionally, MEF proved beneficial in that it encouraged students to use marking more frequently and to review the answer explanations (AEs) of test items. Follow-up interviews indicated that providing adaptive and detailed AEs for low-ability students was necessary. The present study reveals the potential of integrating marking and adaptive feedback into the design of learning functions worth implementing in CBT systems.
Keywords: Computer-based testing (CBT), Test-taking behavior, Marking behavior, Metacognition evaluation, Confidence rating technique