Can a Composite Metacognitive Judgment Accuracy Score Successfully Capture Performance Variance during Multimedia Learning?
- Megan Wiedbusch, Learning Sciences and Educational Research, University of Central Florida, Orlando, Florida, United States
- Roger Azevedo, Learning Sciences and Educational Research, University of Central Florida, Orlando, Florida, United States
- Michael Brown, Department of Computer Science, University of Central Florida, Orlando, Florida, United States
Abstract

Theoretical models of self-regulated learning highlight the importance and dynamic nature of metacognitive monitoring and regulation. However, traditional research has typically not examined how different judgments, or the relative timing of those judgments, influence each other, especially in complex learning environments. We compared six statistical models predicting the performance of undergraduates (n = 55) learning with MetaTutor-IVH, a multimedia learning environment. Three types of prompted metacognitive judgments (ease-of-learning judgments [EOLs], content evaluations [CEs], and retrospective confidence judgments [RCJs]) were used as individual predictors and were combined into a uniformly weighted composite score and an empirically weighted composite score across the learning session. The uniformly weighted composite score captured performance better than the models using only EOL judgments or RCJs. However, the empirically weighted composite model outperformed all other models. Our results suggest that metacognitive judgments should not be treated as independent phenomena but as an intricate, interconnected process.
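To illustrate the distinction the abstract draws between the two composites, the sketch below contrasts a uniformly weighted composite (all judgment types contribute equally) with an empirically weighted composite (weights estimated from the data). This is a hypothetical minimal sketch: the judgment and performance values are random placeholders, and the ordinary least squares weighting shown here is only one plausible estimation choice, not the paper's reported procedure.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical data: one row per learner, columns = EOL, CE, RCJ scores.
rng = np.random.default_rng(0)
judgments = rng.uniform(0.0, 1.0, size=(55, 3))
performance = rng.uniform(0.0, 1.0, size=55)  # placeholder outcome scores

# Uniformly weighted composite: each judgment type contributes equally.
uniform_composite = judgments.mean(axis=1)

# Empirically weighted composite: let the data choose the weights, here
# via an ordinary least squares fit (an assumption for illustration; the
# abstract does not specify the actual estimation procedure).
model = LinearRegression().fit(judgments, performance)
empirical_composite = model.predict(judgments)
```

Either composite can then be entered as a single predictor of performance, which is how the six models described above can be compared on equal footing.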