Comparing Two Subjective Rating Scales Assessing Cognitive Load During Technology-Enhanced STEM Laboratory Courses

Cognitive Load Theory is considered universally applicable to all kinds of learning scenarios. However, there is no universal method for measuring cognitive load that suits different learning contexts or target groups; instead, there is a great variety of assessment approaches. Particularly common are subjective rating scales, which even allow the three assumed types of cognitive load to be measured in a differentiated way. Although these scales have proven effective for various learning tasks, they might not be an optimal fit for the learning demands of specific complex environments such as technology-enhanced STEM laboratory courses. The aim of this research was therefore to examine and compare existing rating scales in terms of validity for this learning context and to identify options for adaptation, if necessary. For the present study, the two most common subjective rating scales known to differentiate between load types (the Cognitive Load Scale by Leppink et al. and the Naïve Rating Scale by Klepsch et al.) were slightly adapted to the context of learning through structured hands-on experimentation, in which elements such as measurement data, experimental setups, and experimental tasks affect knowledge acquisition. N = 95 engineering students performed six experiments on basic electric circuits in which they had to explore fundamental relationships between physical quantities based on observed data. Immediately after experimentation, students answered both adapted scales. Several indicators of validity were analyzed, addressing the scales' internal structure and their relation to other variables such as group allocation, since participants were randomly assigned to two conditions with contrasting spatial arrangements of the measurement data. For the given data set, the intended three-factorial structure could not be confirmed, and most of the a priori defined subscales showed insufficient internal consistency. A multitrait-multimethod analysis was conducted to assess convergent and discriminant evidence between the scales, but neither could be confirmed sufficiently. The two contrasted experimental conditions were expected to result in different ratings for extraneous load, which was detected by only one of the adapted scales. As a further step, two new scales were assembled from the overall item pool based on the given data set. They revealed a three-factorial structure in accordance with the three types of load and seem to be promising new tools, although their subscales for extraneous load still suffer from low reliability scores.
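The abstract refers to the internal consistency of the a priori defined subscales. As a minimal illustration of what such a check involves, the sketch below computes Cronbach's alpha for a single rating subscale on synthetic data; the respondent count, item count, and 1-7 response range are illustrative assumptions and do not reproduce the study's actual data or analysis pipeline.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a rating matrix of shape (n_respondents, n_items)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Illustrative data only: 95 respondents answer a 3-item subscale on a 1-7 scale.
rng = np.random.default_rng(seed=42)
latent = rng.normal(size=(95, 1))                # shared factor driving all items
noise = rng.normal(scale=1.0, size=(95, 3))      # item-specific noise
ratings = np.clip(np.round(4 + latent + noise), 1, 7)
print(f"Cronbach's alpha: {cronbach_alpha(ratings):.2f}")
```

Commonly cited reliability conventions (e.g., alpha of at least .70) give context for the abstract's mention of insufficient internal consistency and low reliability scores; the criteria actually applied in the study are described in the full text.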

Metadata
Author:Michael Thees, Sebastian Kapp, Kristin Altmeyer, Sarah Malone, Roland Brünken, Jochen Kuhn
URN:urn:nbn:de:hbz:386-kluedo-64855
ISSN:2504-284X
Parent Title (English):Frontiers in Education
Publisher:Frontiers Media SA
Place of publication:Lausanne, Switzerland
Document Type:Article
Language of publication:English
Date of Publication (online):2021/07/14
Year of first Publication:2021
Publishing Institution:Technische Universität Kaiserslautern
Date of Publication (Server):2021/07/23
Tag:STEM laboratories; cognitive load; differential measurement; multitrait–multimethod analysis; rating scale; split-attention effect; validity
Issue:6, 14 July 2021
Page Number:16
Source:https://doi.org/10.3389/feduc.2021.705551
Faculties / Organisational entities:Kaiserslautern - Fachbereich Physik
DDC-Classification:1 Philosophy and psychology / 150 Psychology
5 Natural sciences and mathematics / 500 Natural sciences
Collections:Open-Access-Publikationsfonds
Licence (German):Zweitveröffentlichung (secondary publication)