The Predictive Power of Eye-Tracking Data in an Interactive AR Learning Environment
David Dzsotjan, Kim Ludwig-Petsch, Sergey Mukhametov, Shoya Ishimaru, Stefan Küchemann and Jochen Kuhn.
Learning through embodiment is a promising concept, potentially capable of removing many layers of abstraction that hinder the learning process. Walk the Graph, our HoloLens 2-based AR application, provides an inquiry-based learning setting for understanding graphs through the full-body movement of the user. In this paper, as part of our ongoing work to build an AI framework that quantifies and predicts the user's learning gain, we examine the predictive potential of gaze data collected during app usage. To classify users into groups with different learning gains, we construct a map of areas of interest (AOIs) from the gaze data itself. Subsequently, using a sliding-window approach, we extract engineered features from the collected in-app data as well as the gaze data. Our experiments show that a Support Vector Machine with selected features achieved the highest F1 score (0.658; baseline: 0.251) compared to other approaches, including a K-Nearest Neighbors and a Random Forest classifier, although in each case the lion's share of the predictive power is provided by the gaze-based features.
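The sliding-window feature extraction described above can be sketched as follows. This is a minimal illustration only: the window length, stride, and feature set (mean, standard deviation, range) are assumptions for the example, not the parameters or features used in the paper.

```python
# Hedged sketch of sliding-window feature engineering over a 1-D gaze signal.
# Window size, stride, and the chosen statistics are illustrative assumptions.
from statistics import mean, pstdev

def sliding_window_features(signal, window=5, stride=2):
    """Return one (mean, std, range) feature tuple per window."""
    features = []
    for start in range(0, len(signal) - window + 1, stride):
        chunk = signal[start:start + window]
        features.append((mean(chunk), pstdev(chunk), max(chunk) - min(chunk)))
    return features

# Hypothetical horizontal gaze coordinates sampled over time.
gaze_x = [0.1, 0.2, 0.15, 0.3, 0.25, 0.4, 0.35, 0.5, 0.45, 0.6]
feats = sliding_window_features(gaze_x)
```

Each window yields one feature vector; in a pipeline like the one described, such vectors (from both gaze and in-app data) would then be fed to a classifier such as an SVM.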
Effects of Counting Seconds in the Mind while Reading
Pramod Vadiraja, Jayasankar Santhosh, Hanane Moulay, Andreas Dengel and Shoya Ishimaru.
In cognitive psychology, attention and distraction are two phenomena that do not always harmonize with each other. Nowadays, with the vast amount of information potentially available to us, it has become challenging to avoid distraction and remain attentive during an activity. In this work, we describe a way to control distraction during reading activities. We start with an experiment measuring participants' reading behaviors, which leads to further analysis of how distractions affect readers' capabilities. We then attempt to statistically model the participants' cognitive states using the data from our experiment. Finally, we propose approaches to recognize two cognitive states (interest and distraction) under conditions with and without distractors.