Identifying student struggle by analyzing facial expressions during asynchronous video lecture viewing: Towards an automated tool to support instructors

Adam Linson, Yucheng Xu, Andrea R English, Robert B Fisher

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

The widespread shift in higher education from in-person instruction to pre-recorded video lectures means that many instructors have lost access to real-time student feedback for the duration of any given lecture (a ‘sea of faces’ that express struggle, comprehension, etc.). We hypothesized that this feedback could be partially restored by analyzing student facial movement data gathered during recorded lecture viewing and visualizing it on a common lecture timeline. Our approach builds on computer vision research on engagement and affect in facial expression, and on education research on student struggle. Here, we focus on individual student struggle (the effortful attempt to grasp new concepts and ideas) and its group-level visualization as student feedback to support human instructors. Research suggests that instructor-supported student struggle can help students develop conceptual understanding, while unsupported struggle can lead to disengagement. Studies of online learning in higher education have found that when students struggle with recorded video lecture content, questions and confusion often remain unreported and thus unsupported by instructors. In a pilot study, we sought to identify group-level student struggle by analyzing individual student facial movement during asynchronous video lecture viewing and mapping cohort data to annotated lecture segments (e.g. when a new concept is introduced). We gathered real-time webcam data of 10 student participants and their self-paced intermittent click feedback on personal struggle state, along with retrospective self-reports. We analyzed participant video with computer vision techniques to identify facial movement and correlated the data with independent human observer inferences about struggle-related states. We plotted all participants’ data (computer vision analysis, self-report, observer annotation) along the lecture timeline. The visualization exposed group-level struggle patterns in relation to lecture content, which could help instructors identify content areas where students need additional support, e.g. through student-centered interventions or lecture revisions.
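The group-level aggregation the abstract describes can be illustrated with a minimal sketch. This is not the authors' code; the function names, the binary per-bin struggle flags, and the 50% flagging threshold are all illustrative assumptions about how per-student signals (from computer vision analysis, clicks, or observer annotation) might be combined along the lecture timeline:

```python
# Hypothetical sketch of cohort-level struggle aggregation on a lecture timeline.
# Each student's signal is a list of 0/1 struggle flags, one per lecture time bin
# (e.g. one bin per annotated lecture segment). All names and the 0.5 threshold
# are assumptions for illustration, not the paper's implementation.

def struggle_fraction(signals, n_bins):
    """Fraction of the cohort flagged as struggling in each time bin."""
    return [
        sum(s[b] for s in signals.values()) / len(signals)
        for b in range(n_bins)
    ]

def flag_segments(fractions, threshold=0.5):
    """Indices of bins where at least `threshold` of the cohort struggles."""
    return [i for i, f in enumerate(fractions) if f >= threshold]

# Toy cohort of 4 students over 5 lecture bins.
signals = {
    "s1": [0, 1, 1, 0, 0],
    "s2": [0, 1, 0, 0, 0],
    "s3": [0, 1, 1, 0, 1],
    "s4": [0, 0, 1, 0, 0],
}
fractions = struggle_fraction(signals, 5)
print(fractions)                  # [0.0, 0.75, 0.75, 0.0, 0.25]
print(flag_segments(fractions))   # [1, 2]
```

In a tool like the one proposed, the flagged bin indices would be mapped back to annotated lecture segments (e.g. "new concept introduced"), pointing the instructor to content areas that may need revision or intervention.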
Original language: English
Title of host publication: Artificial Intelligence in Education: 23rd International Conference, AIED 2022, Durham, UK, July 27–31, 2022, Proceedings, Part I
Editors: Maria Mercedes Rodrigo, Noburu Matsuda, Alexandra I Cristea, Vania Dimitrova
Publisher: Springer, Cham
Pages: 53-65
Number of pages: 12
ISBN (Electronic): 978-3-031-11644-5
ISBN (Print): 978-3-031-11643-8
DOIs
Publication status: Published - 27 Jul 2022
Event: The 23rd International Conference on Artificial Intelligence in Education 2022 - Durham, United Kingdom
Duration: 27 Jul 2022 – 31 Jul 2022
Conference number: 23
https://aied2022.webspace.durham.ac.uk/

Publication series

Name: Lecture Notes in Computer Science
Publisher: Springer Cham
Volume: 13355
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: The 23rd International Conference on Artificial Intelligence in Education 2022
Abbreviated title: AIED 2022
Country/Territory: United Kingdom
City: Durham
Period: 27/07/22 – 31/07/22

Keywords / Materials (for Non-textual outputs)

  • video analysis
  • data visualization
  • facial expression
  • student struggle
  • reflective teaching
  • human-centered computing
