Description
Lecture recording offers new opportunities for students to interact with material taught in classes, and has been shown to be a versatile learning resource. However, recordings are usually offered as basic 50-minute lectures, with little support for searching the content or retrieving information quickly. Learning from such recordings is like learning from a textbook with no table of contents, headings, or bookmarks. This is in direct contrast to contemporary media consumption, such as on YouTube, which is typically through short, captioned, focused content, presented as part of a metadata-enhanced ‘channel’ containing descriptions, comments, and recommendations to related content.

Technological solutions for generating searchable summaries or captions include speech transcription, character recognition of slide content, and manual captioning, with the latter being time-intensive and expensive. Crowdsourcing is an alternative approach, allowing students to contribute their own summaries and to review and correct each other’s work [1]. Indeed, commercial solutions combining both techniques are available, such as Synote (“Synchronised notes”, http://synote.com/).
Beyond captioning, topic-based segmentation of classroom videos has been investigated, based on content similarities across the video; more pragmatic solutions include automatic segmentation using key-frame templates.
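To illustrate the general idea of similarity-based segmentation (not the specific methods reviewed here), the following is a minimal TextTiling-style sketch: the lecture transcript is split into fixed-size windows, adjacent windows are compared with TF-IDF cosine similarity, and topic boundaries are proposed where similarity dips. The window size and threshold are illustrative assumptions.

```python
# Minimal sketch: topic boundaries from dips in lexical similarity between
# adjacent transcript windows. Parameters are illustrative, not from the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def segment_transcript(sentences, window=10, depth_threshold=0.15):
    """Return sentence indices where a new topic is likely to start."""
    # Group sentences into fixed-size blocks of text.
    blocks = [" ".join(sentences[i:i + window])
              for i in range(0, len(sentences), window)]
    if len(blocks) < 3:
        return []

    # Represent each block as a TF-IDF vector.
    vectors = TfidfVectorizer(stop_words="english").fit_transform(blocks)

    # Cosine similarity between each pair of adjacent blocks.
    sims = [cosine_similarity(vectors[i], vectors[i + 1])[0, 0]
            for i in range(len(blocks) - 1)]

    # Place a boundary where similarity drops well below its neighbours,
    # i.e. where the lecturer appears to move on to a new topic.
    boundaries = []
    for i in range(1, len(sims) - 1):
        depth = (sims[i - 1] - sims[i]) + (sims[i + 1] - sims[i])
        if depth > depth_threshold:
            boundaries.append((i + 1) * window)
    return boundaries
```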
This paper presents a review of the state of the art in automatic and crowdsourced captioning and segmentation of lectures, and how these techniques enable students to access information quickly while also improving accessibility. We present results from a recent Principal’s Teaching Award Scheme (PTAS) project considering these issues and the benefits to our students.
Period | 19 Jun 2019
---|---
Event title | University of Edinburgh Learning and Teaching Conference
Event type | Conference
Location | Edinburgh, United Kingdom
Degree of Recognition | Local