Visual, Laughter, Applause and Spoken Expression Features for Predicting Engagement Within TED Talks

Fasih Haider, Fahim A. Salim, Saturnino Luz, Carl Vogel, Owen Conlan, Nick Campbell

Research output: Contribution to conference › Paper

Abstract / Description of output

There is an enormous amount of audio-visual content available on-line in the form of talks and presentations, and prospective users face difficulties in finding the content that is right for them. Automatic detection of interesting (engaging vs. non-engaging) content can help users find videos according to their preferences, and can also support recommendation and personalised video segmentation systems. This paper presents a study of engagement based on 1,338 TED talks rated by on-line viewers (users). It proposes novel models to predict these viewers' engagement using high-level visual features (camera angles), the audience's laughter and applause, and the presenter's spoken expressions. The results show that these features contribute to the prediction of user engagement in these talks. Moreover, identifying engaging spoken expressions can help a system summarise TED talks (video summarisation) and provide presenters with feedback on their spoken expressions during talks.
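As a rough illustration of the kind of prediction task the abstract describes, the sketch below trains a classifier to label talks as engaging vs. non-engaging from per-talk feature counts. The feature names, the synthetic data, and the choice of random-forest classifier are assumptions for illustration only; they are not the authors' actual feature set, labels, or modelling pipeline.

```python
# Hypothetical sketch of engagement prediction from per-talk features.
# All data here is synthetic; the real study used 1,338 TED talks with
# engagement labels derived from on-line viewer ratings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_talks = 200  # placeholder sample size

# Illustrative per-talk features: camera-angle changes, laughter events,
# applause events, and spoken-expression counts.
X = rng.poisson(lam=[30, 5, 3, 8], size=(n_talks, 4)).astype(float)

# Illustrative binary labels: engaging (1) vs. non-engaging (0).
y = rng.integers(0, 2, size=n_talks)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"Mean cross-validated accuracy: {scores.mean():.2f}")
```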
Original language: English
Pages: 2381-2385
Number of pages: 5
DOIs
Publication status: Published - 2017
Event: Interspeech 2017 - Stockholm, Sweden
Duration: 20 Aug 2017 – 24 Aug 2017
http://www.interspeech2017.org/

Conference

Conference: Interspeech 2017
Country/Territory: Sweden
City: Stockholm
Period: 20/08/17 – 24/08/17
Internet address
