Just-in-time prepared captioning for live transmissions

Matthew N Simpson, Jonathan Barrett, Peter Bell, Steve Renals

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Latency remains one of the most significant factors in the audience’s perception of quality in live-originated TV captions for the Deaf and Hard of Hearing.
Once all prepared script material has been shared between the programme production team and the captioners, pre-recorded video content remains a significant challenge – particularly ‘packages’ for transmission as part of a news broadcast. These video clips are usually published only just before – or even during – their intended programme, providing little opportunity for thorough preparation.
This paper presents an automated solution based on cutting-edge developments in Automatic Speech Recognition research, the benefits of context-tuned models, and the practical application of Machine Learning across large corpora of data – namely many hours of accurately captioned broadcast news programmes. The challenges in facilitating collaboration between academic partners, broadcasters and technology suppliers are explored, as are the technical approaches used to create the recognition and punctuation models, the testing and refinement required to transform raw automated transcription into broadcast captions, and the methodologies for introducing the technology into a live production environment.
Original language: English
Title of host publication: IBC 2016 Conference
Place of publication: Amsterdam, Netherlands
Number of pages: 9
ISBN (electronic): 978-1-78561-343-2
Publication status: Published - 12 Sep 2016
Event: IBC 2016 Conference - Amsterdam, Netherlands
Duration: 8 Sep 2016 – 12 Sep 2016


Conference: IBC 2016 Conference
