Enhancing VVC with deep learning based multi-frame post-processing

Duolikun Danier, Chen Feng, Fan Zhang, David Bull

Research output: Contribution to conference › Paper › peer-review

Abstract / Description of output

This paper describes a CNN-based multi-frame post-processing approach built on a perceptually inspired Generative Adversarial Network architecture, CVEGAN. The method has been integrated with the Versatile Video Coding Test Model (VTM) 15.2 to enhance the visual quality of the final reconstructed content. Evaluation results on the CLIC 2022 validation sequences show consistent coding gains over the original VVC VTM at the same bitrates when assessed by PSNR. The integrated codec has been submitted to the Challenge on Learned Image Compression (CLIC) 2022 (video track) under the team name BVI_VC.
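The paper itself does not include code. As a toy illustration of the multi-frame post-processing idea described above, the sketch below fuses a decoded frame with its temporal neighbours; the fixed weighted average is a stand-in for the trained CNN (CVEGAN) used in the actual method, and all names and weights here are illustrative assumptions.

```python
import numpy as np

def enhance_frame(decoded_frames, t, weights=(0.25, 0.5, 0.25)):
    """Toy multi-frame post-processing: combine the decoded frame at
    index t with its temporal neighbours.  In the real system, a
    trained CNN replaces this fixed weighted average (weights here
    are placeholders, not learned parameters)."""
    prev_f = decoded_frames[max(t - 1, 0)]                        # clamp at sequence start
    cur = decoded_frames[t]
    nxt = decoded_frames[min(t + 1, len(decoded_frames) - 1)]     # clamp at sequence end
    stacked = np.stack([prev_f, cur, nxt], axis=0).astype(np.float64)
    out = np.tensordot(np.asarray(weights, dtype=np.float64), stacked, axes=1)
    return np.clip(out, 0.0, 255.0)                               # keep valid 8-bit range

# usage: three constant 2x2 grayscale "frames"
frames = [np.full((2, 2), v, dtype=np.float64) for v in (100, 120, 140)]
enhanced = enhance_frame(frames, 1)  # fuses frames 0, 1 and 2
```

A real post-processing network would also operate per coding configuration and quality level; this sketch only shows the temporal-fusion structure of the input.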
Original language: English
Pages: 1-4
Number of pages: 4
DOIs
Publication status: Published - 19 Jun 2022
Event: 5th Workshop and Challenge on Learned Image Compression - Online
Duration: 19 Jun 2022 – 19 Jun 2022

Workshop

Workshop: 5th Workshop and Challenge on Learned Image Compression
Period: 19/06/22 – 19/06/22
