Exploring Data Augmentation for Code Generation Tasks

Pinzhen Chen, Gerasimos Lampouras

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Advances in natural language processing, such as transfer learning from pre-trained language models, have impacted how models are trained for programming language tasks as well. Previous research primarily explored code pre-training and expanded it through multi-modality and multi-tasking, yet the data for downstream tasks remain modest in size. Focusing on data utilization for downstream tasks, we propose and adapt augmentation methods that yield consistent improvements in code translation and summarization by up to 6.9% and 7.5%, respectively. Further analysis suggests that our methods work orthogonally and show benefits in output code style and numeric consistency. We also discuss test data imperfections.
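
The abstract does not spell out the augmentation methods themselves. As a generic, hypothetical illustration of what data augmentation for code can look like (not necessarily the technique used in this paper), the Python sketch below creates semantics-preserving variants of training examples by renaming identifiers; the class and function names are placeholders chosen here for illustration.

# Hypothetical sketch: identifier-renaming augmentation for code data.
# Not the paper's method; it only illustrates generating extra training
# examples by applying a semantics-preserving transform to source code.
import ast

class RenameIdentifiers(ast.NodeTransformer):
    """Rename variables and parameters to placeholders (v0, v1, ...)."""

    def __init__(self):
        self.mapping = {}

    def _fresh(self, name):
        # Assign each distinct identifier a stable placeholder name.
        if name not in self.mapping:
            self.mapping[name] = f"v{len(self.mapping)}"
        return self.mapping[name]

    def visit_Name(self, node):
        node.id = self._fresh(node.id)
        return node

    def visit_arg(self, node):
        # Also rename function parameters so signatures stay consistent.
        node.arg = self._fresh(node.arg)
        return node

def augment(source: str) -> str:
    """Return a renamed variant of `source` (assumed Python, Python 3.9+)."""
    tree = RenameIdentifiers().visit(ast.parse(source))
    return ast.unparse(tree)

if __name__ == "__main__":
    original = "def add(a, b):\n    total = a + b\n    return total"
    print(augment(original))
    # -> def add(v0, v1):\n    v2 = v0 + v1\n    return v2

Each (augmented code, original target) pair enlarges a downstream training set without new annotation; a real implementation would additionally skip builtins, globals, and attribute names, which this self-contained sketch ignores.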
Original language: English
Title of host publication: Findings of the Association for Computational Linguistics: EACL 2023
Editors: Andreas Vlachos, Isabelle Augenstein
Place of Publication: Dubrovnik, Croatia
Publisher: Association for Computational Linguistics
Pages: 1542-1550
Number of pages: 9
ISBN (Electronic): 978-1-959429-47-0
DOIs
Publication status: Published - 1 May 2023
Event: The 17th Conference of the European Chapter of the Association for Computational Linguistics - Valamar Lacroma, Dubrovnik, Croatia
Duration: 2 May 2023 – 6 May 2023
Conference number: 17
https://2023.eacl.org/

Conference

Conference: The 17th Conference of the European Chapter of the Association for Computational Linguistics
Abbreviated title: EACL 2023
Country/Territory: Croatia
City: Dubrovnik
Period: 2/05/23 – 6/05/23
Internet address: https://2023.eacl.org/
