Abstract / Description of output
Learning meaningful representations for chirographic drawing data such as sketches, handwriting, and flowcharts is a gateway to understanding and emulating human creative expression. Although such data are inherently continuous in time, existing works have treated them as discrete-time sequences, disregarding their true nature. In this work, we model such data as continuous-time functions and learn compact representations using Neural Ordinary Differential Equations. To this end, we introduce the first continuous-time Seq2Seq model and demonstrate some remarkable properties that set it apart from traditional discrete-time analogues. We also address practical challenges for such models, introducing a family of parameterized ODE dynamics and a continuous-time data augmentation scheme particularly suited to the task. Our models are validated on several datasets, including VectorMNIST, DiDi and Quick, Draw!.
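To illustrate the core idea of decoding a drawing as a continuous-time function, the following is a minimal sketch of latent ODE integration: a latent code seeds learned dynamics that are integrated over a continuous time grid, and 2-D pen positions are read out from the latent trajectory. This is a simplified illustration only, not the paper's architecture; the random linear-`tanh` dynamics, the fixed-step Euler solver, and the first-two-dimensions readout are all assumptions made for brevity (a real Neural ODE would use a trained network and an adaptive solver).

```python
import numpy as np

def euler_odeint(f, y0, ts):
    """Fixed-step Euler integration of dy/dt = f(t, y).
    A simple stand-in for the adaptive solvers used in Neural ODEs."""
    ys = [y0]
    for t0, t1 in zip(ts[:-1], ts[1:]):
        y = ys[-1]
        ys.append(y + (t1 - t0) * f(t0, y))
    return np.stack(ys)

rng = np.random.default_rng(0)

# Hypothetical latent code, as if produced by a sequence encoder.
z = rng.normal(size=8)

# Placeholder dynamics: a random linear map with a tanh nonlinearity,
# standing in for a trained neural network f_theta(t, h).
W = 0.1 * rng.normal(size=(8, 8))
def dynamics(t, h):
    return np.tanh(W @ h)

ts = np.linspace(0.0, 1.0, 50)        # continuous time grid in [0, 1]
traj = euler_odeint(dynamics, z, ts)  # latent trajectory, shape (50, 8)
pen_xy = traj[:, :2]                  # read out 2-D pen positions
print(pen_xy.shape)                   # (50, 2)
```

Because the decoded trajectory is a function of continuous time, it can be sampled at any resolution simply by changing `ts`, which is one practical difference from discrete-time sequence decoders.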
| Original language | English |
|---|---|
| Title of host publication | International Conference on Learning Representations (ICLR 2022) |
| Number of pages | 16 |
| Publication status | Published - 25 Apr 2022 |
| Event | Tenth International Conference on Learning Representations 2022, Virtual Conference. Duration: 25 Apr 2022 → 29 Apr 2022. Conference number: 10. https://iclr.cc/ |
Conference
| Conference | Tenth International Conference on Learning Representations 2022 |
|---|---|
| Abbreviated title | ICLR 2022 |
| Period | 25/04/22 → 29/04/22 |
| Internet address | https://iclr.cc/ |
Keywords / Materials (for Non-textual outputs)
- Chirography
- Sketch
- Free-form
- Neural ODE