Abstract
Despite the development of user-friendly interfaces for modeling garments and putting them onto characters, preparing a character dressed in multiple layers of garments can be very time-consuming and tedious. In this paper, we propose a novel scanning-based solution for modeling and animating characters wearing multiple layers of clothes. This is achieved by making use of real clothes and human bodies. We first scan the naked body of a subject with an RGBD camera and fit a statistical body model to the scanned data, which results in a skinned articulated model of the subject. The subject is then asked to put on one garment after another, and the articulated model, dressed up to the previous layer, is fitted to the newly scanned data. The new garment is segmented in a semi-automatic fashion and added as an additional layer to the multi-layer garment model. At runtime, the skinned character is driven by motion capture data, and the multi-layer garment model is animated by blending movements computed by physical simulation with those produced by linear blend skinning, so that the cloth preserves its shape while exhibiting realistic physical motion. We present results in which the character wears multiple layers of garments, including a shirt, a coat, and a skirt. Our framework can be useful for preparing and animating dressed characters for computer games and films.
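The runtime blending of simulated and skinned garment motion described above can be illustrated with a short sketch. The snippet below is a minimal NumPy illustration under assumed interfaces, not the authors' implementation; the function and parameter names (`lbs_positions`, `blended_garment`, `blend_weight`, the per-vertex bone weights) are hypothetical placeholders.

```python
# Minimal sketch (assumptions, not the paper's code): blend per-vertex garment
# positions from a cloth simulation with positions from linear blend skinning (LBS).
import numpy as np

def lbs_positions(rest_verts, skinning_weights, bone_transforms):
    """Linear blend skinning: v_i = sum_j w_ij * (T_j @ v_i)."""
    n = rest_verts.shape[0]
    homo = np.hstack([rest_verts, np.ones((n, 1))])      # (n, 4) homogeneous coords
    skinned = np.zeros_like(rest_verts)
    for j, T in enumerate(bone_transforms):               # T: (4, 4) bone transform
        skinned += skinning_weights[:, j:j + 1] * (homo @ T.T)[:, :3]
    return skinned

def blended_garment(rest_verts, skinning_weights, bone_transforms,
                    sim_verts, blend_weight=0.5):
    """Blend simulated positions with skinned positions per vertex.

    blend_weight = 1 uses pure physical simulation; 0 uses pure LBS,
    which keeps the garment close to its scanned rest shape.
    """
    skinned = lbs_positions(rest_verts, skinning_weights, bone_transforms)
    return blend_weight * sim_verts + (1.0 - blend_weight) * skinned
```

In such a scheme the blend weight would typically vary per vertex or per garment region, so that loose parts (e.g., a skirt hem) follow the simulation while tight-fitting parts stay anchored to the skinned body.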
Original language | English |
---|---|
Pages (from-to) | 1–9 |
Number of pages | 9 |
Journal | The Visual Computer |
Volume | 33 |
DOIs | |
Publication status | Published - 9 May 2017 |
Event | Computer Graphics International 2017, Keio University, Yokohama, Japan, 27 Jun 2017 – 30 Jun 2017 (http://fj.ics.keio.ac.jp/cgi17/) |
Keywords
- Cloth animation
- 3D scanning
- Dressed character