TY - JOUR
T1 - CloTH-VTON+
T2 - Clothing Three-Dimensional Reconstruction for Hybrid Image-Based Virtual Try-ON
AU - Minar, Matiur Rahman
AU - Tuan, Thai Thanh
AU - Ahn, Heejune
N1 - Publisher Copyright:
© 2013 IEEE.
PY - 2021
Y1 - 2021
N2 - Image-based virtual try-on (VTON) systems based on deep learning have attracted research and commercial interest. Although they are strong at blending the person and try-on clothing images and at synthesizing dis-occluded regions, their results for persons in complex poses are often unsatisfactory due to limitations in their geometric deformation and texture-preserving capacity. To address these challenges, we propose CloTH-VTON+, which seamlessly integrates image-based deep learning methods with the strength of a 3D model in shape deformation. Specifically, a fully automatic pipeline is developed for 3D clothing model reconstruction and deformation using a reference human model: first, the try-on clothing is matched to the target clothing regions of the simply shaped reference human model, and then the 3D clothing model is reconstructed. The reconstructed 3D clothing model produces a highly natural pose and shape transfer while retaining the clothing textures. A clothing refinement network further refines the alignment, eliminating the misalignment caused by errors in human pose estimation and 3D deformation. The deformed clothing images are combined using conditional generative networks to in-paint the dis-occluded areas and blend all parts. Experiments on an existing benchmark dataset demonstrate that CloTH-VTON+ generates higher-quality results than state-of-the-art VTON systems and CloTH-VTON. CloTH-VTON+ can be incorporated into extended applications such as multi-pose-guided and video VTON.
KW - 3D clothing reconstruction
KW - online fashion
KW - generative network model
KW - hybrid approach
KW - virtual try-on
UR - https://www.scopus.com/pages/publications/85101261250
DO - 10.1109/ACCESS.2021.3059701
M3 - Article
AN - SCOPUS:85101261250
SN - 2169-3536
VL - 9
SP - 30960
EP - 30978
JO - IEEE Access
JF - IEEE Access
M1 - 9354778
ER -