MetaAvatar: Learning Animatable Clothed Human Models from Few Depth Images
In this paper, we aim to create generalizable and controllable neural signed distance fields (SDFs) that represent clothed humans from monocular depth observations. Recent advances in deep learning, especially neural implicit representations, have enabled human shape reconstruction and controllable avatar generation from different sensor inputs. However, to generate realistic cloth deformations from novel input poses, watertight meshes or dense full-body scans are usually needed as inputs. Furthermore, due to the difficulty of effectively modeling pose-dependent cloth deformations for diverse body shapes and cloth types, existing approaches resort to per-subject/cloth-type optimization from scratch, which is computationally expensive. In contrast, we propose an approach that can quickly generate realistic clothed human avatars, represented as controllable neural SDFs, given only monocular depth images. We achieve this by using meta-learning to learn an initialization of a hypernetwork that predicts the parameters of neural SDFs.
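The two mechanisms named in the abstract — a hypernetwork that maps a pose to the weights of an SDF network, and a meta-learned initialization of that hypernetwork so it adapts quickly to a new subject — can be illustrated with a minimal sketch. This is not the authors' implementation: the layer sizes, the 72-dimensional pose input, the Reptile-style meta-update, and the L1 loss on sampled SDF values are all assumptions made for illustration.

```python
# Minimal sketch (assumptions, not the paper's code): a hypernetwork predicts
# the weights of a small SDF MLP from a pose code, and a Reptile-style outer
# loop meta-learns the hypernetwork's initialization across subjects.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

POSE_DIM, HIDDEN, SDF_HIDDEN = 72, 256, 64  # assumed sizes (72 ~ SMPL pose params)

class HyperSDF(nn.Module):
    """Hypernetwork: pose -> parameters of an SDF MLP mapping (x,y,z) -> distance."""
    def __init__(self):
        super().__init__()
        # Parameter count of the target SDF MLP: 3 -> 64 -> 64 -> 1, with biases.
        self.n_params = (3 * SDF_HIDDEN + SDF_HIDDEN) \
                      + (SDF_HIDDEN * SDF_HIDDEN + SDF_HIDDEN) \
                      + (SDF_HIDDEN + 1)
        self.net = nn.Sequential(
            nn.Linear(POSE_DIM, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, self.n_params),
        )

    def forward(self, pose, pts):
        # Predict one flat parameter vector, then carve it into weights/biases.
        p, i = self.net(pose), 0
        def take(n, shape):
            nonlocal i
            out = p[i:i + n].view(shape); i += n
            return out
        w1 = take(3 * SDF_HIDDEN, (SDF_HIDDEN, 3)); b1 = take(SDF_HIDDEN, (SDF_HIDDEN,))
        w2 = take(SDF_HIDDEN * SDF_HIDDEN, (SDF_HIDDEN, SDF_HIDDEN)); b2 = take(SDF_HIDDEN, (SDF_HIDDEN,))
        w3 = take(SDF_HIDDEN, (1, SDF_HIDDEN)); b3 = take(1, (1,))
        h = F.relu(F.linear(pts, w1, b1))
        h = F.relu(F.linear(h, w2, b2))
        return F.linear(h, w3, b3)  # signed distances, shape (N, 1)

def reptile_step(model, tasks, inner_steps=5, inner_lr=1e-4, meta_lr=0.1):
    """One Reptile meta-update: fine-tune from the current initialization on
    each task (subject), then move the initialization toward the average of
    the fine-tuned weights."""
    init = copy.deepcopy(model.state_dict())
    deltas = {k: torch.zeros_like(v) for k, v in init.items()}
    for pose, pts, sdf_gt in tasks:  # each task: one subject's SDF samples
        model.load_state_dict(init)
        opt = torch.optim.Adam(model.parameters(), lr=inner_lr)
        for _ in range(inner_steps):
            loss = F.l1_loss(model(pose, pts), sdf_gt)
            opt.zero_grad(); loss.backward(); opt.step()
        for k, v in model.state_dict().items():
            deltas[k] += (v - init[k]) / len(tasks)
    model.load_state_dict({k: init[k] + meta_lr * deltas[k] for k in init})

# Toy usage: two "subjects", each a (pose, query points, SDF targets) tuple.
tasks = [(torch.randn(POSE_DIM), torch.randn(512, 3), torch.randn(512, 1))
         for _ in range(2)]
model = HyperSDF()
reptile_step(model, tasks)
```

After meta-training, adapting to a new subject amounts to running only the inner loop from the learned initialization, which is why fine-tuning is fast compared to per-subject optimization from scratch.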