Character Motion Synthesis: A Survey

Authors

  • Jingtong Xu

DOI:

https://doi.org/10.54097/rc042447

Keywords:

Motion Synthesis, Survey, Motion Generation.

Abstract

Character motion synthesis is a major cost in film, game, and design production when carried out with traditional methods. However, newly developed approaches can generate motion far more efficiently and economically than these traditional pipelines. This paper presents an overview of such novel motion synthesis methods, assembling and summarizing separate articles on Audio-Driven and Music-Driven Motion Synthesis, Generative Models and Frameworks for Motion Synthesis, Human and Object Interaction, Character and Pet Motion Synthesis, Grasp and Hand-Object Interaction, and Motion Retargeting and Editing.




Published

23-11-2024

How to Cite

Xu, J. (2024). Character Motion Synthesis: A Survey. Highlights in Science, Engineering and Technology, 118, 171-178. https://doi.org/10.54097/rc042447